Statistical Parametric Speech Synthesis
Heiga Zen, Google
June 9th, 2014

Outline

• Background
  − HMM-based statistical parametric speech synthesis (SPSS)
    ◦ Flexibility
    ◦ Improvements
• Statistical parametric speech synthesis with neural networks
  − Deep neural network (DNN)-based SPSS
  − Deep mixture density network (DMDN)-based SPSS
  − Recurrent neural network (RNN)-based SPSS
• Summary


Text-to-speech as sequence-to-sequence mapping

• Automatic speech recognition (ASR):
  Speech (continuous time series) → Text (discrete symbol sequence)
• Machine translation (MT):
  Text (discrete symbol sequence) → Text (discrete symbol sequence)
• Text-to-speech synthesis (TTS):
  Text (discrete symbol sequence) → Speech (continuous time series)


Speech production process

[Figure: modulation of a carrier wave by speech information. Air flow drives a sound source (voiced: pulse train at the fundamental frequency; unvoiced: noise), which is shaped by the frequency transfer characteristics of the vocal tract to produce speech from text (concept).]


Typical flow of TTS system

TEXT
  → Text analysis (sentence segmentation, word segmentation, text normalization, part-of-speech tagging, pronunciation): discrete ⇒ discrete; NLP; "frontend"
  → Speech synthesis (prosody prediction, waveform generation): discrete ⇒ continuous; "backend"
  → SYNTHESIZED SPEECH

This talk focuses on the backend.


Concatenative speech synthesis

[Figure: unit selection. Candidate segments from the database are chosen to minimize target and concatenation costs.]

• Concatenate actual instances of speech from a database
• Large data + automatic learning → high-quality synthetic voices can be built automatically
• Single inventory per unit → diphone synthesis [1]
• Multiple inventories per unit → unit selection synthesis [2]


Statistical parametric speech synthesis (SPSS) [3]

[Figure: pipeline. Speech analysis extracts acoustic features y and text analysis extracts linguistic features x for model training; at synthesis time, text analysis extracts x, parameter generation produces ŷ from the trained model λ̂, and speech synthesis reconstructs the waveform.]

• Training
  − Extract linguistic features x & acoustic features y
  − Train acoustic model λ given (x, y):
      λ̂ = arg max_λ p(y | x, λ)
• Synthesis
  − Extract x from the text to be synthesized
  − Generate the most probable ŷ from λ̂:
      ŷ = arg max_y p(y | x, λ̂)
  − Reconstruct speech from ŷ

• Large data + automatic training → automatic voice building
• Parametric representation of speech → flexible to change its voice characteristics

Hidden Markov model (HMM) as its acoustic model → HMM-based speech synthesis system (HTS) [4]



HMM-based speech synthesis [4]

[Figure: system overview. Training part: excitation parameter extraction and spectral parameter extraction are applied to the speech signals in the SPEECH DATABASE; the resulting excitation & spectral parameters, together with labels, are used for training context-dependent HMMs & state duration models. Synthesis part: TEXT is converted to labels by text analysis, excitation & spectral parameters are generated from the HMMs, and excitation generation followed by a synthesis filter produces the SYNTHESIZED SPEECH.]



Source-filter model

[Figure: the source excitation part (pulse train / white noise → excitation e(n)) drives the vocal tract resonance part (linear time-invariant system h(n)) to produce speech x(n) = h(n) ∗ e(n).]

x(n) = h(n) ∗ e(n)
  ↓ Fourier transform
X(e^jω) = H(e^jω) E(e^jω)

H(e^jω) should be defined by HMM state-output vectors, e.g., mel-cepstrum, line spectral pairs.

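To make the source-filter picture concrete, here is a minimal numpy sketch that excites a toy single-resonance filter h(n) with a pulse train. The filter and all constants are illustrative; a real vocoder derives h(n) from spectral parameters such as mel-cepstrum or LSPs.

```python
import numpy as np

fs = 16000                         # sampling rate (Hz)
f0 = 120.0                         # fundamental frequency for the voiced case (Hz)

# Source e(n): pulse train for voiced speech, white noise for unvoiced
period = int(fs / f0)
e = np.zeros(fs)                   # one second of excitation
e[::period] = 1.0                  # voiced: pulses at the pitch period
# e = np.random.randn(fs)          # unvoiced alternative: white noise

# Filter h(n): toy vocal-tract impulse response with a single resonance
n = np.arange(128)
h = np.exp(-n / 16.0) * np.cos(2 * np.pi * 800.0 * n / fs)

# Speech is the convolution x(n) = h(n) * e(n)
x = np.convolve(e, h)[:fs]
```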

Parametric models of speech signal

• Autoregressive (AR) model:
    H(z) = K / (1 − Σ_{m=1}^{M} c(m) z^{−m})
• Exponential (EX) model:
    H(z) = exp Σ_{m=0}^{M} c(m) z^{−m}

Estimate model parameters based on ML:
    ĉ = arg max_c p(x | c)

• p(x | c): AR model → linear predictive analysis [5]
• p(x | c): EX model → (ML-based) cepstral analysis [6]


Examples of speech spectra

[Figure: two speech spectra, log magnitude (dB) vs. frequency (0–5 kHz): (a) ML-based cepstral analysis; (b) linear prediction.]



Structure of state-output (observation) vectors

o_t:
• Spectrum part: mel-cepstral coefficients c_t, Δ mel-cepstral coefficients Δc_t, Δ² mel-cepstral coefficients Δ²c_t
• Excitation part: log F0 p_t, Δ log F0 Δp_t, Δ² log F0 Δ²p_t


Hidden Markov model (HMM)

[Figure: a 3-state left-to-right HMM with initial probability π_1, self-transitions a_11, a_22, a_33, forward transitions a_12, a_23, and state-output distributions b_1(o_t), b_2(o_t), b_3(o_t). An observation sequence O = o_1, o_2, o_3, o_4, o_5, …, o_T is aligned to a state sequence Q = 1, 1, 1, 1, 2, …, 3.]


Multi-stream HMM structure

b_j(o_t) = Π_{s=1}^{S} ( b_j^s(o_t^s) )^{w_s}

[Figure: the observation vector o_t is split into streams. Stream 1 (spectrum): c_t, Δc_t, Δ²c_t = o_t^1, modeled by b_j^1(o_t^1); streams 2–4 (excitation): p_t = o_t^2, Δp_t = o_t^3, Δ²p_t = o_t^4, modeled by b_j^2(o_t^2), b_j^3(o_t^3), b_j^4(o_t^4).]
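A minimal numpy sketch of this stream-wise output probability, assuming single-Gaussian streams; the stream dimensions, parameters, and weights below are hypothetical.

```python
import numpy as np

def diag_gauss(o, mu, var):
    """Diagonal-Gaussian density N(o; mu, var)."""
    return np.prod(np.exp(-0.5 * (o - mu) ** 2 / var) / np.sqrt(2 * np.pi * var))

def multi_stream_prob(streams, params, weights):
    """b_j(o_t) = prod_s b_j^s(o_t^s)^{w_s} with single-Gaussian streams."""
    p = 1.0
    for o_s, (mu, var), w in zip(streams, params, weights):
        p *= diag_gauss(np.atleast_1d(o_s), mu, var) ** w
    return p

# Stream 1: spectrum (c, delta c, delta^2 c); streams 2-4: p, delta p, delta^2 p
streams = [np.zeros(120), 5.0, 0.0, 0.0]
params = [(np.zeros(120), np.ones(120)), (5.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
b = multi_stream_prob(streams, params, weights=[1.0, 1.0, 1.0, 1.0])
```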


Training process

data & labels
 1. Compute variance floor (HCompV)
 2. Initialize CI-HMMs by segmental k-means (HInit)        (monophone: context-independent, CI)
 3. Reestimate CI-HMMs by EM algorithm (HRest & HERest)
 4. Copy CI-HMMs to CD-HMMs (HHEd CL)                      (fullcontext: context-dependent, CD)
 5. Reestimate CD-HMMs by EM algorithm (HERest)
 6. Decision tree-based clustering (HHEd TB)
 7. Reestimate CD-HMMs by EM algorithm (HERest)
 8. Untie parameter tying structure (HHEd UT)
 9. Estimate CD-dur. models from FB stats (HERest)
10. Decision tree-based clustering (HHEd TB)
→ estimated HMMs & estimated duration models


Context-dependent acoustic modeling

• {preceding, succeeding} two phonemes
• Position of current phoneme in current syllable
• # of phonemes at {preceding, current, succeeding} syllable
• {accent, stress} of {preceding, current, succeeding} syllable
• Position of current syllable in current word
• # of {preceding, succeeding} {stressed, accented} syllables in phrase
• # of syllables {from previous, to next} {stressed, accented} syllable
• Guess at part of speech of {preceding, current, succeeding} word
• # of syllables in {preceding, current, succeeding} word
• Position of current word in current phrase
• # of {preceding, succeeding} content words in current phrase
• # of words {from previous, to next} content word
• # of syllables in {preceding, current, succeeding} phrase
• ...

Impossible to have all possible models


Decision tree-based state clustering [7]

[Figure: context-dependent states (e.g., k-a+b, t-a+n, w-a+t, w-a+sil, gy-a+sil, w-a+sh, g-a+sil, gy-a+pau) are clustered by a binary decision tree with yes/no questions such as L=voice?, R=silence?, L="w"?, L="gy"?; the leaf nodes hold the synthesized (tied) states.]
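A toy sketch of how such context questions route models through the tree; the question functions, label format, and voiced-phoneme set below are illustrative, not the actual HTS question set.

```python
# Labels follow the figure's "left-center+right" pattern, e.g. "k-a+b".
VOICED = {"a", "e", "i", "o", "u", "b", "d", "g", "m", "n", "w", "z"}  # illustrative

def parse(label):
    left, rest = label.split("-")
    center, right = rest.split("+")
    return left, center, right

def q_l_voice(label):            # "L=voice?"
    return parse(label)[0] in VOICED

def q_r_silence(label):          # "R=silence?"
    return parse(label)[2] in {"sil", "pau"}

models = ["k-a+b", "t-a+n", "w-a+t", "w-a+sil", "gy-a+sil", "g-a+sil"]
yes_branch = [m for m in models if q_l_voice(m)]       # route models down the tree
no_branch = [m for m in models if not q_l_voice(m)]
```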


Stream-dependent tree-based clustering

[Figure: separate decision trees for mel-cepstrum and for F0.]

Spectrum & excitation can have different context dependency → build decision trees individually


State duration models [8]

[Figure: trellis over frames t = 1, …, T = 8 showing occupancy of state i from t0 through t1.]

Probability to enter state i at t0 then leave at t1 + 1:

χ_{t0,t1}(i) ∝ Σ_{j≠i} α_{t0−1}(j) a_{ji} · a_{ii}^{t1−t0} · Π_{t=t0}^{t1} b_i(o_t) · Σ_{k≠i} a_{ik} b_k(o_{t1+1}) β_{t1+1}(k)

→ estimate state duration models


Stream-dependent tree-based clustering

[Figure: an HMM with separate decision trees for the mel-cepstrum and F0 streams, plus a decision tree for the state duration model.]




Speech parameter generation algorithm [9]

Generate the most probable state outputs given the HMM and words:

ô = arg max_o p(o | w, λ̂)
  = arg max_o Σ_{∀q} p(o, q | w, λ̂)
  ≈ arg max_o max_q p(o, q | w, λ̂)
  = arg max_o max_q p(o | q, λ̂) P(q | w, λ̂)

Determine the best state sequence and outputs sequentially:

q̂ = arg max_q P(q | w, λ̂)
ô = arg max_o p(o | q̂, λ̂)


Best state sequence

[Figure: the 3-state left-to-right HMM aligned to the observation sequence O = o_1 … o_T; the best state sequence Q = 1, 1, 1, 1, 2, … implies state durations D (e.g., 4, 10, 5 frames for states 1, 2, 3).]


Best state outputs w/o dynamic features

[Figure: per-state means and variances; the generated trajectory follows the means.]

ô becomes a step-wise mean vector sequence


Using dynamic features

State output vectors include static & dynamic features:

o_t = [ c_tᵀ, Δc_tᵀ ]ᵀ ,   Δc_t = c_t − c_{t−1}

(c_t is M-dimensional, so o_t is 2M-dimensional.)

The relationship between static and dynamic features can be arranged as a linear transform o = W c over whole sequences:

[ …, c_{t−1}ᵀ, Δc_{t−1}ᵀ, c_tᵀ, Δc_tᵀ, c_{t+1}ᵀ, Δc_{t+1}ᵀ, … ]ᵀ = W [ …, c_{t−2}ᵀ, c_{t−1}ᵀ, c_tᵀ, c_{t+1}ᵀ, … ]ᵀ

where each block row of W is ( …, 0, I, 0, … ) for a static feature c_t and ( …, −I, I, 0, … ) for a dynamic feature Δc_t.


Speech parameter generation algorithm [9]

Introduce dynamic feature constraints:

ô = arg max_o p(o | q̂, λ̂)   subject to   o = W c

If the state-output distribution is a single Gaussian:

p(o | q̂, λ̂) = N(o; μ_q̂, Σ_q̂)

By setting ∂ log N(W c; μ_q̂, Σ_q̂) / ∂c = 0:

Wᵀ Σ_q̂⁻¹ W c = Wᵀ Σ_q̂⁻¹ μ_q̂

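A minimal numpy sketch of this generation step for a 1-dimensional static feature with one delta (Δc_t = c_t − c_{t−1}, as above); the per-frame means and variances below are hypothetical stand-ins for the state-aligned Gaussian statistics.

```python
import numpy as np

T = 5                                    # number of frames
rng = np.random.default_rng(0)
mu = rng.normal(size=2 * T)              # stacked state means for [c_t, delta c_t]
var = np.ones(2 * T)                     # stacked state variances (hypothetical)

# Build W (2T x T): row 2t selects c_t, row 2t+1 computes delta c_t = c_t - c_{t-1}
W = np.zeros((2 * T, T))
for t in range(T):
    W[2 * t, t] = 1.0                    # static row
    W[2 * t + 1, t] = 1.0                # dynamic row: +c_t
    if t > 0:
        W[2 * t + 1, t - 1] = -1.0       # dynamic row: -c_{t-1}

# Solve W^T Sigma^{-1} W c = W^T Sigma^{-1} mu for the smooth trajectory c
Sinv = np.diag(1.0 / var)
c = np.linalg.solve(W.T @ Sinv @ W, W.T @ Sinv @ mu)
```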

Speech parameter generation algorithm [9]

[Figure: structure of the linear system Wᵀ Σ_q̂⁻¹ W c = Wᵀ Σ_q̂⁻¹ μ_q̂. W stacks static rows (…, 1, 0, …) and delta rows (…, −1, 1, …) over frames c_1 … c_T; Σ_q̂⁻¹ is block-diagonal; μ_q̂ stacks the state means μ_q1 … μ_qT.]


Generated speech parameter trajectory

[Figure: static and dynamic means and variances per state, and the smooth generated trajectory c that satisfies the dynamic feature constraints.]



Waveform reconstruction

[Figure: generated excitation parameters (log F0 with V/UV) drive the pulse train / white noise excitation e(n); generated spectral parameters (cepstrum, LSP) define the linear time-invariant system h(n); the synthesized speech is x(n) = h(n) ∗ e(n).]


Synthesis filter

• Cepstrum → LMA filter
• Generalized cepstrum → GLSA filter
• Mel-cepstrum → MLSA filter
• Mel-generalized cepstrum → MGLSA filter
• LSP → LSP filter
• PARCOR → all-pole lattice filter
• LPC → all-pole filter


Characteristics of SPSS

• Advantages
  − Flexibility to change voice characteristics
    ◦ Adaptation
    ◦ Interpolation
  − Small footprint [10, 11]
  − Robustness [12]
• Drawback
  − Quality
• Major factors for quality degradation [3]
  − Vocoder (speech analysis & synthesis)
  − Acoustic model (HMM)
  − Oversmoothing (parameter generation)



Adaptation (mimicking voice) [13]

[Figure: an average-voice model is built from multiple training speakers via adaptive training, then adapted to target speakers.]

• Train average voice model (AVM) from training speakers using SAT
• Adapt the AVM to target speakers
• Requires only a small amount of data from the target speaker / speaking style → small cost to create new voices


Interpolation (mixing voice) [14, 15, 16, 17]

[Figure: a new voice is obtained from an HMM set λ_0 and representative HMM sets λ_1 … λ_4 with interpolation ratios I(λ_0, λ_k).]

• Interpolate representative HMM sets
• Can obtain new voices w/o adaptation data
• Eigenvoice / CAT / multiple regression → estimate representative HMM sets from data (a simple variant is sketched below)

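A minimal sketch of one simple interpolation scheme, linearly combining the Gaussian parameters of representative HMM sets; this is only one of several variants discussed in [14]–[17], and the example values are hypothetical.

```python
import numpy as np

def interpolate_hmm_gaussians(means, variances, ratios):
    """Linearly combine Gaussian parameters of HMM sets lambda_1..lambda_K
    with normalized interpolation ratios (a simple interpolation variant)."""
    a = np.asarray(ratios, dtype=float)
    a = a / a.sum()                       # normalize interpolation ratios
    mu = sum(ai * m for ai, m in zip(a, means))
    var = sum(ai * v for ai, v in zip(a, variances))
    return mu, var

# Mix two voices 70/30 for one state's output distribution
mu, var = interpolate_hmm_gaussians(
    means=[np.zeros(40), np.ones(40)],
    variances=[np.ones(40), 2 * np.ones(40)],
    ratios=[0.7, 0.3])
```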


Vocoding issues

• Simple pulse / noise excitation
  Difficult to model mixes of voiced/unvoiced sounds (e.g., voiced fricatives)
  [Figure: excitation switches hard between pulse train (voiced) and white noise (unvoiced).]
• Spectral envelope extraction
  Harmonic effects often cause problems
  [Figure: power spectrum (dB) over 0–8 kHz with harmonic ripple.]
• Phase
  Important but usually ignored


Better vocoding

• Mixed excitation linear prediction (MELP)
• STRAIGHT
• Multi-band excitation
• Harmonic + noise model (HNM)
• Harmonic / stochastic model
• LF model
• Glottal waveform
• Residual codebook
• ML excitation


Limitations of HMMs for acoustic modeling

• Piece-wise constant statistics
  Statistics do not vary within an HMM state
• Conditional independence assumption
  State output probability depends only on the current state
• Weak duration modeling
  State duration probability decreases exponentially with time

None of these hold for real speech


Better acoustic modeling

• Piece-wise constant statistics → dynamical models
  − Trended HMM
  − Polynomial segment model
  − Trajectory HMM
• Conditional independence assumption → graphical models
  − Buried Markov model
  − Autoregressive HMM
  − Trajectory HMM
• Weak duration modeling → explicit duration models
  − Hidden semi-Markov model


Oversmoothing

• Speech parameter generation algorithm
  − Dynamic feature constraints make generated parameters smooth
  − Often too smooth → sounds muffled

[Figure: spectrograms (0–8 kHz) of generated vs. natural speech; the generated spectrum lacks fine structure.]

• Why?
  − Details of the spectral (formant) structure disappear
  − Using a better acoustic model relaxes the issue, but not enough


Oversmoothing compensation

• Postfiltering
  − Mel-cepstrum
  − LSP
• Nonparametric approach
  − Conditional parameter generation
  − Discrete HMM-based speech synthesis
• Combine multiple-level statistics
  − Global variance (intra-utterance variance)
  − Modulation spectrum (intra-utterance frequency components)


Characteristics of SPSS

• Advantages
  − Flexibility to change voice characteristics
    ◦ Adaptation
    ◦ Interpolation / eigenvoice / CAT / multiple regression
  − Small footprint
  − Robustness
• Drawback
  − Quality
• Major factors for quality degradation [3]
  − Vocoder (speech analysis & synthesis)
  − Acoustic model (HMM) → Neural networks
  − Oversmoothing (parameter generation)




Linguistic → acoustic mapping

• Training
  Learn the relationship between linguistic & acoustic features
• Synthesis
  Map linguistic features to acoustic ones
• Linguistic features used in SPSS
  − Phoneme, syllable, word, phrase, utterance-level features
  − e.g., phone identity, POS, stress, # of words in a phrase
  − Around 50 different types, much more than in ASR (typically 3–5)

Effective modeling is essential

Heiga Zen

Statistical Parametric Speech Synthesis

June 9th, 2014

48 of 79

HMM-based acoustic modeling for SPSS [4]

[Figure: the acoustic space is partitioned by a decision tree of yes/no context questions.]

• Decision tree-clustered HMM with GMM state-output distributions


DNN-based acoustic modeling for SPSS [18]

[Figure: a feed-forward network with hidden layers h1, h2, h3 mapping linguistic features x to acoustic features y.]

• DNN represents the conditional distribution of y given x
• DNN replaces the decision trees and GMMs
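A minimal numpy sketch of such a feed-forward mapping. The layer dimensions, random weights, and output size are illustrative, not the paper's configuration, and training by backpropagation with an MSE loss is omitted.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
dims = [449, 1024, 1024, 1024, 1024, 127]   # linguistic in -> acoustic out (illustrative)
Ws = [rng.normal(0, 0.01, (m, n)) for m, n in zip(dims[:-1], dims[1:])]
bs = [np.zeros(n) for n in dims[1:]]

def dnn_forward(x):
    """Map a linguistic feature vector x to acoustic features y."""
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = sigmoid(h @ W + b)               # sigmoid hidden layers
    return h @ Ws[-1] + bs[-1]               # linear output layer

y = dnn_forward(rng.normal(size=449))
```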


Framework

[Figure: TEXT → text analysis → input feature extraction (binary & numeric linguistic features at each frame, plus duration and frame-position features from duration prediction) → input layer → hidden layers → output layer → statistics (mean & var) of the speech parameter vector sequence (spectral features, excitation features, V/UV feature) → parameter generation → waveform synthesis → SPEECH.]


Advantages of NN-based acoustic modeling

• Integrating feature extraction
  − Can model high-dimensional, highly correlated features efficiently
  − Layered architecture w/ non-linear operations → integrates feature extraction into acoustic modeling
• Distributed representation
  − Can be exponentially more efficient than a fragmented representation
  − Better representation ability with fewer parameters
• Layered hierarchical structure in speech production
  − concept → linguistic → articulatory → waveform

Statistical Parametric Speech Synthesis

June 9th, 2014

52 of 79


Framework

Is this new? … no
• NN [19]
• RNN [20]

What's the difference?
• More layers, data, computational resources
• Better learning algorithms
• Statistical parametric speech synthesis techniques


Experimental setup

Database              US English female speaker
Training / test data  33000 & 173 sentences
Sampling rate         16 kHz
Analysis window       25-ms width / 5-ms shift
Linguistic features   11 categorical features, 25 numeric features
Acoustic features     0–39 mel-cepstrum, log F0, 5-band aperiodicity, Δ, Δ²
HMM topology          5-state, left-to-right HSMM [21], MSD F0 [22], MDL [23]
DNN architecture      1–5 layers, 256/512/1024/2048 units/layer, sigmoid, continuous F0 [24]
Postprocessing        Postfiltering in cepstrum domain [25]


Example of speech parameter trajectories

[Figure: 5-th mel-cepstrum over ~500 frames for natural speech, HMM (α=1), and DNN (4×512); w/o grouping questions, with numeric contexts, silence frames removed.]


Subjective evaluations

Compared HMM-based systems with DNN-based ones with similar numbers of parameters:
• Paired comparison test
• 173 test sentences, 5 subjects per pair
• Up to 30 pairs per subject
• Crowd-sourced

HMM (α)      DNN (#layers × #units)   Neutral   p value   z value
15.8 (16)    38.5 (4 × 256)           45.7      < 10⁻⁶    −9.9
16.1 (4)     27.2 (4 × 512)           56.8      < 10⁻⁶    −5.1
12.7 (1)     36.6 (4 × 1024)          50.7      < 10⁻⁶    −11.5




Limitations of DNN-based acoustic modeling

[Figure: scatter of data samples in (y1, y2) with two modes; the NN prediction sits at the conditional mean, between the modes.]

• Unimodality
  − Humans can speak in different ways → one-to-many mapping
  − An NN trained with an MSE loss → approximates the conditional mean
• Lack of variance
  − DNN-based SPSS uses variances computed from all training data
  − The parameter generation algorithm utilizes variances

Linear output layer → mixture density output layer [26]


Mixture density network [26]

[Figure: a 1-dimensional, 2-mixture MDN; the network outputs w_1(x), w_2(x), μ_1(x), μ_2(x), σ_1(x), σ_2(x), which parameterize a 2-component GMM over y.]

Inputs of the activation function:
  z_j = Σ_{i=1}^{4} h_i w_ij

• Weights → softmax activation function:
  w_1(x) = exp(z_1) / Σ_{m=1}^{2} exp(z_m),   w_2(x) = exp(z_2) / Σ_{m=1}^{2} exp(z_m)
• Means → linear activation function:
  μ_1(x) = z_3,   μ_2(x) = z_4
• Variances → exponential activation function:
  σ_1(x) = exp(z_5),   σ_2(x) = exp(z_6)

NN + mixture model (GMM) → NN outputs GMM weights, means, & variances
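A minimal numpy sketch of this output layer for a 1-dimensional target with 2 mixture components; the hidden activations and weights below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=4)                   # last hidden layer activations (hypothetical)
W = rng.normal(0, 0.1, (4, 6))           # weights to the 6 output units
z = h @ W                                # z_j = sum_i h_i w_ij

# Mixture weights: softmax over z_1, z_2
w = np.exp(z[:2]) / np.exp(z[:2]).sum()
# Means: linear units z_3, z_4
mu = z[2:4]
# Standard deviations: exponential units z_5, z_6 (keeps them positive)
sigma = np.exp(z[4:6])

def mdn_density(y):
    """p(y | x) as the resulting 2-component Gaussian mixture."""
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(w @ comp)
```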


DMDN-based SPSS [27]

[Figure: TEXT → text analysis → input feature extraction (x_1, x_2, …, x_T, with duration prediction supplying frame-level inputs) → deep MDN → per-frame GMM parameters w_1(x_t), w_2(x_t), μ_1(x_t), μ_2(x_t), σ_1(x_t), σ_2(x_t) → parameter generation → waveform synthesis → SPEECH.]

Experimental setup

• Almost the same as the previous setup
• Differences:

DNN architecture    4–7 hidden layers, 1024 units/hidden layer; ReLU (hidden) / linear (output)
DMDN architecture   4 hidden layers, 1024 units/hidden layer; ReLU [28] (hidden) / mixture density (output), 1–16 mix
Optimization        AdaDec [29] (a variant of AdaGrad [30]) on GPU

Statistical Parametric Speech Synthesis

June 9th, 2014

61 of 79

Subjective evaluation

• 5-scale mean opinion score (MOS) test (1: unnatural – 5: natural)
• 173 test sentences, 5 subjects per pair
• Up to 30 pairs per subject
• Crowd-sourced

HMM             1 mix    3.537 ± 0.113
                2 mix    3.397 ± 0.115
DNN             4×1024   3.635 ± 0.127
                5×1024   3.681 ± 0.109
                6×1024   3.652 ± 0.108
                7×1024   3.637 ± 0.129
DMDN (4×1024)   1 mix    3.654 ± 0.117
                2 mix    3.796 ± 0.107
                4 mix    3.766 ± 0.113
                8 mix    3.805 ± 0.113
                16 mix   3.791 ± 0.102




Limitations of DNN/DMDN-based acoustic modeling

• Fixed time span for input features
  − A fixed number of preceding / succeeding contexts (e.g., ±2 phonemes / syllable stress) are used as inputs
  − Difficult to incorporate long-time-span contextual effects
• Frame-by-frame mapping
  − Each frame is mapped independently
  − Smoothing using dynamic feature constraints is still essential

Recurrent connections → recurrent NN (RNN) [31]


Basic RNN

[Figure: inputs x_{t−1}, x_t, x_{t+1} map to outputs y_{t−1}, y_t, y_{t+1} through a hidden layer with recurrent connections.]

• Only able to use previous contexts → bidirectional RNN [31]
• Trouble accessing long-range contexts
  − Information in hidden layers loops through the recurrent connections → quickly decays over time
  − Prone to being overwritten by new information arriving from the inputs
→ long short-term memory (LSTM) RNN [32]
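A minimal numpy sketch of one step of such a basic RNN, with illustrative dimensions and random weights; the repeated multiplication by the recurrent weights is what makes old information decay quickly.

```python
import numpy as np

n_in, n_hid = 8, 16
rng = np.random.default_rng(0)
Wx = rng.normal(0, 0.1, (n_in, n_hid))
Wh = rng.normal(0, 0.1, (n_hid, n_hid))
b = np.zeros(n_hid)

def rnn_step(x_t, h_prev):
    """One basic RNN step: the hidden state carries context forward."""
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

h = np.zeros(n_hid)
for x_t in rng.normal(size=(100, n_in)):   # process a 100-frame sequence
    h = rnn_step(x_t, h)
```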


Long short-term memory (LSTM) [32]

• RNN architecture designed to have better memory
• Uses linear memory cells surrounded by multiplicative gate units

[Figure: LSTM block. Inputs x_t and h_{t−1} feed an input gate (sigmoid, bias b_i; write), an output gate (sigmoid, bias b_o; read), and a forget gate (sigmoid, bias b_f; reset), which gate a linear memory cell c_t; tanh nonlinearities are applied to the cell input (bias b_c) and the cell output, producing h_t.]
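A minimal numpy sketch of one LSTM forward step with these gates; the weights are illustrative, and peephole connections and the projection layer used in some variants are omitted.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

n_in, n_cell = 8, 16
rng = np.random.default_rng(0)
# One weight matrix per gate / cell input, applied to [x_t, h_{t-1}]
Wi, Wf, Wo, Wc = (rng.normal(0, 0.1, (n_in + n_cell, n_cell)) for _ in range(4))
bi, bf, bo, bc = (np.zeros(n_cell) for _ in range(4))

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([x_t, h_prev])
    i = sigmoid(z @ Wi + bi)                     # input gate: write
    f = sigmoid(z @ Wf + bf)                     # forget gate: reset
    o = sigmoid(z @ Wo + bo)                     # output gate: read
    c = f * c_prev + i * np.tanh(z @ Wc + bc)    # linear memory cell update
    h = o * np.tanh(c)
    return h, c

h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_cell), np.zeros(n_cell))
```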

LSTM-based SPSS [33, 34]

[Figure: TEXT → text analysis → input feature extraction (x_1, x_2, …, x_T, with duration prediction supplying frame-level inputs) → LSTM layers → acoustic features y_1, y_2, …, y_T → parameter generation → waveform synthesis → SPEECH.]


Experimental setup

Database              US English female speaker
Train / dev set data  34632 & 100 sentences
Sampling rate         16 kHz
Analysis window       25-ms width / 5-ms shift
Linguistic features   DNN: 449, LSTM: 289
Acoustic features     0–39 mel-cepstrum, log F0, 5-band aperiodicity (Δ, Δ²)
DNN                   4 hidden layers, 1024 units/hidden layer; ReLU (hidden) / linear (output); AdaDec [29] on GPU
LSTM                  1 forward LSTM layer, 256 units, 128 projection; asynchronous SGD on CPUs [35]
Postprocessing        Postfiltering in cepstrum domain [25]


Subjective evaluations

• Paired comparison test
• 100 test sentences, 5 ratings per pair
• Up to 30 pairs per subject
• Crowd-sourced

DNN w/Δ   DNN w/oΔ   LSTM w/Δ   LSTM w/oΔ   Neutral   z      p
50.0      14.2       –          –           35.8      12.0   < 10⁻¹⁰
–         –          30.2       15.6        54.2      5.1    < 10⁻⁶
15.8      –          34.0       –           50.2      −6.2   < 10⁻⁹
28.4      –          –          33.6        38.0      −1.5   0.138


Samples

• DNN (w/o dynamic features)

• DNN (w/ dynamic features)

• LSTM (w/o dynamic features)

• LSTM (w/ dynamic features)



Summary

Statistical parametric speech synthesis
• Vocoding + acoustic model
• HMM-based SPSS
  − Flexible (e.g., adaptation, interpolation)
  − Improvements
    ◦ Vocoding
    ◦ Acoustic modeling
    ◦ Oversmoothing compensation
• NN-based SPSS
  − Learns the mapping from linguistic features to acoustic ones
  − Static networks (DNN, DMDN) → dynamic ones (LSTM)


References I

[1] E. Moulines and F. Charpentier. Pitch synchronous waveform processing techniques for text-to-speech synthesis using diphones. Speech Commun., 9:453–467, 1990.
[2] A. Hunt and A. Black. Unit selection in a concatenative speech synthesis system using a large speech database. In Proc. ICASSP, pages 373–376, 1996.
[3] H. Zen, K. Tokuda, and A. Black. Statistical parametric speech synthesis. Speech Commun., 51(11):1039–1064, 2009.
[4] T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. Simultaneous modeling of spectrum, pitch and duration in HMM-based speech synthesis. In Proc. Eurospeech, pages 2347–2350, 1999.
[5] F. Itakura and S. Saito. A statistical method for estimation of speech spectral density and formant frequencies. Trans. IEICE, J53-A:35–42, 1970.
[6] S. Imai. Cepstral analysis synthesis on the mel frequency scale. In Proc. ICASSP, pages 93–96, 1983.
[7] J. Odell. The use of context in large vocabulary speech recognition. PhD thesis, Cambridge University, 1995.
[8] T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. Duration modeling for HMM-based speech synthesis. In Proc. ICSLP, pages 29–32, 1998.


References II

[9] K. Tokuda, T. Yoshimura, T. Masuko, T. Kobayashi, and T. Kitamura. Speech parameter generation algorithms for HMM-based speech synthesis. In Proc. ICASSP, pages 1315–1318, 2000.
[10] Y. Morioka, S. Kataoka, H. Zen, Y. Nankaku, K. Tokuda, and T. Kitamura. Miniaturization of HMM-based speech synthesis. In Proc. Autumn Meeting of ASJ, pages 325–326, 2004 (in Japanese).
[11] S.-J. Kim, J.-J. Kim, and M.-S. Hahn. HMM-based Korean speech synthesis system for hand-held devices. IEEE Trans. Consum. Electron., 52(4):1384–1390, 2006.
[12] J. Yamagishi, Z.-H. Ling, and S. King. Robustness of HMM-based speech synthesis. In Proc. Interspeech, pages 581–584, 2008.
[13] J. Yamagishi. Average-Voice-Based Speech Synthesis. PhD thesis, Tokyo Institute of Technology, 2006.
[14] T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. Speaker interpolation in HMM-based speech synthesis system. In Proc. Eurospeech, pages 2523–2526, 1997.
[15] K. Shichiri, A. Sawabe, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. Eigenvoices for HMM-based speech synthesis. In Proc. ICSLP, pages 1269–1272, 2002.
[16] H. Zen, N. Braunschweiler, S. Buchholz, M. Gales, K. Knill, S. Krstulovic, and J. Latorre. Statistical parametric speech synthesis based on speaker and language factorization. IEEE Trans. Acoust. Speech Lang. Process., 20(6):1713–1724, 2012.


References III

[17] T. Nose, J. Yamagishi, T. Masuko, and T. Kobayashi. A style control technique for HMM-based expressive speech synthesis. IEICE Trans. Inf. Syst., E90-D(9):1406–1413, 2007.
[18] H. Zen, A. Senior, and M. Schuster. Statistical parametric speech synthesis using deep neural networks. In Proc. ICASSP, pages 7962–7966, 2013.
[19] O. Karaali, G. Corrigan, and I. Gerson. Speech synthesis with neural networks. In Proc. World Congress on Neural Networks, pages 45–50, 1996.
[20] C. Tuerk and T. Robinson. Speech synthesis using artificial neural networks trained on cepstral coefficients. In Proc. Eurospeech, pages 1713–1716, 1993.
[21] H. Zen, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. A hidden semi-Markov model-based speech synthesis system. IEICE Trans. Inf. Syst., E90-D(5):825–834, 2007.
[22] K. Tokuda, T. Masuko, N. Miyazaki, and T. Kobayashi. Multi-space probability distribution HMM. IEICE Trans. Inf. Syst., E85-D(3):455–464, 2002.
[23] K. Shinoda and T. Watanabe. Acoustic modeling based on the MDL criterion for speech recognition. In Proc. Eurospeech, pages 99–102, 1997.
[24] K. Yu and S. Young. Continuous F0 modelling for HMM based statistical parametric speech synthesis. IEEE Trans. Audio Speech Lang. Process., 19(5):1071–1079, 2011.


References IV

[25] T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura. Incorporation of mixed excitation model and postfilter into HMM-based text-to-speech synthesis. IEICE Trans. Inf. Syst., J87-D-II(8):1563–1571, 2004.
[26] C. Bishop. Mixture density networks. Technical Report NCRG/94/004, Neural Computing Research Group, Aston University, 1994.
[27] H. Zen and A. Senior. Deep mixture density networks for acoustic modeling in statistical parametric speech synthesis. In Proc. ICASSP, pages 3872–3876, 2014.
[28] M. Zeiler, M. Ranzato, R. Monga, M. Mao, K. Yang, Q.-V. Le, P. Nguyen, A. Senior, V. Vanhoucke, J. Dean, and G. Hinton. On rectified linear units for speech processing. In Proc. ICASSP, pages 3517–3521, 2013.
[29] A. Senior, G. Heigold, M. Ranzato, and K. Yang. An empirical study of learning rates in deep neural networks for speech recognition. In Proc. ICASSP, pages 6724–6728, 2013.
[30] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, pages 2121–2159, 2011.
[31] M. Schuster and K. Paliwal. Bidirectional recurrent neural networks. IEEE Trans. Signal Process., 45(11):2673–2681, 1997.
[32] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.


References V

[33] Y. Fan, Y. Qian, F. Xie, and F. Soong. TTS synthesis with bidirectional LSTM based recurrent neural networks. In Proc. Interspeech, 2014 (submitted). http://research.microsoft.com/en-us/projects/dnntts/.
[34] H. Zen, H. Sak, A. Graves, and A. Senior. Statistical parametric speech synthesis using recurrent neural networks. In UKSpeech Conference, 2014.
[35] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Ng. Large scale distributed deep networks. In Proc. NIPS, 2012.

