SPEAKING RATE ADAPTATION USING CONTINUOUS FRAME RATE NORMALIZATION

Stephen M. Chu                              Daniel Povey*

IBM T. J. Watson Research Center            Microsoft Research
Yorktown Heights, New York 10598, USA       Redmond, Washington 98052, USA
[email protected]                           [email protected]

* Daniel Povey was at IBM Research while this work was performed.

ABSTRACT

This paper describes a speaking rate adaptation technique for automatic speech recognition. The technique aims to reduce speaking rate variations by applying temporal warping in front-end processing so that the average phone duration, measured in feature frames, remains constant. Speaking rate estimates are obtained from the timing information in unadapted decoding outputs. We implement the proposed continuous frame rate normalization (CFRN) technique on a state-of-the-art speech recognition architecture and evaluate it on the most recent GALE broadcast transcription tasks. Results show that CFRN gives consistent improvements across four separate systems and two languages. In fact, the reported numbers represent the best decoding error rates on the corresponding test sets. It is further shown that the technique is effective without retraining, and adds little overhead to the multi-pass recognition pipeline found in state-of-the-art transcription systems.

Index Terms: CFRN, speaking rate adaptation, speech recognition, frame rate normalization.

1. INTRODUCTION

An essential problem in automatic speech recognition (ASR) is how to adapt to the myriad forms of variability in human speech, from low-level acoustic disparities, to speaker differences, to higher-level linguistic diversity. Most current ASR systems rely on a collection of normalization and adaptation techniques to tackle such variations, e.g., cepstral mean normalization (CMN), vocal tract length normalization (VTLN), and maximum likelihood linear regression (MLLR).

One significant source of variation is the speaking rate, which measures how fast a speech segment is spoken. The overall speaking rate varies at both the speaker level and the utterance level, and may also fluctuate within a sentence. It has been shown that variance in speaking rate has a clear negative impact on speech recognition performance [1]. The hidden Markov model (HMM) framework inherently handles the problem to some extent by allowing a certain degree of freedom along the temporal axis. In practice, however, HMMs often model duration inadequately, especially when the speaking rate variance is large. Efforts have been made to address this shortcoming from the modeling perspective, ranging from explicit duration modeling in HMMs to more drastic departures from the HMM paradigm such as segment-based recognition [2].

This work addresses speaking rate variation in the feature space. In particular, we propose a normalization technique for speaking rate that can be used under the standard HMM framework, as an additional tool in conjunction with existing normalization and adaptation procedures in the recognition pipeline. The proposed technique normalizes different speaking rates by adjusting both the frame rate and the frame size in the acoustic feature extraction stage. In contrast to the prior work in [3]-[7], which uses a small number of predetermined discrete rates, we allow the frame rate to vary continuously. For speaking rate detection, this work bypasses the low-level signal processing approach [3]-[4] and relies on the ASR system itself for estimation and adaptation. Furthermore, many proposed adaptation schemes require additional training, sometimes of multiple systems [5]-[7]; in contrast, the method presented here is shown to be effective even without matched training. Finally, most results on this subject have been reported on small tasks or limited systems. In this paper, we implement the proposed continuous frame rate normalization (CFRN) technique in a state-of-the-art ASR architecture and evaluate it on the most recent GALE broadcast transcription tasks. Results show that CFRN gives consistent improvements on four separate systems for two languages. In fact, these results represent the best error rates reported on the corresponding test sets.

The remainder of this paper is organized as follows. In Section 2, we discuss speaking rate adaptation and introduce CFRN. The system architecture is given in Section 3. Section 4 presents the experimental results, followed by conclusions in Section 5.

2. SPEAKING RATE ADAPTATION WITH CFRN

2.1. Speaking Rate and Frame Rate

Most HMM-based ASR systems use a fixed frame rate and window size in front-end processing. This is based on the assumption that the non-stationary speech signal can be approximated by a piecewise quasi-stationary process. The common choice of a 25 ms window and a 10 ms step size is a compromise between data rate and resolution, determined empirically to give reasonable performance on average.

Using a fixed frame rate is not optimal on two levels. First, the fixed rate might be too slow to adequately capture the

dynamics in transient speech events while generating highly correlated, redundant observations for stationary speech units [3]. This has motivated efforts to adjust the frame rate based on the duration of low-level speech units. In [4], a very short step size of 2.5 ms is used to process the speech signal, and an entropy measure is used to select the appropriate frames from the sequence. In [8], signal processing is done at a fixed frame rate, and frames are later dropped or inserted (inferred from existing frames) to normalize the duration of all phones, resulting in an observation sequence with an effectively varying frame rate. In general, using variable frame rates to equalize low-level temporal dynamics often requires dramatic changes from the conventional value.

Second, the overall speed of speaking is variable, and most of the time does not match the optimal operating point of a fixed-frame-rate system. Both the literature and our own experiments clearly indicate that the further the speaking rate drifts, the higher the error rate. It is this overall shift in speaking rate that we aim to compensate for in this work. In [5], utterances are grouped into fast and slow speech, and two different frame rates are chosen. Similarly, three discrete sets of frame rates and window sizes are used in [7]. Fig. 1 shows a typical distribution of speaking rates observed in broadcast speech (consisting of regular broadcast news and the more spontaneous broadcast conversations).

[Fig. 1 appears here: a histogram of utterance counts (y-axis: # utterances, 0-140) over mean phone duration (x-axis: 0.04-0.08 s).]

Fig. 1. Distribution of utterances over speaking rates (measured in average phone duration) on the GALE eval'08 Arabic set.

Here the speaking rate is defined as the average phone duration in an utterance. The graph confirms that the speaking rate does vary. Further, the speaking rate appears to vary continuously, making it difficult to justify any arbitrary threshold between fast and slow speech. Lastly, the dynamic range of the speaking rate is actually moderate. This suggests that a modest adjustment might be enough to reduce the mismatch between speech rate and frame rate.

2.2. Speaking Rate Detection

Reliable detection of the speaking rate is key to an adaptation scheme. Note that our definition of speaking rate differs slightly from the number of syllables per second (or minute) commonly used in linguistics. Our objective is to normalize the average speed of pronounced speech; hence, any inter-word silence and other non-speech segments, though important to the perceived rhythm and speed, must be excluded from the calculation.

Our measure of the speaking rate is the average of phone durations in a speech segment. It is therefore important to determine at which level the average should be taken, so that the subsequent adaptation is most effective. In the context of broadcast speech transcription, the possible levels include per show, per speaker, per utterance, or within utterance. We implemented both the speaker-level and the utterance-level options, and found that per-utterance adaptation gives consistently higher gains in recognition performance. Thus, given an utterance i composed of a sequence of N_i words [w_{i,1}, w_{i,2}, \ldots, w_{i,N_i}], the speaking rate f(i) is

    f(i) = \frac{\sum_{j=1}^{N_i} t(w_{i,j})}{\sum_{j=1}^{N_i} n(w_{i,j})}    (1)

where t(w_{i,j}) is the duration of word w_{i,j} and n(w_{i,j}) is the number of phones in the word. Transcripts and actual word durations are needed to compute f(i). For training data, the timing information can be obtained by forced alignment using existing acoustic models. For test utterances, we propose to use the decoding hypotheses from the unadapted system to compute an estimate of the speaking rate, \hat{f}(i). Given a phone-level alignment, the definition in (1) can be modified to compute average vowel duration or average syllable duration, both reasonable alternatives for measuring speaking rate. There also exist ways to detect speaking speed directly from the speech signal using signal processing methods [2]-[4].

One potential drawback of the ASR-based approach is that it requires an additional decoding pass. However, this does not pose a significant problem for most modern ASR systems, as multi-pass decoding to take advantage of adaptation techniques such as MLLR has become standard. In a practical system with n decoding passes, the output of the (n-1)-th pass can usually be used for speaking rate estimation, with the final pass replaced by CFRN decoding. Thus, the overall number of decoding passes remains unchanged.
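To make the estimate in (1) concrete, the sketch below computes the per-utterance speaking rate from word-level timing such as that produced by a first decoding pass or a forced alignment. The Word record is an illustrative container, not the paper's actual implementation; silence and other non-speech segments are assumed to have been removed upstream.

```python
# Per-utterance speaking rate estimate, Eq. (1).
# The Word record is a hypothetical container for alignment output.
from dataclasses import dataclass

@dataclass
class Word:
    label: str        # word identity
    duration: float   # t(w_ij): word duration in seconds, from the alignment
    n_phones: int     # n(w_ij): phone count, from the pronunciation lexicon

def speaking_rate(words):
    """Average phone duration f(i) over the words of one utterance.

    Inter-word silence and non-speech segments must be excluded
    before this call, per the definition in Sec. 2.2.
    """
    total_duration = sum(w.duration for w in words)
    total_phones = sum(w.n_phones for w in words)
    if total_phones == 0:
        return None  # no speech content: leave the utterance unadapted
    return total_duration / total_phones
```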

2.3. Continuous Frame Rate Normalization

As discussed above, the fixed frame rate in an ASR system is chosen to give the best overall performance across different speaking rates, and is expected to perform well when a specific utterance's measured speaking rate matches the average. Fig. 2(a) shows a scatter plot of per-utterance cumulative phone duration vs. total number of phones, essentially the numerator and denominator terms in (1), over a large Arabic speech set. Clearly, if the speaking rate f(i) were constant, the scatter would reduce to a straight line, whose inverse slope would be the universal speaking rate. The objective of speaking rate adaptation is to normalize f(i) so that it remains constant for all utterances.

Instead of directly normalizing the speaking rate by manipulating the speech signal, we apply the normalization to the frame rate, so that the average phone duration in terms of feature frames remains constant. We define the target speaking rate \Phi to be the global average phone duration:

    \Phi = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N_i} t(w_{i,j})}{\sum_{i=1}^{M} \sum_{j=1}^{N_i} n(w_{i,j})}    (2)

where M is the number of utterances in the entire set. The warping factor for utterance i, warp(i), is the ratio of the speaking rate estimate to the target speaking rate, clipped to a valid range:

    warp(i) = \begin{cases} \mathrm{minwarp}, & \text{if } \hat{f}(i)/\Phi \le \mathrm{minwarp} \\ \mathrm{maxwarp}, & \text{if } \hat{f}(i)/\Phi \ge \mathrm{maxwarp} \\ \hat{f}(i)/\Phi, & \text{otherwise} \end{cases}    (3)

The lower and upper limits minwarp and maxwarp are necessary to prevent improbable warping factors caused by unstable speaking rate estimates. Finally, the frame rate is normalized by warping the step size and the window size in the front-end:

    t_{\mathrm{step}}(i) = warp(i) \cdot T_{\mathrm{step}}, \quad t_{\mathrm{win}}(i) = warp(i) \cdot T_{\mathrm{win}}    (4)

where T_step and T_win are the original fixed step and window sizes, and t_step(i) and t_win(i) are the adapted values for utterance i.

[Fig. 2 appears here: (a) a scatter plot of the number of phones (0-700) against total phone duration (0-45 s) per utterance; (b) the corresponding CFRN warp factors (roughly 0.7-1.2) against total phone duration.]

Fig. 2. (a) Scatter plot of number of phones vs. cumulative phone duration per utterance over three GALE Arabic test sets. (b) Distribution of the corresponding CFRN warp factors.
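The complete warping computation of (2)-(4) then reduces to a few lines, reusing speaking_rate and the Word record from the sketch in Section 2.2. The clipping limits and the 25 ms / 10 ms defaults below are assumptions: the limits are chosen to match the spread visible in Fig. 2(b), and the paper does not report the exact values it used.

```python
# CFRN warping, Eqs. (2)-(4). MIN_WARP/MAX_WARP are assumed values
# consistent with Fig. 2(b); the paper does not state its limits.
MIN_WARP, MAX_WARP = 0.7, 1.2
T_STEP, T_WIN = 0.010, 0.025      # baseline 10 ms step, 25 ms window

def target_rate(utterances):
    """Global average phone duration Phi over all M utterances, Eq. (2)."""
    total_duration = sum(w.duration for u in utterances for w in u)
    total_phones = sum(w.n_phones for u in utterances for w in u)
    return total_duration / total_phones

def warp_factor(f_hat, phi):
    """warp(i): ratio of estimated to target rate, clipped per Eq. (3)."""
    return min(max(f_hat / phi, MIN_WARP), MAX_WARP)

def warped_frame_params(f_hat, phi):
    """Per-utterance step and window sizes t_step(i), t_win(i), Eq. (4)."""
    w = warp_factor(f_hat, phi)
    return w * T_STEP, w * T_WIN
```

Note that a slow utterance (large \hat{f}(i)) receives a factor above 1, stretching the step and window so that each phone again spans roughly the same number of feature frames.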

3. SYSTEM ARCHITECTURE

The proposed CFRN technique is implemented on a state-of-the-art multi-pass speech recognition architecture for broadcast transcription. Details of the architecture can be found in [10]; here we give only a brief summary. The input audio is sampled at 16 kHz and coded as 13-dimensional PLP features; 25 ms windows and a 10 ms shift are used before CFRN. Nine consecutive frames are spliced and projected to 40 dimensions using LDA and MLLT. The acoustic models are continuous mixture-density HMMs with context-dependent states conditioned on cross-word quinphone context. The acoustic models are refined through multiple training stages. The final ML speaker-adapted system includes VTLN, fMLLR, and MLLR adaptation. The discriminative system is built with both feature- and model-space training using either the MPE or MMI criterion. A diagram of the baseline decoding pipeline is shown in Fig. 3.

[Fig. 3 appears here: a flow diagram from PLP features through segmentation and speaker clustering (segmentation models), SI decoding (SI models), VTLN, fMLLR (plus fMPE/fMMI on the discriminative branch), and MLLR, into SA decoding with the ML SA and MPE/MMI SA models.]

Fig. 3. Baseline decoding pipeline of the IBM broadcast speech transcription system.

4. EXPERIMENTS

4.1. Experimental Setup

Experiments are carried out on four setups: a. a Mandarin speaker-independent (SI) system; b. a Mandarin speaker-adapted (SA) system; c. an unvowelized Arabic system; and d. a vowelized Arabic system.

Mandarin systems: Acoustic models are built on 1.7K hours of data released by LDC for the GALE program. The SI system has 10K quinphone states modeled by 300K Gaussian densities. The SA system has 15K states and 800K Gaussians. The decoding LM is built by interpolating 20 back-off 4-gram models using modified Kneser-Ney smoothing, and has a 107K-word vocabulary.

Arabic systems: Acoustic models are built on 1.5K hours of transcribed GALE data from LDC and 85 hours of FBIS and TDT-4 audio. The unvowelized system uses straightforward graphemic models, whereas the vowelized system uses phonetic models and requires inferring the short vowels and diacritics missing in written Arabic. The unvowelized system has 5K quinphone states modeled by 400K Gaussian densities. The vowelized system has 6K states and 400K Gaussians. Both systems share an interpolated and smoothed 4-gram LM with a 774K-word vocabulary.
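As a rough illustration of where CFRN sits in such a pipeline, the sketch below replaces the final decoding pass with a CFRN pass, as described in Section 2.2. The front-end and decoder callables are hypothetical stand-ins supplied by the caller, not APIs of the IBM system.

```python
# Hypothetical multi-pass decoding with CFRN in the final pass.
# extract_features, decode_si, decode_sa, and words_from stand in for
# the real recognizer components and are supplied by the caller.
def transcribe(utterances, phi, extract_features, decode_si, decode_sa,
               words_from):
    # Earlier pass: SI decoding on the fixed 25 ms / 10 ms front end.
    si_hyps = [decode_si(extract_features(u, step=T_STEP, win=T_WIN))
               for u in utterances]

    hyps = []
    for u, h in zip(utterances, si_hyps):
        # Estimate f_hat(i) from the unadapted output, then re-extract
        # features with the warped step and window of Eq. (4).
        f_hat = speaking_rate(words_from(h))
        step, win = warped_frame_params(f_hat if f_hat else phi, phi)
        # Final pass: SA decoding (VTLN/fMLLR/MLLR as in Fig. 3) on the
        # CFRN features; the total number of passes is unchanged.
        hyps.append(decode_sa(extract_features(u, step=step, win=win)))
    return hyps
```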

4.2. Experimental Results

Table 1 shows SI decoding results on the Mandarin dev'08 set defined by the GALE consortium. Three approaches are compared: a. the baseline with a fixed frame rate; b. a variable frame rate with three discrete warping factors; and c. CFRN. The results confirm that speaking rate adaptation through frame rate adjustment is indeed viable, and that continuous warping is more effective, achieving a 0.6% absolute reduction in character error rate (CER).

Table 1. SI decoding results on GALE Mandarin dev'08.

                   fixed FR   discrete FR   CFRN
    CER             16.9%       16.6%       16.3%
    substitutions   13.1%       12.9%       12.6%
    deletions        2.6%        2.5%        2.5%
    insertions       1.2%        1.2%        1.2%

Two more sets are considered in the full SA decoding tests, shown in Table 2. P-2.5 and eval'08 consist of the un-sequestered data from the GALE phase 2.5 and phase 3 evaluations, respectively. Speaking rates are estimated using the baseline output, and decoding is carried out subsequently. The results suggest that the gain from CFRN is consistent.

Table 2. SA decoding results on three GALE Mandarin test sets (CER).

               dev'08   p-2.5   eval'08
    baseline    7.9%    7.5%    10.5%
    + CFRN      7.7%    7.4%    10.2%

Table 3 summarizes the results on the two Arabic systems. As in the Mandarin case, dev'08 is a consortium-defined test set, while eval'07 and eval'08 contain un-sequestered portions of the corresponding GALE evaluation sets. For baseline decoding, the unvowelized system is cross-adapted (with fMLLR/MLLR) on the SA decoding output of the vowelized system, and vice versa. For CFRN decoding, the SA output from the vowelized system is used for speaking rate estimation in both cases. The results show that CFRN is able to consistently give an additional gain on top of the best baselines.

Table 3. Decoding results on GALE Arabic test sets (WER).

                  dev'08   eval'07   eval'08
    unvowelized   13.3%    14.7%     10.9%
      + CFRN      13.2%    14.6%     10.7%
    vowelized     13.3%    14.7%     10.8%
      + CFRN      13.2%    14.4%     10.6%

Finally, a plot of WER as a function of speaking rate on eval'08 is given in Fig. 4, comparing the corresponding baseline and CFRN WERs.

[Fig. 4 appears here: word error rate (%, 0-30) as a function of mean phone duration (0.040-0.075 s) on the Arabic eval'08 set, with baseline and normalized (CFRN) curves overlaid.]

Fig. 4. Error distribution over speaking rate on eval'08 (Arabic). Higher error reduction is observed on fast speech.

Two observations can be made from the plot. First, off-central speaking rates indeed lead to higher WERs. Second, the gain is more consistent on fast speech.
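The breakdown in Fig. 4 can be reproduced by pooling scored utterances into speaking-rate bins. A minimal sketch follows, assuming each utterance has already been scored; the tuple layout and bin edges are illustrative, not taken from the paper.

```python
# WER as a function of speaking rate, as in Fig. 4. `scored` holds one
# (rate, n_errors, n_ref_words) tuple per utterance; layout is assumed.
def wer_by_rate(scored, lo=0.040, hi=0.075, width=0.005):
    bins = {}
    for rate, errors, n_ref in scored:
        if rate is None or not lo <= rate < hi:
            continue
        edge = lo + width * int((rate - lo) / width)  # left edge of the bin
        e, n = bins.get(edge, (0, 0))
        bins[edge] = (e + errors, n + n_ref)
    return {edge: 100.0 * e / n
            for edge, (e, n) in sorted(bins.items()) if n > 0}
```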

5. CONCLUSIONS

This work considers frame rate normalization for speaking rate adaptation in ASR. Applied at test time without retraining, the proposed CFRN technique is shown to consistently reduce error rates over optimized baselines with minimal overhead. In future work, we will investigate nonlinear frame rate warping and matched CFRN training.

REFERENCES

[1] M. A. Siegler and R. M. Stern, "On the effects of speech rate in large vocabulary speech recognition systems," in Proc. ICASSP'95, pp. 612-615, May 1995.
[2] J. R. Glass, "A probabilistic framework for segment-based speech recognition," Computer Speech and Language, vol. 17, pp. 137-152, 2003.
[3] Q. Zhu and A. Alwan, "On the use of variable frame rate analysis in speech recognition," in Proc. ICASSP'00, pp. 1783-1786, June 2000.
[4] H. You, Q. Zhu, and A. Alwan, "Entropy-based variable frame rate analysis of speech signals and its application to ASR," in Proc. ICASSP'04, pp. 549-552, May 2004.
[5] H. Nanjo, K. Kato, and T. Kawahara, "Speaking rate dependent acoustic modeling for spontaneous lecture speech recognition," in Proc. EUROSPEECH'01, pp. 2531-2534, 2001.
[6] K. Okuda, T. Kawahara, and S. Nakamura, "Speaking rate compensation based on likelihood criterion in acoustic model training and decoding," in Proc. ICSLP'02, pp. 2589-2592, 2002.
[7] V. R. Gadde, K. Sonmez, and H. Franco, "Multirate ASR models for phone-class dependent N-best list rescoring," in Proc. ASRU'05, pp. 157-161, November 2005.
[8] J. P. Nedel and R. M. Stern, "Duration normalization for improved recognition of spontaneous and read speech via missing feature methods," in Proc. ICASSP'01, pp. 313-316, May 2001.
[9] S. M. Chu et al., "Recent advances in the IBM GALE Mandarin transcription system," in Proc. ICASSP'08, pp. 4329-4332, March 2008.
