MULTIFRAME DEEP NEURAL NETWORKS FOR ACOUSTIC MODELING

Vincent Vanhoucke, Matthieu Devin, Georg Heigold

Google, Inc., USA

ABSTRACT

Deep neural networks have been shown to perform very well as acoustic models for automatic speech recognition. Compared to Gaussian mixtures, however, they tend to be very expensive computationally, making them challenging to use in real-time applications. One key advantage of such neural networks is their ability to learn from very long observation windows, going up to 400 ms. Given this very long temporal context, it is tempting to wonder whether one can run neural networks at a lower frame rate than the typical 10 ms, and whether there might be computational benefits to doing so. This paper describes a method of tying the neural network parameters over time which achieves comparable performance to the typical frame-synchronous model, while achieving up to a 4X reduction in the computational cost of the neural network activations.

Index Terms— deep neural networks, acoustic modeling

1. INTRODUCTION

Deep neural networks (DNNs) have become increasingly popular for acoustic modeling [1]. They make it possible to effectively use many more parameters than typical Gaussian mixture models (GMMs) in several ways:

1. use a large number of shared parameters across states: while GMM parameters are only exercised when their associated state is active, DNN parameters up to the last hidden layer are shared across all states [2],

2. use wider windows of context: while GMM systems rarely benefit from using more than 10 frames (100 ms) of context around the central frame, DNNs benefit from 20 (200 ms) and up to 40 (400 ms),

3. use a larger number of output states: it has been observed that DNN systems can typically take advantage of a much larger number of output states than comparable GMM systems.

The larger number of parameters that need to be evaluated at every frame has the disadvantage of making real-time inference more computationally challenging. There are several ways to mitigate this problem. One is to use GPUs, which are very effective at handling large matrix computations. Another approach is to quantize the networks and use fixed-point computations [3]. Yet another is to distribute the computation across multiple cores, or even machines [4]. To go beyond that, one might have to consider limiting the size of the networks or exploring alternative architectures. This paper introduces another approach which takes advantage of the stationarity of the speech signal and ties neural network parameters across frames, enabling the acoustic model to be run at a reduced frame rate. Rather than separating the model description from the experiments, we will use the experiments to guide the rationale behind the approach: Section 2 describes the baseline system and shows the performance/complexity tradeoff of a typical frame-synchronous acoustic model. Section 3 describes a simple asynchronous approach which performs remarkably well. Section 4 introduces the proposed method and shows that it compares advantageously to both baselines in terms of accuracy and complexity. Section 5 demonstrates the speedups that can be obtained using this technique.

2. HYBRID DEEP NEURAL NETWORK SYSTEM AND COMPUTATIONAL COMPLEXITY

Fig. 1. Error rate against complexity of neural network acoustic models for US English, trained on thousands of hours of data, and Iberian Portuguese, trained on 100 hours of data.

For extensive background, a general introduction to hybrid DNN systems can be found in [1]. The goal of this paper is to look at the complexity/accuracy tradeoff of such a system. To explore this tradeoff, we trained a collection of systems of various complexities on two datasets, US English Voice Search [5] and voice typing, and Iberian Portuguese Voice Search, by varying the width of the hidden layers of the DNN acoustic model. The rest of the system was kept fixed: the frontend consists of 40 log-filterbank energies computed every 10 ms, stacked with 20 frames in the past and 5 frames in the future to limit latency. The DNNs all have 3 hidden layers, sigmoid activations, and 7969 softmax output classes for English (2960 for Portuguese), which are the leaves of a state-tying decision tree. They were trained using a distributed neural network infrastructure [4] with AdaGrad [6] and asynchronous parameter updates. The training data consists of more than 3000 hours of speech for English and approximately 100 hours for Portuguese. The evaluation is performed on 27327 held-out utterances for English (11901 for Portuguese), using a fixed large-vocabulary language model.

There are arguably many ways to sweep the algorithmic complexity of a system, and varying the width of the hidden layers is but one of them. Nevertheless, we find that it is an effective way to span a wide range of complexities without departing too far from the optimal operating point. Figure 1 shows how the word error rate on the evaluation set changes as we sweep the number of hidden nodes between 160 and 896 nodes per layer. Each datapoint corresponds to a doubling of the number of hidden parameters. It is evident from the graph that performance falls off rapidly as the number of parameters decreases. The question is: can we do better?
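To make the baseline concrete, the following minimal NumPy sketch assembles the stacked input windows and runs a frame-synchronous forward pass. The helper names (stack_frames, dnn_posteriors) and the edge padding are illustrative assumptions; this is a sketch of the topology described above, not the distributed training infrastructure of [4].

    import numpy as np

    def stack_frames(feats, past=20, future=5):
        # Stack each 40-dim log-filterbank frame with its 20 past and 5
        # future neighbors: (20 + 1 + 5) * 40 = 1040 inputs per frame.
        # Utterance edges are padded by repeating the first/last frame
        # (an assumption; the paper does not specify edge handling).
        T = feats.shape[0]
        padded = np.concatenate([np.repeat(feats[:1], past, axis=0),
                                 feats,
                                 np.repeat(feats[-1:], future, axis=0)], axis=0)
        return np.stack([padded[t:t + past + 1 + future].ravel()
                         for t in range(T)])

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def dnn_posteriors(x, hidden_layers, out_W, out_b):
        # Frame-synchronous forward pass: sigmoid hidden layers followed
        # by a softmax over the tied-state (decision-tree leaf) outputs,
        # e.g. 7969 leaves for the English system above.
        h = x
        for W, b in hidden_layers:
            h = sigmoid(h @ W + b)
        return softmax(h @ out_W + out_b)

In this sketch, hidden_layers would hold three (weight, bias) pairs whose widths are swept between 160 and 896 nodes to produce the curves of Figure 1.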

Fig. 2. Frame synchronous baseline approach: each acoustic model input is a window of multiple feature frames, shifted by one frame (10 ms) over time. A prediction is issued synchronously for every input frame.

3. FRAME ASYNCHRONOUS MODEL

Fig. 3. Frame asynchronous approach, in the case of an acoustic model running at half the frame rate of the feature stream. Predictions are simply copied every other frame.

Speech is a rather stationary process when analyzed at a 10 ms frame rate, and a much wider window of time (e.g. 275 ms in our experiments) is used to make a frame classification decision. The traditional approach is depicted in Figure 2: overlapping stacked frames are passed to the neural network, which issues a prediction synchronously at every frame. It is natural to wonder whether one could simply use the predictions at time t − k, k = 1, 2, . . . to issue a prediction at time t. There are many models that attempt to take advantage of time correlations between feature frames. A naïve, but computationally inexpensive, approach is to simply copy the predictions from previous frames, as depicted in Figure 3. Since the alignments used to train the networks are inherently noisy, one can expect the neural networks to be very robust to alignments being off by several frames.

This works surprisingly well, as illustrated in Figure 4: the graph depicts the performance of systems running the acoustic model at 1/2 and 1/4 the frame rate compared to frame-synchronous models of the same complexity. The approach is not novel: it has been used in GMM/HMM systems to trade off performance against speed, including in more sophisticated variable-rate schemes [7, 8]. It is nonetheless interesting to note how well it performs in the context of a DNN on a very large task, and increasingly so as the acoustic model and training data get larger. Note that because we copy the predictions for frame t to frame t + 1 (or to t + 1, t + 2, t + 3 respectively), the decoder still runs at the same 10 ms frame rate in all cases. Based on the envelope of the resulting curves, it appears to always be a better tradeoff to oversize the network by a factor of 2 and only compute acoustic scores every 2 frames. Computing acoustic scores every 4 frames did yield a better operating point on English, but not on Portuguese. Can we do even better?
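Under the same assumptions as the earlier sketch, the frame-asynchronous variant evaluates the network only on every k-th input window and holds (copies) the resulting posteriors, so the decoder still receives one score vector per 10 ms frame:

    def async_posteriors(stacked, hidden_layers, out_W, out_b, k=2):
        # Frame-asynchronous sketch: run the DNN on every k-th window and
        # copy its posteriors to the k - 1 following frames, cutting the
        # number of network evaluations by a factor of k.
        T = stacked.shape[0]
        post = dnn_posteriors(stacked[::k], hidden_layers, out_W, out_b)
        return np.repeat(post, k, axis=0)[:T]

With k = 2 or 4 this reproduces the 1/2 and 1/4 frame-rate operating points of Figure 4.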

Fig. 4. Error rate against complexity of the neural network for US English (top) and Iberian Portuguese (bottom). Frame synchronous model complexity is controlled by doubling the network size between data points. Frame asynchronous models have a fixed size (320, 448 or 640 nodes per layer), but are computed every 1, 2 or 4 frames, resulting in complexities equivalent to 1, 1/2 or 1/4 of the frame synchronous model complexity.

Fig. 5. Multiframe approach, in the case of an acoustic model running at half the frame rate of the feature stream. The neural network is trained to jointly issue a prediction for multiple consecutive frames.

4. MULTIFRAME PREDICTION

Since the last layer of a DNN can be computed on demand at decoding time and scores can be batched [3], there are fewer efficiency gains to be obtained from running the final layer of the DNN at a lower frame rate. This suggests that training a DNN which shares all its hidden parameters, but uses frame-synchronous output layers, might be a good tradeoff. The resulting architecture is depicted in Figure 5: the DNN has the same topology as our baseline system, but in addition to a softmax regression layer that predicts the frame label at time t, it also has output layers trained jointly for the labels at t − 1 down to t − K + 1. In our experiments, we predict frames in the past (t − 1, . . . , t − K + 1) rather than future frames (t + 1, . . . , t + K − 1), because the reference frame has a much longer context window in the past (20 frames) than in the future (5 frames), and hence past frames are provided a more balanced context than future frames would be. Note that this means that the effective input window for these predictions is [t − 20 + K, t + 5 + K] instead of [t − 20, t + 5]. If K is large, this will have an impact on the overall latency of the system. In practice here, we will look at a worst-case delay increase of 30 ms.

Training such DNNs can be performed by backpropagating the errors from all output layers jointly through the network, bearing in mind that, due to the increased gradient magnitudes, the overall learning rate might have to be reduced. One interesting implementation point is that, for the decoder to operate in a frame-synchronous manner, it needs to be presented with the predictions in chronological order: first the prediction for time t − K + 1, then t − K + 2, and so on up to time t.

Table 1 compares the performance of the asynchronous and multiframe prediction approaches. The complexity comparison overlooks the fact that one system has more effective parameters in its output layers than the other; the cost of these extra parameters is in practice a small increment over the cost of the rest of the model, and is very much implementation- and task-dependent. The outcome is consistent with expectations: using a frame-synchronous output layer improves performance. What is somewhat surprising is that the performance of the resulting system seems to consistently be as good as the baseline system. The small performance gains over the frame-synchronous baseline for some of the Portuguese experiments were not found to be statistically significant. It is possible that joint multiframe training can help regularize the training in the presence of noisy alignments, but so far the evidence of any such effect is inconclusive. In any case, this demonstrates that the multiframe prediction architecture can compete with frame-synchronous systems using far fewer parameters.
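Continuing the earlier sketches, a minimal multiframe forward pass might look as follows: the shared hidden layers run once every K frames, K separate softmax heads predict the labels at offsets 0 through K − 1 into the past, and the outputs are interleaved back into chronological order for the frame-synchronous decoder. The head layout and the assumption that the utterance length is a multiple of K are illustrative simplifications:

    def multiframe_posteriors(stacked, hidden_layers, heads, k=2):
        # Multiframe sketch: shared hidden layers run at 1/k the frame
        # rate; for an evaluation at time t, head j predicts the label of
        # frame t - j. Assumes len(stacked) is a multiple of k.
        T = stacked.shape[0]
        h = stacked[k - 1::k]  # evaluate at t = k-1, 2k-1, ...
        for W, b in hidden_layers:
            h = sigmoid(h @ W + b)
        num_states = heads[0][0].shape[1]
        out = np.empty((T, num_states))
        for j, (W, b) in enumerate(heads):
            # head 0 predicts frame t, head 1 frame t-1, etc.; interleave
            # so the decoder sees one posterior per frame in time order.
            out[k - 1 - j::k] = softmax(h @ W + b)
        return out

During training, the cross-entropy losses of the K heads would be summed and backpropagated jointly through the shared hidden layers, which is why the overall learning rate may need to be reduced as noted above.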

Table 1. Word error rates (%) for neural networks trained as multiframe predictors. The multiframe acoustic model's hidden activations are computed every 2 or 4 frames, resulting in complexities approximately equivalent to 1/2 to 1/4 of the frame synchronous model complexity.

                              Nodes / layer   K = 1 (baseline)   K = 2   K = 4
US English    Frame async.    640             12.8               12.9    13.3
              Multiframe      640             12.8               12.9    13.3
              Frame async.    448             13.7               13.8    14.3
              Multiframe      448             13.7               13.7    13.9
              Frame async.    320             14.9               15.0    15.5
              Multiframe      320             14.9               14.9    15.0
Iberian       Frame async.    640             22.3               22.4    22.8
Portuguese    Multiframe      640             22.3               22.3    22.4
              Frame async.    448             22.5               22.6    23.0
              Multiframe      448             22.5               22.4    22.2
              Frame async.    320             22.9               23.0    23.3
              Multiframe      320             22.9               22.6    23.0

5. DECODING SPEED

We evaluated the performance of the approach on a server-based recognizer running a 7-layer, 2000 nodes/layer US English system and a large-vocabulary language model. The system implements the multiframe architecture, but for the purpose of benchmarking, the same output layer was used for each time step: for a system of that size trained on a large amount of data, the performance gain from training distinct output layers is negligible. A system which jointly predicts 2 frames at a time achieved a 10% improvement in the query processing rate at no cost in accuracy or median latency, compared to an equivalent frame synchronous system. A system which jointly predicts 4 frames achieved a further 10% improvement in the query processing rate at the cost of a 0.4% absolute increase in word error rate. Both multiframe systems also exhibit much better tail latency characteristics.

6. ACKNOWLEDGEMENTS

The authors would like to thank Rajat Monga for his invaluable help.

7. CONCLUSION

This paper presents a novel approach to training DNNs for hybrid systems which compares advantageously to the standard approach in terms of decoding complexity at equivalent accuracy. The method shares hidden layers across multiple output frames, making it possible to run the inference at a lower frame rate than the decoder while maintaining the same performance.

8. REFERENCES

[1] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition,” IEEE Signal Processing Magazine, 2012.

[2] F. Seide, G. Li, and D. Yu, “Conversational speech transcription using context-dependent deep neural networks,” in Interspeech, 2011, pp. 437–440.

[3] V. Vanhoucke, A. Senior, and M. Z. Mao, “Improving the speed of neural networks on CPUs,” in Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.

[4] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Ng, “Large scale distributed deep networks,” in NIPS, 2012.

[5] J. Schalkwyk, D. Beeferman, F. Beaufays, B. Byrne, C. Chelba, M. Cohen, M. Kamvar, and B. Strope, “Google Search by Voice: A case study,” in Visions of Speech: Exploring New Voice Apps in Mobile Environments, Call Centers and Clinics, 2010.

[6] J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol. 12, pp. 2121–2159, 2011.

[7] K. M. Ponting and S. M. Peeling, “The use of variable frame rate analysis in speech recognition,” Computer Speech & Language, vol. 5, no. 2, pp. 169–179, 1991.

[8] Q. Zhu and A. Alwan, “On the use of variable frame rate analysis in speech recognition,” in ICASSP, 2000.
