Spectral Intersections for Non-Stationary Signal Separation

Trausti Kristjansson¹, Thad Hughes²

¹School of Science and Engineering, University of Reykjavik, Reykjavik, Iceland
²Google Inc., Mountain View, CA, USA
[email protected], [email protected]

Abstract

We describe a new method for non-stationary noise suppression that is simple to implement yet rivals the performance of far more complex algorithms. Spectral Intersections is a model-based MMSE signal separation method that uses a new approximation to the observation likelihood. Furthermore, Spectral Intersections uses an efficient approximation to the expectation integral of the MMSE estimate that could be described as unscented importance sampling. We apply the new method to the task of separating speech mixed with music. We report results on the Google Voice Search task, where the new method provides a 7% relative reduction in WER at 10 dB SNR. Interestingly, the new method provides a considerably greater reduction in average WER than the Max method and approaches the performance of the more complex Algonquin algorithm.

Index Terms: Speech Recognition, Noise Robustness, Noise Suppression, Spectral Subtraction, Algonquin

1. Introduction

With the rapid growth of speech recognition for mobile applications, there is a greater need for robustness to non-stationary interference such as music. Traditional methods such as Spectral Subtraction [1] and Ephraim-Malah [2] are effective for suppressing stationary noise, but are ill suited for non-stationary noise. Model-based methods such as Max and Algonquin can perform well [3] for very non-stationary noise such as music, but are complex. A good overview of the state of the art in model-based methods is provided by Hershey et al. [4]. Spectral Intersections belongs to the family of model-based Minimum Mean Squared Error (MMSE) algorithms such as Wiener, Max [5] and Algonquin [6][7]. It offers good performance while being easy to implement, and is based on a new and interesting approximation.

(This work was performed while Trausti Kristjansson was at Google Research.)

1.1. Background and Overview

In this work, we model the signals in the log spectrum domain. Starting with the time domain signal x[t], we first compute the short time Fourier transform X(f) of sequential segments of the signal, and then the log spectrum x = log(|X(f)|). We use Gaussian Mixture Models (GMMs) for the component signals in the log spectrum domain:

    p(x) = ∑_i p_i(x) = ∑_i π_i N(x; µ_i, Σ_i),   (1)

where π_i is the mixture weight, µ_i is the mixture mean, and Σ_i is the covariance matrix for mixture i.

The observed signal y_obs is an acoustic mixture of the target signal x_1 and interference signal x_2. The MMSE estimate for the separated target signal x_1 is:

    x̂_1 = E[x_1 | y_obs] = z ∫ x_1 · p(y_obs, x_1, x_2) dx_1 dx_2
                          = z ∑_{i,j} ∫ x_1 · p(y_obs | x_1, x_2) p_i(x_1) p_j(x_2) dx_1 dx_2,   (2)

where p(y_obs | x_1, x_2) is the observation likelihood, p(x_1) is the target prior model, p(x_2) is the interference prior model, and z = p(y_obs)⁻¹. In this paper we will refer interchangeably to the speech model as the target model and to the music model as the interference model.

In Section 2 we explain the method for approximating the observation likelihood by taking only the constructive and destructive combinations into account. In Section 3 we explain a lightweight approximation to the expectation integral required for the MMSE estimate. In Section 4 we report results on the real-world task of recognizing Voice Search utterances mixed with music, and in Section 5 we summarize and contrast the new method with the Algonquin and Max methods.
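As an illustration of the front end described above, the following is a minimal sketch of log-spectral feature extraction. The frame length, hop, and FFT size are assumptions chosen to match the 25 ms frames at 10 ms intervals for 16 kHz audio described in Section 4.2; the paper's exact windowing and FFT size are not stated.

```python
import numpy as np

def log_spectrum_frames(x, frame_len=400, hop=160, n_fft=512):
    """Compute log-magnitude spectral frames of a 16 kHz signal x.

    frame_len=400 and hop=160 correspond to 25 ms frames with a 10 ms
    step at 16 kHz (Section 4.2); n_fft=512 is an assumption.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectra = np.fft.rfft(frames, n=n_fft, axis=1)
    # Small floor avoids log(0) in silent frames.
    return np.log(np.maximum(np.abs(spectra), 1e-10))
```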

2. Signal Mixing Model

The model for mixed speech and music in the time domain is

    y[t] = x_1[t] + x_2[t],   (3)

Figure 1: The plot shows the distribution for r = cos(θ), i.e. p(r) = [π√(1 − r²)]⁻¹. Notice the peaks at either end of the interval (−1, 1).

where x_1[t] denotes the target speech signal at time t, x_2[t] denotes the interference music signal and y[t] denotes the mixed signal. In the Fourier domain, the relationship becomes

    Y(f) = X_1(f) + X_2(f),   (4)

where f designates the frequency component of the FFT. This can also be written in terms of the magnitude and the phase of each component:

    |Y(f)| ∠Y(f) = |X_1(f)| ∠X_1(f) + |X_2(f)| ∠X_2(f),   (5)

where |Y(f)| is the magnitude of Y(f) and ∠Y(f) is its phase, and similarly for X_1 and X_2.

2.1. Constructive and Destructive Mixing

By the law of cosines, the relationship between the components is (dropping the dependence on the frequency f)

    |Y|² = |X_1|² + |X_2|² + 2|X_1||X_2| cos(θ),   (6)

where θ is the angle between X_1 and X_2. The target and interference signals are independent, and it follows that θ can be treated as a uniformly distributed random variable. The distribution for r = cos(θ) is p(r) = [π√(1 − r²)]⁻¹, which has sharp peaks at either end of the interval (−1, 1), as shown in Figure 1. These correspond to the cases where the component signals are exactly in phase or exactly out of phase, i.e. θ = 0 or θ = π: the constructive and destructive combinations, respectively. This suggests that we can reasonably approximate this distribution by considering only the constructive and destructive combinations.

If the acoustic mixing is exactly constructive, Equation (5) becomes

    |Y| = |X_1| + |X_2|,   (7)

and if the acoustic mixing is exactly destructive, we have two cases:

    |Y| = |X_1| − |X_2|   if |X_1| > |X_2|,   (8)
    |Y| = |X_2| − |X_1|   if |X_2| > |X_1|.   (9)
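The endpoint behavior of p(r) is easy to check numerically. The following sketch (not from the paper) samples θ uniformly and histograms r = cos(θ); the empirical density is largest in the outermost bins, mirroring the peaks of p(r) at ±1 that motivate the constructive/destructive approximation.

```python
import numpy as np

# Sample theta uniformly and histogram r = cos(theta): the empirical
# density piles up in the outermost bins, matching the peaks of
# p(r) = [pi * sqrt(1 - r^2)]^{-1} at r = -1 and r = +1.
rng = np.random.default_rng(0)
r = np.cos(rng.uniform(0.0, np.pi, size=1_000_000))
density, edges = np.histogram(r, bins=20, range=(-1.0, 1.0), density=True)
for lo, d in zip(edges[:-1], density):
    print(f"[{lo:+.2f}, {lo + 0.1:+.2f}): {d:.2f}")
```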


Figure 2: Given Y, the plot shows the constructive (blue) |Y| = |X_1| + |X_2| line and the destructive (green) |Y| = |X_1| − |X_2| and |Y| = |X_2| − |X_1| lines. Also shown are the contours of the prior distribution for a particular mixture combination i, j with mode [µ_1,i, µ_2,j] in the log spectrum domain, corresponding to (exp(µ_1,i), exp(µ_2,j)) in the magnitude spectrum domain.

These equations describe valid combinations of |X_1| and |X_2|. In Figure 2 the constructive combinations are shown as the blue downward-sloping line and the destructive combinations are shown as the green lines.

2.2. Constructive and Destructive Intersections

Recall that we use Gaussian mixture models for the prior models for the target signal p(x_1) and the interference signal p(x_2). Consider a particular mixture combination i, j, where i is the index of the target mixture and j is the index of the interference mixture. The joint distribution for this combination, p_i,j(x_1, x_2), is a product of two Gaussian distributions with mode [µ_1,i, µ_2,j] in the log spectrum domain. In the magnitude spectrum domain, we denote this coordinate as (exp(µ_1,i), exp(µ_2,j)).

Consider a line from the origin (0, 0) through the coordinate (exp(µ_1,i), exp(µ_2,j)). This line is shown in Figure 2 as the red line. Now consider the intersections of this line with the lines for constructive and destructive mixing. We call the intersection points the constructive intersection and the destructive intersection.

In the magnitude spectrum domain, the coordinate of the constructive intersection point is (|X^con_1,i,j|, |X^con_2,i,j|), where

    |X^con_1,i,j| = |Y| / (1 + exp(µ_2,j)/exp(µ_1,i)),   (10)
    |X^con_2,i,j| = |Y| / (1 + exp(µ_1,i)/exp(µ_2,j)),   (11)

and the coordinate of the destructive intersection point is (|X^des_1,i,j|, |X^des_2,i,j|), where

    |X^des_1,i,j| = |Y| / (1 − exp(µ_2,j)/exp(µ_1,i)),   (12)
    |X^des_2,i,j| = |Y| / (−1 + exp(µ_1,i)/exp(µ_2,j)),   (13)

for the case |X_1| > |X_2|, and similarly with signs reversed for the case |X_2| > |X_1|. In the log spectrum domain the constructive intersection is

    (x^con_1,i,j, x^con_2,i,j) = (log(|X^con_1,i,j|), log(|X^con_2,i,j|)),   (14)

and similarly for the destructive intersection.
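Equations (10)-(14) reduce to a few arithmetic operations per mixture pair. The following is a minimal sketch for a single frequency bin; the function name and signature are illustrative, not from the paper.

```python
import numpy as np

def intersections(y_mag, mu1, mu2):
    """Intersection points (Equations 10-14) for one frequency bin.

    y_mag is the observed magnitude |Y|; mu1 and mu2 are log-spectral
    means of one target and one interference mixture component. Returns
    the constructive and destructive intersections in the log domain.
    """
    m1, m2 = np.exp(mu1), np.exp(mu2)
    # Constructive intersection (Equations 10 and 11).
    x1_con = y_mag / (1.0 + m2 / m1)
    x2_con = y_mag / (1.0 + m1 / m2)
    # Destructive intersection (Equations 12 and 13) for |X1| > |X2|;
    # signs are reversed when |X2| > |X1|. The point is undefined when
    # m1 == m2 (the prior line is parallel to the destructive line).
    if m1 > m2:
        x1_des = y_mag / (1.0 - m2 / m1)
        x2_des = y_mag / (m1 / m2 - 1.0)
    else:
        x1_des = y_mag / (m2 / m1 - 1.0)
        x2_des = y_mag / (1.0 - m1 / m2)
    # Log-spectrum coordinates (Equation 14).
    return ((np.log(x1_con), np.log(x2_con)),
            (np.log(x1_des), np.log(x2_des)))
```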

3. Unscented MMSE

Since we use Gaussian mixture models for the prior models, the exact evaluation of the MMSE estimate requires that we evaluate an integral for each combination i, j in Equation (2). Instead of computing the full integral over x_1 and x_2, we use an approximation that can be seen as an extremely sparse importance sampling [8] method for finding the expectation in Equation (2). Instead of using random samples drawn from a proposal distribution, we use only two samples: the samples corresponding to the constructive and destructive intersections. The spectral intersection estimate for the target signal is

    x̂_1 = E[x_1 | y_obs]
        ≈ Z · ∑_{i,j} π_i π_j · { x^con_1,i,j · N(x^con_1,i,j; µ_1,i, Σ_1,i) N(x^con_2,i,j; µ_2,j, Σ_2,j)
                                + x^des_1,i,j · N(x^des_1,i,j; µ_1,i, Σ_1,i) N(x^des_2,i,j; µ_2,j, Σ_2,j) },   (15)

where Z is a normalizing factor:

    Z⁻¹ = ∑_{i,j,k} π_i π_j N(x^k_1,i,j; µ_1,i, Σ_1,i) N(x^k_2,i,j; µ_2,j, Σ_2,j),   k ∈ {con, des}.

Notice that this amounts to little more than evaluating the prior distribution at the constructive and destructive intersections. The way in which we chose the sample points is reminiscent of the unscented approximation [9]. Hence, this method could be called unscented importance sampling.
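A sketch of Equation (15) follows, using the `intersections` helper from the earlier sketch. Treating each frequency bin independently with scalar Gaussians is a simplification of the paper's multivariate GMMs, and all names here are illustrative.

```python
import numpy as np

def gauss(x, mu, var):
    """Scalar Gaussian density (a diagonal-covariance simplification)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def spectral_intersection_estimate(y_mag, w1, means1, vars1, w2, means2, vars2):
    """Per-bin Spectral Intersections MMSE estimate (Equation 15).

    w1/means1/vars1: mixture weights, means, and variances of the target
    model for this bin; w2/means2/vars2: likewise for the interference
    model. Evaluates the priors at the two intersection samples only.
    """
    num, norm = 0.0, 0.0
    for i in range(len(w1)):
        for j in range(len(w2)):
            pts = intersections(y_mag, means1[i], means2[j])
            for x1, x2 in pts:  # constructive sample, then destructive
                lik = w1[i] * w2[j] * gauss(x1, means1[i], vars1[i]) \
                                    * gauss(x2, means2[j], vars2[j])
                num += lik * x1
                norm += lik
    return num / max(norm, 1e-300)  # guard against underflow in Z
```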

4. ASR Experiments

We evaluated the algorithms on real utterances from the Google Voice Search system. Since the utterances are generally near-field and of high SNR, we artificially added music to the data to produce noise conditions with varying SNR. We evaluated using our state-of-the-art ASR system. A more detailed explanation of the system, and a discussion of the influence of the size of the music model for Max and Algonquin, can be found in [3].

4.1. Dataset characteristics

The dataset consists of approximately 38,000 manually transcribed utterances containing 38 hours of anonymized English-language spoken queries to Google Voice Search. The utterances were spoken by 296 different speakers, and range in length from 0.2 to 12.3 seconds, with a mean of 3.6 seconds. The utterances were recorded and stored in 16-bit, 16 kHz uncompressed format. The dataset contains a varying amount of speech for each speaker, and hence the amount of training data for each speech model differs.

4.2. Training speech models

To train the speech model for each speaker, the data was segmented into low-noise training data and higher-noise test data. We compute 256-dimensional log-spectral feature vectors for each of the speaker's utterances, using 25 ms frames spaced at 10 ms intervals. Since this is real data, much of the cleaner speech data contains low-level non-stationary background noise, such as TV noise. If speech models are trained directly on this data, the majority of the model components are allocated to modeling this background. To circumvent this problem, we use a percentile-based VAD to separate the low-noise speech into speech frames and non-speech frames (a sketch of this procedure appears after Section 4.4). From the speech frames, we estimate a GMM with at most 200 components, subject to the constraint that there are at least 20 frames per Gaussian component. From the non-speech frames we estimate a smaller 20-component GMM. These two models are then combined to form a clean-speech GMM.

4.3. Training noise models

At least 30% of the data for each speaker is held out as test data. For each utterance in the test set, we select a random song from a database of 500 popular songs and mix it with the utterance at the desired SNR. Noise models are trained on the music directly preceding the speech; to this end we include 8 seconds of musical prologue before the onset of speech in the utterance. We then compute the same 256-dimensional log-spectral feature vectors used to create the speech model, and use the feature frames from the prologue to construct an 8-mixture noise GMM per utterance.

4.4. Experimental Setup

The SNR of the utterances for each speaker is first computed. Based on the SNR, they are divided into training and testing sets, where the least-noisy 70% of the data is used for training and the remaining 30% is used for testing.
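The following sketch makes the model-training recipe of Section 4.2 concrete. The energy-percentile VAD rule, the threshold value, and the use of sklearn's GaussianMixture are assumptions; the paper only states that the VAD is percentile based.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(frames, energy_percentile=40,
                         max_mix=200, min_frames_per_mix=20):
    """Sketch of the per-speaker model training in Section 4.2.

    frames: (n_frames, 256) log-spectral features from a speaker's
    low-noise utterances.
    """
    energy = frames.sum(axis=1)
    threshold = np.percentile(energy, energy_percentile)
    speech = frames[energy > threshold]
    nonspeech = frames[energy <= threshold]
    # At most 200 components, at least 20 frames per component.
    n_mix = max(1, min(max_mix, len(speech) // min_frames_per_mix))
    speech_gmm = GaussianMixture(n_mix, covariance_type='diag').fit(speech)
    # Smaller model for the residual background (non-speech) frames.
    noise_gmm = GaussianMixture(min(20, max(1, len(nonspeech))),
                                covariance_type='diag').fit(nonspeech)
    # The paper then combines the two GMMs into one clean-speech model
    # (e.g., by pooling components with re-normalized weights).
    return speech_gmm, noise_gmm
```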

                            Max     SI    Algon.
    Average WER reduction   2.7    4.0    4.8
    WER reduction at 10 dB  4.6    7.6    7.0

Table 1: Average reduction in WER (%) for all conditions, 10 dB - 20 dB.


Figure 3: Comparison of WER results for the Max, Spectral Intersections and Algonquin algorithms over a range of SNRs.

4.5. Signal separation and evaluation

We then apply the Max, Algonquin and Spectral Intersections noise reduction techniques using the per-speaker speech model constructed from the speaker's training data and the per-utterance noise model constructed from each utterance's prologue. The resulting cleaned feature frame sequence is then re-synthesized as a waveform using the overlap-add algorithm (see the sketch at the end of this subsection) and sent to the speech recognizer to test the denoising quality.

All speech recognition was performed with a recent version of Google's Voice Search speech recognizer. This system uses an acoustic model with approximately 8000 context-dependent states and approximately 330k Gaussians, and is trained with LDA, STC, and MMI. The Voice Search language model used for recognition contains more than one million English words. We did not retrain the acoustic models on denoised data, which would be expected to give better results; however, the relative performance of the respective methods is expected to remain the same.
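A minimal sketch of the re-synthesis step follows. Using the noisy mixture's phase with the estimated magnitudes is a common choice but an assumption here, as the paper does not state its phase handling; frame parameters match the earlier feature-extraction sketch.

```python
import numpy as np

def overlap_add_resynthesis(clean_log_frames, noisy_stft,
                            frame_len=400, hop=160, n_fft=512):
    """Re-synthesize a waveform from cleaned log-magnitude frames.

    Combines the estimated magnitudes with the noisy mixture's phase
    and reconstructs the time-domain signal with weighted overlap-add.
    """
    spectra = np.exp(clean_log_frames) * np.exp(1j * np.angle(noisy_stft))
    frames = np.fft.irfft(spectra, n=n_fft, axis=1)[:, :frame_len]
    window = np.hanning(frame_len)
    out = np.zeros(hop * (len(frames) - 1) + frame_len)
    wsum = np.zeros_like(out)
    for k, frame in enumerate(frames):
        sl = slice(k * hop, k * hop + frame_len)
        out[sl] += frame * window   # synthesis window
        wsum[sl] += window ** 2     # analysis-times-synthesis weight
    return out / np.maximum(wsum, 1e-8)
```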

4.6. Results

The average performance across noise conditions is a 4% relative reduction in error rate, which approaches the performance of Algonquin while substantially outperforming the Max method. As can be seen from Table 1 and Figure 3, the Spectral Intersections method follows the trend of Algonquin but provides slightly less gain in all conditions except the noisiest 10 dB condition, where it outperforms Algonquin. It also exceeds Max in almost all conditions.

5. Discussion

We have presented a new method for non-stationary noise suppression. The method is easy to implement but has performance that rivals far more complex methods. The computational complexity of the new method is the same as that of Max or Algonquin: it is dominated by the cross product of the number of mixtures in the target and interference models, i.e. O(I · J), where I is the number of mixtures in the target model and J is the number of mixtures in the interference model. However, Algonquin requires Newton iterations of Laplace's method involving a matrix inverse, and Max requires computing the log of the cumulative normal distribution, which demands careful attention to numerical stability. In contrast, the new method involves only simple line intersections and the evaluation of standard GMM distributions, which are well understood and optimized.

6. References

[1] S. Boll, "Suppression of acoustic noise in speech using spectral subtraction," IEEE Transactions on Acoustics, Speech and Signal Processing, 1979.
[2] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean square error log-spectral amplitude estimator," IEEE Trans. on Acoust., Speech, Signal Processing, vol. ASSP-33, 1985, pp. 443-445.
[3] T. Hughes and T. Kristjansson, "Music models for music-speech separation," in Proceedings of ICASSP, 2012.
[4] J. R. Hershey, S. J. Rennie, P. A. Olsen, and T. T. Kristjansson, "Super-human multi-talker speech recognition: A graphical modeling approach," Computer Speech and Language, 2009.
[5] A. Nadas, D. Nahamoo, and M. Picheny, "Speech recognition using noise-adaptive prototypes," in Proceedings of ICASSP, 1989, pp. 1495-1503.
[6] B. Frey, L. Deng, A. Acero, and T. Kristjansson, "Algonquin: Iterating Laplace's method to remove multiple types of acoustic distortion for robust speech recognition," in Proceedings of Eurospeech, 2001.
[7] J. Droppo, L. Deng, and A. Acero, "A comparison of three non-linear observation models for noisy speech features," in Proceedings of Eurospeech, 2003.
[8] M. Ferrari and S. Bellini, "Importance sampling simulation of turbo product codes," in ICC 2001, The IEEE International Conference on Communications, 2001, pp. 2773-2777.
[9] S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte, "A new approach for filtering nonlinear systems," in Proceedings of the American Control Conference, 1995, pp. 1628-1632.
