INTERSPEECH 2007

Word Confusability - Measuring Hidden Markov Model Similarity

Jia-Yu Chen (1), Peder A. Olsen (2), John R. Hershey (2)

(1) Department of Electrical Engineering, Stanford University
(2) IBM T. J. Watson Research Center
[email protected], {pederao,jrhershe}@us.ibm.com

Abstract

We address the problem of word confusability in speech recognition by measuring the similarity between Hidden Markov Models (HMMs) using a number of recently developed techniques. The focus is on defining a word confusability measure that is accurate, in the sense of predicting artificial speech recognition errors, and computationally efficient when applied to speech recognition applications. It is shown, using the edit distance framework for HMMs, that we can use statistical information measures of distances between probability distribution functions to define similarity or distance measures between HMMs. We use the correlation between errors made by a real speech recognizer and the HMM similarities to measure how well each technique works. We demonstrate significant improvements relative to traditional phone-confusion-weighted edit distance measures by use of a Bhattacharyya-divergence-based edit distance.

Index Terms: Bayes error, Bhattacharyya divergence, variational methods, Gaussian mixture models, unscented transformation, Kullback–Leibler distance rate.

1. Introduction

The problem of mathematically formulating similarity and distance(1) measures between two HMMs has captured the imagination of scientists since the publication of Juang and Rabiner's paper in 1985 [1]. The two HMMs considered may differ in topology and transition probabilities, as well as in observation distributions. The Kullback–Leibler distance cannot be used directly because it is infinite for certain pairs of non-ergodic HMMs whose topologies differ. To surmount this problem, Juang and Rabiner defined the Kullback–Leibler Distance Rate (KLDR) as a measure of similarity between ergodic HMMs, and they proceeded to show how to extend the KLDR to non-ergodic (e.g., left-to-right) HMMs such as those that occur in speech recognizers. The KLDR has three caveats: it is computationally expensive; it is not a good predictor of classification error; and handling non-ergodic HMMs requires looping them, which is unrealistic. Many other authors have defined distance measures to compensate for these shortcomings [2, 3, 4, 5, 6]. In this paper we define some new measures inspired by [7] and [8].

Distance measures between HMMs have been used in areas such as speech recognition, texture image classification, handwriting recognition and machine learning. In speech recognition, HMM distances have been applied to such tasks as vocabulary selection, grammar design, phoneme clustering, measuring language model perplexity, locating occurrences of out-of-vocabulary words in an indexed audio database, matching acoustic tags, and pronunciation variation analysis.

This paper discusses the use of distance measures to predict the confusability of two words. Section 2 defines the edit distance and applies it to HMMs. Section 3 shows how to compute distances between GMMs and uses these distances as weights in the weighted edit distance computation for HMMs. Finally, Section 4 experimentally compares the distance measures.

(1) The term "distance" is used loosely here and should not be interpreted as a mathematical distance.

2. Edit distance and HMM distances

A word is modeled using an HMM derived from the pronunciation of the word. A word such as call may have the pronunciation K AO L, and a word such as dial the corresponding pronunciation D AY AX L. We shall use these two words to exemplify various word confusability measures throughout this paper. Figure 1 shows the HMMs for dial and call. The phonemes are modeled using three-state HMMs, which we have depicted using only one state, for simplicity.

Figure 1: HMMs for call with pronunciation K AO L, and dial with pronunciation D AY AX L.

The simplest way to measure word confusability is to count the number of corrections, insertions and deletions required to turn one pronunciation into the other. This measure is commonly known as the edit distance, or Levenshtein distance [9]. For the two example words, the number of edits required is three, as seen in Table 1.

    call   dial   edit operation   cost
    K      D      correction       1
           AY     insertion        1
    AO     AX     correction       1
    L      L      no operation     0
                  total cost       3

Table 1: Edit distance between call and dial.

Computing the edit distance requires finding the minimum number of edits. This can be done by finding the shortest path in the edit graph shown in Fig. 2. The edit distance was originally introduced to do approximate string matching; here we use it in the same way.

Figure 2: The edit graph to turn the pronunciation of call into that of dial. The dashed line outlines a path that attains the edit distance. Horizontal lines correspond to insertions, vertical lines to deletions, and diagonal lines to corrections.
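To make the computation concrete, the following minimal Python sketch (ours, not from the paper) implements the standard dynamic program for the unweighted edit distance; it reproduces the cost of three for call versus dial.

    def edit_distance(src, dst):
        """Minimum number of substitutions, insertions and deletions
        needed to turn the phoneme sequence src into dst."""
        n, m = len(src), len(dst)
        # d[i][j] = edit distance between src[:i] and dst[:j]
        d = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            d[i][0] = i                      # delete all of src[:i]
        for j in range(m + 1):
            d[0][j] = j                      # insert all of dst[:j]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = 0 if src[i - 1] == dst[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + sub)  # correction / no operation
        return d[n][m]

    print(edit_distance("K AO L".split(), "D AY AX L".split()))  # -> 3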

One natural extension is to put weights on the edges in the edit graph. We will use a constant insertion and deletion weight corresponding to the vertical and horizontal edges. For the diagonal correction weights we use -log P(Φ1|Φ2), where Φ1 and Φ2 are phonemes in the pronunciations of call and dial. The product of the probabilities along a path roughly corresponds to the probability of misrecognizing dial as call. Taking the logarithm and reversing the sign makes the most likely path correspond to the shortest path in the edit graph, where the weights are added instead of multiplied. Figure 3 shows the graph corresponding to the weighted edit distance. We will refer to the edit distance with weights -log P(Φ1|Φ2) as a phoneme-based edit distance.

Figure 3: A weighted edit distance between call and dial. The weights are the negative logs of the probabilities in the edit graph.
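As an illustration of the phoneme-based weighted edit distance, the sketch below replaces the unit costs with -log P(Φ1|Φ2) substitution weights and a constant insertion/deletion weight. The confusion probabilities and the insertion/deletion weight are made-up placeholders, not values from the paper.

    import math

    def weighted_edit_distance(src, dst, conf_prob, ins_del_weight=4.0):
        """Shortest path through the edit graph with correction weights
        -log P(phi1 | phi2) and a constant insertion/deletion weight."""
        n, m = len(src), len(dst)
        INF = float("inf")
        d = [[INF] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(n + 1):
            for j in range(m + 1):
                if i < n:                       # deletion (vertical edge)
                    d[i + 1][j] = min(d[i + 1][j], d[i][j] + ins_del_weight)
                if j < m:                       # insertion (horizontal edge)
                    d[i][j + 1] = min(d[i][j + 1], d[i][j] + ins_del_weight)
                if i < n and j < m:             # correction (diagonal edge)
                    p = conf_prob.get((src[i], dst[j]), 1e-6)
                    d[i + 1][j + 1] = min(d[i + 1][j + 1], d[i][j] - math.log(p))
        return d[n][m]

    # Hypothetical phoneme confusion probabilities P(phi1 | phi2), illustration only.
    conf_prob = {("K", "D"): 0.05, ("AO", "AY"): 0.10, ("AO", "AX"): 0.08,
                 ("L", "AX"): 0.02, ("L", "L"): 0.95}
    print(weighted_edit_distance("K AO L".split(), "D AY AX L".split(), conf_prob))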

The phoneme-based distance is purely a function of the pronunciation and does not vary with changes in the acoustic context or the underlying HMM topology for the word. There is another form of edit graph, and a corresponding edit distance, that considers the HMM topology of the two words. The first word is the generating word, which synthesizes the acoustic signal, and the second word is the acceptor word, or the recognized word. For our example words, call is the generator and dial is the acceptor. We define finite state transducers (FSTs) corresponding to the generator and acceptor HMMs, as seen in Fig. 4.

Figure 4: The finite state transducers for dial and call. ε is used as a symbol for the null string.

The Cartesian product of the HMMs is the composition of the generator and acceptor, which can be seen in Fig. 5. The resulting HMM composition is the state-based edit graph. It differs from the earlier phoneme-based edit graph in two respects. First, it has a number of self-loops that the original edit graph did not have. For computing the shortest (Viterbi) path, the self-loops only add to the overall cost of the path, and so can be ignored. Second, the horizontal and vertical edges are no longer insertions or deletions, but are instead modeled as substitution errors. This is an improvement, as we have no simple method to accurately estimate the insertion and deletion weights in a systematic way. In the new state-based edit graph there are only substitution weights, which we can compute from the underlying pair of GMMs.

Figure 5: The composition of the FSTs for dial and call.
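A minimal sketch of one reading of the state-based edit graph (ours, not the authors' code): arcs in the composition of Fig. 5 are scored by the pair of states they enter, and the function weight(s, t) below is a placeholder for a divergence between the GMMs of generator state s and acceptor state t, such as the Bhattacharyya-based weights of Section 3.

    def state_based_edit_distance(gen_states, acc_states, weight):
        """Shortest (Viterbi) path through the composition of a generator and an
        acceptor left-to-right HMM.  Every arc entering the state pair (s, t) is
        scored by a substitution weight weight(s, t).  Self-loops are ignored,
        since with positive weights the shortest path never takes them."""
        n, m = len(gen_states), len(acc_states)
        INF = float("inf")
        d = [[INF] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                w = weight(gen_states[i], acc_states[j])
                if i == 0 and j == 0:
                    d[i][j] = w
                    continue
                best = INF
                if i > 0:
                    best = min(best, d[i - 1][j])          # generator advances
                if j > 0:
                    best = min(best, d[i][j - 1])          # acceptor advances
                if i > 0 and j > 0:
                    best = min(best, d[i - 1][j - 1])      # both advance
                d[i][j] = best + w
        return d[n - 1][m - 1]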

3. Distances between GMMs

Ultimately we desire to find the classification error, or Bayes error, incurred when an acoustic sample drawn from one word has a larger likelihood under the other word. Assuming equal priors on the two word distributions, the Bayes error is defined as

    B_e(f, g) = \frac{1}{2} \int \min\{f(x), g(x)\} \, dx.    (1)

For HMMs the Bayes error is difficult to estimate accurately, but for a pair of GMMs the computational difficulty can be overcome. In the previous section we reduced the problem to coming up with weights that are functions of the GMM pairs. If we think of the state-based edit graph as approximating the likelihood of decoding the acceptor word when given acoustics from the generator word, it is natural to use the Kullback–Leibler divergence

    D(f \| g) = \int f(x) \log \frac{f(x)}{g(x)} \, dx.    (2)

If we wanted to mix the classification approach with the state-dependent edit graph approach, we could simply use -\log B_e(f, g) as weights. It is also possible to use other divergence measures as weights. In particular we are interested in using the Bhattacharyya measure

    B(f, g) = \int \sqrt{f(x) g(x)} \, dx,    (3)

or, more specifically, -\log B(f, g) as weights. (Here the Bhattacharyya measure is twice the Bhattacharyya error bound, which includes the priors.) The Bayes error, the Bhattacharyya measure and the KL divergence cannot be computed analytically for a GMM pair; we have to resort to Monte Carlo sampling to approximate these quantities. The Bhattacharyya measure and the KL divergence can, however, be computed analytically for a pair of Gaussians, which makes it possible to come up with some reasonable analytical approximations to these quantities [10, 11]. By sampling from the distribution f, we get the following Monte Carlo approximations for the three quantities:

    MC_{Bayes}(f, g)  = \frac{1}{2n} \sum_{i=1}^{n} \frac{\min(f(x_i), g(x_i))}{f(x_i)},    (4)

    MC_{KL}(f, g)     = \frac{1}{n} \sum_{i=1}^{n} \log \frac{f(x_i)}{g(x_i)},    (5)

    MC_{Bhatt}(f, g)  = \frac{1}{n} \sum_{i=1}^{n} \sqrt{\frac{g(x_i)}{f(x_i)}},    (6)

where \{x_i\}_{i=1}^{n} are samples from the distribution f. We will assume that the GMMs f and g have marginal densities that can be written

    f(x) = \sum_a \pi_a N(x; \mu_a, \Sigma_a),  \quad  g(x) = \sum_b \omega_b N(x; \mu_b, \Sigma_b).    (7)
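For concreteness, here is a small Python sketch (ours, using numpy and scipy) of the three Monte Carlo estimators (4)-(6) for a pair of diagonal-covariance GMMs; the mixture parameters are illustrative only.

    import numpy as np
    from scipy.stats import multivariate_normal

    class GMM:
        """Diagonal-covariance Gaussian mixture with weights w, means mu, variances var."""
        def __init__(self, w, mu, var):
            self.w, self.mu, self.var = np.asarray(w), np.asarray(mu), np.asarray(var)

        def pdf(self, x):
            return sum(wa * multivariate_normal.pdf(x, mean=ma, cov=np.diag(va))
                       for wa, ma, va in zip(self.w, self.mu, self.var))

        def sample(self, n, rng):
            comps = rng.choice(len(self.w), size=n, p=self.w)
            return np.array([rng.normal(self.mu[c], np.sqrt(self.var[c])) for c in comps])

    def mc_estimates(f, g, n=10000, seed=0):
        """Monte Carlo estimates (4)-(6): Bayes error, KL divergence and
        Bhattacharyya measure, sampling from f."""
        rng = np.random.default_rng(seed)
        x = f.sample(n, rng)
        fx = np.array([f.pdf(xi) for xi in x])
        gx = np.array([g.pdf(xi) for xi in x])
        bayes = 0.5 * np.mean(np.minimum(fx, gx) / fx)      # eq. (4)
        kl = np.mean(np.log(fx / gx))                        # eq. (5)
        bhatt = np.mean(np.sqrt(gx / fx))                    # eq. (6)
        return bayes, kl, bhatt

    # Two toy two-dimensional GMMs; the parameters are made up for illustration.
    f = GMM([0.6, 0.4], [[0.0, 0.0], [2.0, 1.0]], [[1.0, 1.0], [0.5, 0.5]])
    g = GMM([0.5, 0.5], [[0.5, 0.0], [2.5, 1.5]], [[1.0, 1.0], [0.5, 0.5]])
    print(mc_estimates(f, g))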

For the Kullback–Leibler divergence we have the following variational approximation [10]:

    D_{var}(f \| g) = \sum_a \pi_a \log \frac{\sum_{a'} \pi_{a'} \exp(-D(f_a \| f_{a'}))}{\sum_b \omega_b \exp(-D(f_a \| g_b))}.    (8)

For the Bhattacharyya divergence we have the variational approximation

    B_{var}(f, g) = \sum_{ab} \sqrt{\phi_{b|a} \psi_{a|b} \pi_a \omega_b} \, B(f_a, g_b),    (9)

where \phi and \psi satisfy the constraints \sum_b \phi_{b|a} = \sum_a \psi_{a|b} = 1 and are the result of iterating the equations

    \phi_{b|a} = \frac{\psi_{a|b} \omega_b B(f_a, g_b)^2}{\sum_{b'} \psi_{a|b'} \omega_{b'} B(f_a, g_{b'})^2}    (10)

and

    \psi_{a|b} = \frac{\phi_{b|a} \pi_a B(f_a, g_b)^2}{\sum_{a'} \phi_{b|a'} \pi_{a'} B(f_{a'}, g_b)^2}    (11)

until convergence. The details can be found in [11]. Additionally, we provide an accelerated Monte Carlo method for estimating the Bhattacharyya measure. By drawing samples \{x_i\}_{i=1}^{n} from the distribution

    h(x) = \frac{\sum_{ab} \sqrt{\phi_{b|a} \psi_{a|b} \pi_a \omega_b f_a(x) g_b(x)}}{\sum_{ab} \int \sqrt{\phi_{b|a} \psi_{a|b} \pi_a \omega_b f_a(x) g_b(x)} \, dx},    (12)

we have the variational importance sampling (VISa) estimate

    VIS_{Bhatt}(f, g) = \frac{1}{n} \sum_{i=1}^{n} \frac{\sqrt{f(x_i) g(x_i)}}{h(x_i)}.    (13)
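A sketch, under our own assumptions, of the fixed-point iteration (10)-(11) and the resulting variational estimate (9). It reuses the toy GMM container from the previous sketch (any object exposing weights w, means mu and variances var will do) and the standard closed-form Bhattacharyya measure between diagonal Gaussians.

    import numpy as np

    def gauss_bhatt(mu1, var1, mu2, var2):
        """Closed-form Bhattacharyya measure between two diagonal Gaussians."""
        mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
        var = 0.5 * (var1 + var2)
        dist = 0.125 * np.sum((mu1 - mu2) ** 2 / var) \
             + 0.5 * np.sum(np.log(var) - 0.5 * (np.log(var1) + np.log(var2)))
        return np.exp(-dist)

    def variational_bhatt(f, g, iters=20):
        """Variational Bhattacharyya estimate (9) with the iterations (10)-(11)."""
        B = np.array([[gauss_bhatt(ma, va, mb, vb)
                       for mb, vb in zip(g.mu, g.var)]
                      for ma, va in zip(f.mu, f.var)])       # B[a, b] = B(f_a, g_b)
        A, Bn = len(f.w), len(g.w)
        phi = np.full((A, Bn), 1.0 / Bn)    # phi[a, b] = phi_{b|a}, rows sum to 1
        psi = np.full((A, Bn), 1.0 / A)     # psi[a, b] = psi_{a|b}, columns sum to 1
        for _ in range(iters):
            phi = psi * g.w[None, :] * B ** 2                # eq. (10), unnormalized
            phi /= phi.sum(axis=1, keepdims=True)
            psi = phi * f.w[:, None] * B ** 2                # eq. (11), unnormalized
            psi /= psi.sum(axis=0, keepdims=True)
        return np.sum(np.sqrt(phi * psi * f.w[:, None] * g.w[None, :]) * B)  # eq. (9)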

3.1. Loopy estimates

We defined the Kullback–Leibler divergence and the Bhattacharyya measure for GMMs in the previous paragraphs. We can similarly define these for probability density functions (pdfs) on sequences. In particular, the sequence pdfs F and G defined by the single-state HMMs in Fig. 6 and Fig. 7 are of special interest, as their KL divergence and Bhattacharyya measure can be computed analytically.

Figure 6: A single-state HMM with output distribution f and state transition probabilities p and \bar{p} = 1 - p.

Figure 7: A single-state HMM with output distribution g and state transition probabilities q and \bar{q} = 1 - q.

The pdfs F and G are implicitly defined on sequences x = (x_1, ..., x_k) of length k = 1 or greater. For the pdf F, the probability of obtaining a sequence of length k is p(k) = \bar{p} p^{k-1}, and the probability of the specific sequence x = (x_1, ..., x_k) is

    F(x) = p(k) \prod_{i=1}^{k} f(x_i).    (14)

Similarly, for G the probability of obtaining a sequence of length k is q(k) = \bar{q} q^{k-1}, and the likelihood of the specific sequence is

    G(x) = q(k) \prod_{i=1}^{k} g(x_i).    (15)

The divergence between F and G can be derived using (2) as follows:

    D(F \| G) = \sum_{k=1}^{\infty} \int F(x) \log \frac{F(x)}{G(x)} \prod_{i=1}^{k} dx_i
              = \sum_{k=1}^{\infty} \int p(k) \prod_{i=1}^{k} f(x_i) \log \frac{p(k) \prod_{j=1}^{k} f(x_j)}{q(k) \prod_{j=1}^{k} g(x_j)} \prod_{i=1}^{k} dx_i
              = \sum_{k=1}^{\infty} p(k) \log \frac{p(k)}{q(k)} + \sum_{k=1}^{\infty} k \, p(k) \, D(f \| g)
              = D(p \| q) + D(f \| g) / \bar{p}.

A similar computation for the Bhattacharyya measure yields the equation

    B(F, G) = \frac{\sqrt{\bar{p}\bar{q}} \, B(f, g)}{1 - \sqrt{pq} \, B(f, g)}.    (16)
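The loop corrections themselves are one-liners. A sketch, assuming kl_fg and b_fg are per-frame GMM estimates from the methods above and p, q the self-loop probabilities read off the acoustic model; the closed form for D(p||q) below is our own expansion of the KL divergence between the two geometric length distributions.

    import math

    def kl_loopy(kl_fg, p, q):
        # D(F||G) = D(p||q) + D(f||g)/(1 - p), with p(k) = (1-p) p^(k-1), q(k) = (1-q) q^(k-1).
        d_pq = math.log((1 - p) / (1 - q)) + (p / (1 - p)) * math.log(p / q)
        return d_pq + kl_fg / (1 - p)

    def bhatt_loopy(b_fg, p, q):
        # Equation (16): Bhattacharyya measure between the two single-state loopy HMMs.
        return math.sqrt((1 - p) * (1 - q)) * b_fg / (1 - math.sqrt(p * q) * b_fg)

    # Example with hypothetical per-frame scores and self-loop probabilities.
    print(kl_loopy(kl_fg=0.8, p=0.6, q=0.5), bhatt_loopy(b_fg=0.7, p=0.6, q=0.5))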

4. Experiments


To measure how well each method predicts recognition errors we used a test suite consisting of short words. For this we chose a spelling task, for which there were 38,921 spelling words (a-z) in the test suite, with an average word error rate of about 19.3%. A total of 7,500 spelling errors were detected. Given the errors, we estimated the probability P(w1|w2) = C(w1,w2)/C(w2) of recognizing word w1 when word w2 was spoken. We discarded cases where the error count was low, the total count was low, or the probability was 1. For the remaining errors, we compared the various methods, as seen in Figure 8. The figure shows that adding the KL divergence loop estimate to account for the self-loop transition is uniformly better. The Bhattacharyya loop estimate gave a small gain, but not as much as the KL divergence loop estimate. The best method was the Bhattacharyya VISa estimate with the KL divergence loop estimate. Figure 9 shows a scatter plot of the Bhattacharyya VISa score for each pair of letters versus the empirical measurement. Note that similar-sounding combinations of letters appear in the lower left (e.g. "c·z"), and dissimilar combinations appear in the upper right (e.g. "a·p").
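A sketch of the evaluation protocol as we read it: estimate the empirical negative log error rate from confusion counts and report the squared correlation against each divergence score. The count threshold and variable names are placeholders, not the values used in the paper.

    import numpy as np

    def neg_log_error_rate(confusions, totals, min_count=5):
        """confusions[(w1, w2)] = C(w1, w2), totals[w2] = C(w2).
        Returns -log P(w1|w2) for pairs that survive the count thresholds."""
        rates = {}
        for (w1, w2), c in confusions.items():
            if w1 == w2 or c < min_count or totals[w2] < min_count:
                continue
            p = c / totals[w2]
            if p < 1.0:
                rates[(w1, w2)] = -np.log(p)
        return rates

    def squared_correlation(rates, scores):
        """Squared correlation between empirical -log error rates and a divergence score."""
        pairs = sorted(set(rates) & set(scores))
        x = np.array([scores[p] for p in pairs])
        y = np.array([rates[p] for p in pairs])
        return np.corrcoef(x, y)[0, 1] ** 2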

Figure 8: Squared correlation coefficient between the empirical negative log error rate and each of the confusability scores: (a) phone edit distance, (b) state edit distance, (c) weighted phone edit distance, (d) KL divergence variational, (e) KL divergence importance sampling (1M samples), (f) Bhattacharyya variational, and (g) Bhattacharyya VISa (1M samples), each with and without the self-loop correction. The squared correlation represents the percent of the empirical variance that is explained by each of the scores.

Figure 9: The negative log error rate for all spelling word pairs compared to the Bhattacharyya score with transition probabilities.

5. Conclusion

We have shown in this paper how we can apply the edit distance framework to HMMs and use GMM-based divergence or distance measures to define an HMM-based divergence score that correlates well with the type of errors the speech recognizer makes. Overall, the best measure used the Bhattacharyya divergence together with the KL-based self-loop transition. This system significantly outperforms the standard weighted phone edit distance. The parameters can be estimated directly from the acoustic model, so there is no need for any training data.

6. References

[1] B.-H. Juang and L. R. Rabiner, "A probabilistic distance measure for hidden Markov models," AT&T Technical Journal, vol. 64, no. 2, pp. 391–408, February 1985.
[2] Ling Chen and Hong Man, "Fast schemes for computing similarities between Gaussian HMMs and their applications in texture image classification," EURASIP Journal on Applied Signal Processing, vol. 13, pp. 1984–1993, 2005.
[3] Matti Vihola, Mikko Harju, Petri Salmela, Janne Suontausta, and Janne Savela, "Two dissimilarity measures for HMMs and their application in phoneme model clustering," in Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Orlando, Florida, May 2002, vol. I, pp. 933–936.
[4] Claus Bahlmann and Hans Burkhardt, "Measuring HMM similarity with the Bayes probability of error and its application to online handwriting recognition," in Sixth International Conference on Document Analysis and Recognition (ICDAR'01), 2001, pp. 406–411.
[5] Maruf Mohammad and W. H. Tranter, "A novel divergence measure for hidden Markov models," in Proceedings of the IEEE SoutheastCon, April 2005, pp. 240–243.
[6] Markus Falkhausen, Herbert Reininger, and Dietrich Wolf, "Calculation of distance measures between hidden Markov models," in Proceedings of Eurospeech 1995, Madrid, 1995, pp. 1487–1490.
[7] Harry Printz and Peder Olsen, "Theory and practice of acoustic confusability," Computer Speech and Language, vol. 16, pp. 131–164, January 2002.
[8] Jorge Silva and Shrikanth Narayanan, "Average divergence distance as a statistical discrimination measure for hidden Markov models," IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 3, pp. 890–906, May 2006.
[9] Vladimir I. Levenshtein, "Binary codes capable of correcting deletions, insertions and reversals," Soviet Physics Doklady, vol. 10, no. 8, pp. 707–710, 1966.
[10] John Hershey and Peder Olsen, "Approximating the Kullback Leibler divergence between Gaussian mixture models," in Proceedings of ICASSP 2007, Honolulu, Hawaii, April 2007, to appear.
[11] Peder Olsen and John Hershey, "Bhattacharyya error and divergence using variational importance sampling," in Proceedings of Interspeech 2007, August 2007.

