CROSS-VALIDATION BASED DECISION TREE CLUSTERING FOR HMM-BASED TTS

Yu Zhang 1,2, Zhi-Jie Yan 1, and Frank K. Soong 1

1 Microsoft Research Asia, Beijing, China
2 Shanghai Jiao Tong University, Shanghai, China
[email protected], {zhijiey,frankkps}@microsoft.com

ABSTRACT

In HMM-based speech synthesis, we usually use complex, context-dependent models to characterize prosodically and linguistically rich speech units. It is therefore difficult to prepare training data which can cover all combinatorial possibilities of the contexts. A common approach to cope with this insufficient training data problem is to build a clustered tree via the MDL criterion. However, an MDL-based tree still tends to be inadequate in its power to predict unseen data. In this paper, we adopt the cross-validation principle to build a decision tree which minimizes the generation error for unseen contexts. An efficient training algorithm is implemented by exploiting sufficient statistics. Experimental results show that the proposed method achieves better speech synthesis results, both objectively and subjectively, than the baseline MDL-based decision tree.

Index Terms— HMM-based speech synthesis, cross validation, context clustering, MDL

* The work was done during the first author's internship at Microsoft Research Asia.

1. INTRODUCTION

The HMM-based approach has been successfully developed and applied to speech synthesis over the past two decades [1]. In this approach, the spectrum, excitation, and duration features are modeled and generated in a unified HMM framework. In building such HMMs, a large number of contextual factors are used to represent the segmental and supra-segmental information of speech (e.g., phone identity, accent, stress, break) as separate models [2]. However, because of the large number of combinatorial possibilities of all contextual factors, it is impossible to obtain enough training data to reliably estimate all full-context models. Therefore, decision tree based model clustering [2, 3] is usually adopted to deal with the data sparseness problem and to predict unseen contexts in synthesis. This method produces more robust parameter estimates and improves their generalization capability.

Conventional decision tree based clustering is a top-down, data-driven training process based on a greedy tree-growing algorithm. The tree growth is governed by two factors: the splitting criterion and the stopping criterion. In HMM-based TTS, the splitting criterion is based on the Maximum Likelihood (ML) principle. Since the likelihood increases monotonically with the number of decision tree leaf nodes, a stopping criterion, e.g., likelihood thresholding or Minimum Description Length (MDL), needs to be used. Although the conventional method provides an effective and efficient way to build decision trees for continuous-density HMMs, it has several disadvantages: 1) the greedy search-based decision tree growing is sensitive to the training set due to interfering, irrelevant attributes or outlier data [4]; affected by a small variation in the training set, the algorithm may choose a split which is not the best one; 2) the likelihood threshold is set empirically and may depend on the task or data set. To alleviate the latter problem, the Minimum Description Length (MDL) criterion [5], which adds a model complexity penalty term, was introduced to balance the monotonically growing likelihood. However, the MDL criterion is based on an asymptotic assumption and is not very effective when the amount of training data is not asymptotically large.

In this paper, cross-validation (CV) is adopted for building the decision tree in HMM-based TTS. Cross-validation is a useful technique for many tasks encountered in machine learning, e.g., accuracy estimation, model selection, and parameter tuning. In previous studies, cross-validation has been successfully applied to speech processing, including Gaussian mixture optimization [6], automatic speech recognition [7], and the tuning of priors [8]. In this study, K-fold cross-validation is applied to decision tree based model clustering of Multi-Space probability Distribution (MSD) HMMs [9]. First, a cross-validation based splitting criterion is proposed to replace the conventional greedy splitting criterion, with the likelihood of each validation set calculated from the corresponding sufficient statistics. Second, because the likelihood of unseen data is calculated with the current model parameters, tree growing can be stopped automatically. Using the proposed splitting and stopping criteria, we are able to build a better decision tree and improve its generalization capability when synthesizing unseen contexts.

The cross-validation based decision tree clustering algorithm was evaluated in our HMM-based TTS system. We compared several objective and subjective measures of the speech synthesized using the conventional method and the cross-validation based method. The experimental results show that, objectively, the CV decision tree yields a better Log Spectral Distance (LSD) and lower root mean square errors of F0 and duration than the conventional decision tree. The speech quality improvement is also confirmed by the subjective preference test results.

The rest of this paper is organized as follows. In Section 2, the splitting and stopping criteria of the conventional MDL-based decision tree are presented. In Section 3, the cross-validation based decision tree for TTS is introduced. In Section 4, we present the experimental results, and in Section 5 we draw our conclusions.

2. MDL-BASED DECISION TREE CLUSTERING

Traditionally, the ML criterion is used as the node splitting criterion for tree growing. The ML criterion for splitting tree nodes is consistent with that used in training the HMM parameters. Let L(S) denote the log likelihood of generating the observation frames at node S.

Fig. 1 shows the tree growing procedure. Suppose that node S_m, with data D^m, is split into two successor nodes, S_{mqy} and S_{mqn}, by a binary (yes/no) question q, e.g., "R-voiced?". The increase of log likelihood obtained by splitting S_m through question q is [2]:

    \delta^{ML}(D^m)_q = L(S_{mqy}) + L(S_{mqn}) - L(S_m)

[Fig. 1. Node splitting of the MDL-based decision tree: node S_m with data D^m is split by a question q (e.g., "R-voiced?") into yes/no children S_{mqy} and S_{mqn}.]

[Fig. 2. Node splitting of the cross-validation based decision tree: the data D^m at node S_m is divided into K folds D^m_1, ..., D^m_K; for each fold k, the model parameters \Lambda^m_k are estimated on D^m \ D^m_k and the split gain \delta(D^m_k)_q is evaluated on the held-out fold.]

The log likelihood increases monotonically with the number of terminal leaves. As a result, a threshold on the likelihood improvement (change) is necessary to terminate the node splitting. The MDL criterion, on the other hand, evaluates the splitting performance according to the description length, which consists of a likelihood term and a penalty term associated with the model complexity. The splitting cost is calculated by the following equation [5]:

    \delta^{MDL}(D^m)_q = \delta^{ML}(D^m)_q - \alpha L \log G    (1)

where G is the total number of data samples, L is the increase in the number of model parameters when splitting one node, and \alpha is a scaling factor which balances the likelihood term against the model complexity. The MDL criterion thus aims at building a tree model which balances data likelihood against model complexity. However, this method has two drawbacks: 1) the splitting criterion may still be sensitive to the training set because of irrelevant attributes or outlier data; 2) the stopping criterion is based on an asymptotic assumption and is equivalent to a likelihood threshold, so in most applications the penalty factor needs to be tuned to obtain an appropriate tree.
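The splitting quantities above can be computed from pooled sufficient statistics rather than from the raw frames. The following sketch is our illustration, not the authors' code: it evaluates the ML gain and the MDL-penalized cost of Eq. (1) for a single-Gaussian, diagonal-covariance node, where the triple (n, s, ss) of frame count, feature sum, and squared-feature sum is an assumed representation of the statistics.

    import numpy as np

    def node_loglik(n, s, ss, var_floor=1e-6):
        # Log likelihood of a node under its ML diagonal Gaussian, from
        # sufficient statistics: n (frame count), s (sum of features, [D]),
        # ss (sum of squared features, [D]). With the ML mean/variance, the
        # data log likelihood reduces to -n/2 * sum_d (log(2*pi*var_d) + 1).
        mean = s / n
        var = np.maximum(ss / n - mean ** 2, var_floor)
        return -0.5 * n * np.sum(np.log(2.0 * np.pi * var) + 1.0)

    def ml_split_gain(stats_yes, stats_no):
        # delta_ML(D^m)_q = L(S_mqy) + L(S_mqn) - L(S_m); each argument is
        # an (n, s, ss) triple, and the parent statistics are their sum.
        n_y, s_y, ss_y = stats_yes
        n_n, s_n, ss_n = stats_no
        parent = (n_y + n_n, s_y + s_n, ss_y + ss_n)
        return (node_loglik(*stats_yes) + node_loglik(*stats_no)
                - node_loglik(*parent))

    def mdl_split_cost(stats_yes, stats_no, alpha, L_inc, G_total):
        # Eq. (1): delta_MDL = delta_ML - alpha * L * log G.
        return ml_split_gain(stats_yes, stats_no) - alpha * L_inc * np.log(G_total)

Because the parent statistics are simply the sum of the children's, every candidate question can be scored without touching the data again.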

3. CROSS-VALIDATION BASED DECISION TREE CLUSTERING

In order to overcome the above-mentioned problems of the traditional MDL-based decision tree, it is desirable to build a decision tree that explicitly minimizes the generalization error and selects the model topology (complexity) automatically. In this study, we use cross validation for both the node splitting criterion and the tree-growing stopping criterion.

3.1. Decision Tree based on Cross Validation

In cross validation, we divide the training data D^m at node S_m into K subsets D^m_i, i = 1, ..., K. Among the K subsets, a single subset D^m_k is reserved as validation data, i.e., to test the model, and the remaining K - 1 subsets, T_k = D^m \ D^m_k (the set of all elements of D^m which are not members of D^m_k), are used as training data. The cross-validation process is then repeated K times (the folds), with each of the K subsets used exactly once as the validation data. Based on this procedure, we can select the question which gives the highest score over all the validation data. The score function is not limited to, but is taken in this study to be, the log likelihood improvement.

3.1.1. Node Splitting Criteria

Fig. 2 shows the node splitting procedure. By assuming that the alignments are fixed during the optimization process, we can evaluate the log likelihood on each validation set as

    L^{CV}_k(D^m_k) = \sum_{x \in D^m_k} \log P(x | \Lambda^m_k)    (2)

where \Lambda^m_k are the model parameters estimated from T_k. The increase of log likelihood obtained by splitting S_m through the yes/no question q is given by

    \delta^{CV}(D^m_k)_q = L^{CV}_k(D^{mqy}_k) + L^{CV}_k(D^{mqn}_k) - L^{CV}_k(D^m_k)    (3)

where D^{mqy}_k = {x | x \in D^m_k, Question(x) = yes} and D^{mqn}_k = {x | x \in D^m_k, Question(x) = no}.
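Under the same assumed (n, s, ss) statistics and single-Gaussian nodes as in the sketch above, Eqs. (2) and (3) are also closed-form for each fold: the model is trained on the pooled statistics of the K - 1 training folds T_k and scored on the held-out fold. The sketch below is ours, not the paper's implementation, and a real system would need to guard against empty folds or empty question answers.

    import numpy as np

    def ml_gaussian(stats, var_floor=1e-6):
        # ML diagonal Gaussian (mean, var) from an (n, s, ss) triple.
        n, s, ss = stats
        mean = s / n
        var = np.maximum(ss / n - mean ** 2, var_floor)
        return mean, var

    def heldout_loglik(stats, mean, var):
        # Eq. (2) in closed form: sum_x log N(x; mean, var) over a held-out
        # fold, using only its sufficient statistics (n, s, ss).
        n, s, ss = stats
        quad = (ss - 2.0 * mean * s + n * mean ** 2) / var
        return -0.5 * np.sum(n * np.log(2.0 * np.pi * var) + quad)

    def cv_split_gain_fold(yes_folds, no_folds, k):
        # Eq. (3) for fold k: the parent and both children are estimated on
        # the pooled statistics of folds != k (i.e., on T_k) and scored on
        # fold k. yes_folds/no_folds are per-fold (n, s, ss) triples for the
        # data answering the question yes or no.
        def pool(folds, skip):
            kept = [f for i, f in enumerate(folds) if i != skip]
            return tuple(sum(parts) for parts in zip(*kept))
        parent_folds = [(y[0] + n[0], y[1] + n[1], y[2] + n[2])
                        for y, n in zip(yes_folds, no_folds)]
        gain = 0.0
        for folds, sign in ((yes_folds, 1.0), (no_folds, 1.0), (parent_folds, -1.0)):
            mean, var = ml_gaussian(pool(folds, k))     # train on T_k
            gain += sign * heldout_loglik(folds[k], mean, var)  # score fold k
        return gain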

With this definition, we select the best question for node splitting according to its likelihood increase over all the validation data:

    q_m = \arg\max_q \bigoplus_k \delta^{CV}(D^m_k)_q    (4)

Note that different definitions of the combination operator \bigoplus can be given, e.g., voting, maximizing, or bagging, and the best question has a different physical interpretation under each definition. In this study, we define \bigoplus = \sum; for this definition, the node splitting criterion acts to reduce the bias.

3.1.2. Stopping Criteria

Because each L^{CV}_k(D^m_k) is calculated on a held-out validation set, the tree splitting can stop automatically when

    \bigoplus_k \delta^{CV}(D^m_k)_{q_m} < 0    (5)

which is similar in form to the splitting criterion. We can also combine it with MDL as

    \bigoplus_k \delta^{CV}(D^m_k)_{q_m} + \alpha L \log G < 0    (6)
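Putting Eqs. (4)-(6) together, a hypothetical per-node driver might look as follows; fold_gain(q, k) stands for any per-fold gain routine (e.g., the cv_split_gain_fold sketch above), and with alpha = 0 the test reduces to the automatic stopping rule of Eq. (5). The defaults for L_inc and G_total are placeholders.

    import numpy as np

    def select_question(questions, fold_gain, K, alpha=0.0, L_inc=2, G_total=1):
        # Eq. (4) with the combination operator taken as a sum: pick the
        # question with the largest summed per-fold CV gain. Return None
        # when the (optionally MDL-penalized) score satisfies the stopping
        # condition of Eq. (5)/(6), i.e., this node should not be split.
        best_q, best_score = None, -np.inf
        for q in questions:
            score = sum(fold_gain(q, k) for k in range(K))
            if score > best_score:
                best_q, best_score = q, score
        if best_score + alpha * L_inc * np.log(G_total) < 0:
            return None
        return best_q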

Eq. (6) can be used to generate decision trees of different sizes. To be consistent with the node splitting criterion, we again define \bigoplus = \sum. In our experiments, we found that this natural stopping criterion gives good results.

4. EXPERIMENTS AND RESULTS

A Chinese speech corpus of 1,000 sentences recorded by a female speaker is used in our experiments. The recordings were sampled at 16 kHz. 40th-order LSP coefficients plus gain, as well as their first- and second-order dynamic features, are extracted and used to train the ML-based, decision tree-tied baseline model. HMMs with a 5-state, left-to-right, no-skip topology and diagonal covariance matrices are used to build all phone models. There are 25,761 distinct rich-context phone models seen in the training corpus. Separate development and test sets, each consisting of 50 sentences, are selected for our experiments. Parametric speech trajectories are synthesized by the conventional decision tree-tied models and by our new CV decision tree. Two synthesis systems based on LSP features are thus built for comparison: the conventional MDL-based decision tree and the cross-validation based decision tree. We first train the model parameters, tuning the MDL penalty factor on the development set, and then compare the two systems both objectively and subjectively.

4.1. Implementation Issues

The cross-validation method needs to access all the data at each node. To avoid revisiting the data and repeating the corresponding computations, we access all the training data once in a preprocessing stage to collect the necessary sufficient statistics. The cross-validation likelihood can then be computed efficiently using the precomputed sufficient statistics [6]. Because of space limitations, a detailed description of the procedure is omitted here.
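Since the paper omits the details of this preprocessing, the following one-pass statistics collector is only a minimal sketch of one possibility; the round-robin fold assignment and the (n, s, ss) triple are our assumptions, not the authors' specification.

    from collections import defaultdict

    def collect_fold_stats(aligned_frames, K):
        # One pass over the state-aligned training data. `aligned_frames`
        # yields (context_id, frame_vector) pairs; the alignment is assumed
        # given and fixed. Returns stats[context_id][k] = [n, s, ss].
        stats = defaultdict(lambda: [[0, 0.0, 0.0] for _ in range(K)])
        for i, (ctx, x) in enumerate(aligned_frames):
            k = i % K  # round-robin fold assignment (an assumption; folds
                       # could equally be formed per utterance)
            st = stats[ctx][k]
            st[0] += 1
            st[1] = st[1] + x          # works for numpy feature vectors
            st[2] = st[2] + x * x
        return stats

The statistics of any tree node and fold are then obtained by summing the triples of the contexts gathered at that node, so no further passes over the data are needed.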

4.2. Objective Test Results

4.2.1. Objective measures

In this paper, we use the following objective measures to estimate the distortion between the generated (gen) and reference (ref) parameters of the spectrum, F0, and duration, respectively. The extracted spectrum and the manually checked F0 are used as the references.

1) Log Spectral Distance:

    D_{LSD} = \sqrt{ \frac{1}{T_{voiced}} \sum_{t=1}^{T_{voiced}} \frac{1}{N_{FFT}} \sum_{i=1}^{N_{FFT}} ( l_{ref}(t,i) - l_{gen}(t,i) )^2 }    (7)

where T_{voiced} is the number of voiced frames, N_{FFT} is the number of frequency points in each frame, and l is the log magnitude spectrum value (in dB).

2) Root mean square error of F0:

    D_{f0} = \sqrt{ \frac{1}{T_{voiced}} \sum_{t=1}^{T_{voiced}} ( f_{ref}(t) - f_{gen}(t) )^2 }    (8)

where f(t) is the fundamental frequency of frame t.

3) Root mean square error between the force-aligned reference and synthesized state durations:

    D_{dur} = \sqrt{ \frac{1}{S} \sum_{s=1}^{S} ( d_{ref}(s) - d_{gen}(s) )^2 }    (9)

where d(s) is the duration, in frames, of state s.

4.2.2. Determining the number of cross-validation folds

In K-fold cross validation, we first need to determine the fold number K. We evaluated several values of K, from 3 to 15, on the development set. The resulting log spectral distortions are given in Table 1. We found that the LSD is not sensitive to the value of K, so we fix K = 10 for the rest of our experiments.

    K          4      6      8      10     14
    LSD (dB)   5.32   5.33   5.32   5.32   5.31

Table 1. The log spectral distortion for different K on the development set

4.2.3. Results

Using MDL-based decision tree splitting with different penalty scaling factors \alpha, we plot the distortion curves of all objective measures on the test set, shown as the diamond curves in Figs. 3-5. In practice, we also need to determine an "operating point" along these curves, which is usually done by tuning \alpha on a development set, or by simply setting \alpha to 1.0. In our experiments, the optimal operating points determined on the development set for the spectrum, F0, and duration models are \alpha_{lsp} = 0.5, \alpha_{f0} = 0.5, and \alpha_{dur} = 0.8, respectively; these operating points are marked in the corresponding figures. From the results, we can see that the \alpha values tuned on the development set yield reasonably good, but still not the best, performance on the test set.

[Fig. 3. Performance comparison of the MDL criterion (MDL) vs. cross-validation (CV) on log spectral distance (dB) as a function of the number of states; the operating points \alpha = 1.0 and \alpha = 0.5 and the automatic CV stopping point are marked.]
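For completeness, the three objective measures of Eqs. (7)-(9) are straightforward to compute. The sketch below assumes per-frame log magnitude spectra in dB, per-frame F0 values with a Boolean voiced mask, and per-state duration vectors in frames; the names are ours.

    import numpy as np

    def lsd_db(l_ref, l_gen, voiced):
        # Eq. (7): log spectral distance over voiced frames.
        # l_ref, l_gen: [T, N_FFT] log magnitude spectra (dB); voiced: [T] bool.
        per_frame = np.mean((l_ref[voiced] - l_gen[voiced]) ** 2, axis=1)
        return np.sqrt(np.mean(per_frame))

    def f0_rmse(f_ref, f_gen, voiced):
        # Eq. (8): RMSE of F0 over voiced frames.
        return np.sqrt(np.mean((f_ref[voiced] - f_gen[voiced]) ** 2))

    def dur_rmse(d_ref, d_gen):
        # Eq. (9): RMSE between force-aligned reference and synthesized
        # state durations.
        return np.sqrt(np.mean((d_ref - d_gen) ** 2))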

[Fig. 4. Performance comparison of the MDL criterion (MDL) vs. cross-validation (CV) on RMSE of F0 (Hz/frame) as a function of the number of states; the operating points \alpha = 1.0 and \alpha = 0.5 and the automatic CV stopping point are marked.]

[Fig. 5. Performance comparison of the MDL criterion (MDL) vs. cross-validation (CV) on RMSE of duration (ms/phone) as a function of the number of states; the operating points \alpha = 1.0 and \alpha = 0.8 and the automatic CV stopping point are marked.]

The distortion curves obtained with the cross-validation based criterion are plotted as the triangle lines in Figs. 3-5. To get similar model sizes, i.e., numbers of model parameters, a threshold is imposed as in Eq. (6). As can be seen from the figures: 1) the cross-validation method always gives better performance when the two systems have a similar number of model parameters; 2) the CV decision tree stops automatically; 3) compared with the spectrum and duration trees, the cross-validation decision tree for F0 has a significantly larger number of terminal leaves than the MDL-based decision tree. This is due to the fact that splitting the unvoiced space in an MSD-HMM can always obtain a marginal likelihood increase. However, since such splitting does not affect the voiced/unvoiced decision in synthesis, it has no significant effect on the final result.

4.3. Subjective Test Results

In the subjective test, we compare the standard MDL-based and the 10-fold cross-validation based decision trees. A separate test set of 50 sentences is used for an AB comparison preference test. Eight subjects are invited to listen to randomized pairs of sentences synthesized by the two methods and to give their preference. The results of the preference test, given in Fig. 6, show that our method achieves better performance.

[Fig. 6. Results of the preference test for the two systems.]

5. CONCLUSIONS AND FUTURE WORK

We propose a training algorithm for building a decision tree which maximizes its prediction capability via cross validation and stops the tree growing automatically for the given data. Experimental results show that, in comparison with MDL training, a cross-validation based decision tree yields better synthesis performance at a similar model size, and it can also find an appropriate model size from the development set. The cross-validation based decision tree construction thus provides a better (more robust) node splitting criterion and an automatic stopping criterion for tree growth. In the future, we will use larger speech databases to verify that the concept of cross validation extends to databases of different sizes and to other languages.

6. REFERENCES

[1] K. Tokuda, T. Yoshimura, T. Masuko, T. Kobayashi, and T. Kitamura, "Speech parameter generation algorithms for HMM-based speech synthesis," in Proc. ICASSP, 2000, pp. 1315-1318.
[2] J. J. Odell, The Use of Context in Large Vocabulary Speech Recognition, Ph.D. thesis, Cambridge University, 1995.
[3] S. J. Young, J. J. Odell, and P. C. Woodland, "Tree-based state tying for high accuracy acoustic modelling," in Proc. ARPA Human Language Technology Workshop, 1994.
[4] L. Rokach and O. Maimon, Data Mining with Decision Trees: Theory and Applications, World Scientific, 2008.
[5] K. Shinoda and T. Watanabe, "Acoustic modeling based on the MDL principle for speech recognition," in Proc. EuroSpeech, 1997, pp. 99-102.
[6] T. Shinozaki, S. Furui, and T. Kawahara, "Aggregated cross-validation and its efficient application to Gaussian mixture optimization," in Proc. Interspeech, 2008, pp. 2382-2385.
[7] I. Rogina, "Automatic architecture design by likelihood-based context clustering with cross-validation," in Proc. Eurospeech, 1997, pp. 1223-1226.
[8] K. Hashimoto, H. Zen, Y. Nankaku, A. Lee, and K. Tokuda, "Bayesian context clustering using cross valid prior distribution for HMM-based speech recognition," in Proc. Interspeech, 2008, pp. 936-939.
[9] K. Tokuda, T. Masuko, N. Miyazaki, and T. Kobayashi, "Multi-space probability distribution HMM," IEICE Trans. Inf. & Syst., vol. E85-D, no. 3, pp. 455-464, 2002.
