Extended Partial Distance Elimination and Dynamic Gaussian Selection for Fast Likelihood Computation

Ghazi Bouselmi (1), Jun Cai (1,2)

(1) Speech Group, LORIA-CNRS & INRIA, http://parole.loria.fr/, BP 239, 54600 Vandoeuvre-les-Nancy, France
(2) Dept. of Cognitive Science, Xiamen Univ., 361005 Xiamen, China
[email protected], [email protected]

Abstract

A new fast likelihood computation approach is proposed for HMM-based continuous speech recognition. This approach is an extension of the partial distance elimination (PDE) technique. Like PDE, the extended PDE (EPDE) approach aims at finding the most prominent Gaussian in a GMM for a given observation and approximating the GMM's likelihood with that of the identified Gaussian. EPDE relies on a novel selection criterion in order to achieve greater time efficiency at the cost of a slight degradation in recognition accuracy. This novel criterion has been combined with a dynamic Gaussian selection technique for greater recognition accuracy. Tests on the TIMIT corpus show a satisfying computation time saving of 7.3% at the same error level as PDE. Compared to a baseline, the proposed methods also achieve a significant reduction in the number of computations of 71.5% at the same error level as PDE.

Index Terms: fast likelihood computation, partial distance elimination, Gaussian selection, hidden Markov models, speech recognition.

1. Introduction

Nowadays, most large vocabulary continuous speech recognition (LVCSR) systems rely on continuous density HMMs (CD-HMMs) for the acoustic modelling of speech signals. In such systems, the triphone HMM acoustic models typically comprise 2000 to 6000 model states, each of which is described as a Gaussian mixture model (GMM) with 8 to 64 multidimensional Gaussians. Even more Gaussians can be used in each GMM in order to enhance the recognition accuracy. This large number of acoustic model parameters implies an intensive computational load and a long computation time in the process of likelihood computation for each observation. As stated in [1], state likelihood estimation takes from 30% to 70% of the total recognition time, depending on the task at hand and the complexity of the models. Along with language model search, likelihood computation is so time consuming that most LVCSR systems run several times slower than real time.

To speed up the likelihood computation, we propose a new method called "Extended PDE". It uses a novel partial elimination criterion to achieve further time savings beyond PDE [2, 3]. We also propose a combination of this new criterion with a dynamic Gaussian selection (DGS) technique [4] to minimize the approximation error.

The article is organized as follows. In section 2, two approaches for fast likelihood computation are reviewed, namely the nearest neighbour approximation (PDE) and VQ-based Gaussian selection. In section 3, we present our new PDE criterion for fast likelihood computation, called "Extended PDE". The combination of this new criterion with a dynamic Gaussian selection technique is presented in section 4. Section 5 reports a performance analysis of the new methods, along with recognition experiments on the TIMIT corpus. Finally, the article ends with a discussion and a brief conclusion.

2. Background

Originally, Chang et al. [2] developed the "partial distance elimination" (PDE) technique to speed up vector quantisation (VQ) by eliminating unnecessary computations. In VQ, the goal is to classify a vector x of dimension N into the codeword c̃ satisfying equation 1. In the case of the Euclidean distance, equation 1 can be expressed as in equation 2.

\tilde{c} = \arg\min_{c \in C} \{ D(x, c) \}    (1)

\tilde{c} = \arg\min_{c \in C} \{ \sum_{j=1}^{N} (x_j - c_j)^2 \}    (2)

where C = {c_1..c_H} is the set of codewords, D a distortion measure, and x_j (resp. c_j) is the j-th element of the vector x (resp. c). If we define the partial distortion at rank k between a vector x and a codeword c as in equation 3, we obtain the relations of equations 4 and 5.

D_k(x, c) = \sum_{j=1}^{k} (x_j - c_j)^2    (3)

D_k(x, c) = D_{k-1}(x, c) + (x_k - c_k)^2, \quad \forall\, 1 < k \le N    (4)

D(x, c) = D_N(x, c)    (5)

One can easily see that the partial distortion is monotonically increasing over the dimensions of the vectors. The main idea of the PDE approach is to exploit this monotonicity of the distortion used in the VQ. Knowing the minimal distortion D̃ between a vector x and the codewords {c_1..c_s}, part of the computation of the distortions between x and the remaining codewords {c_{s+1}..c_H} can be truncated. Indeed, for a sub-optimal codeword ĉ ∈ {c_{s+1}..c_H} (verifying D(x, ĉ) > D̃), there exists a rank k̂ where D_{k̂}(x, ĉ) > D̃. When this condition is met at a rank k̂, one can predict that D(x, ĉ) > D̃ and that ĉ is not the closest codeword to x. Thus, the computation of the partial distortions D_{k̂+1}(x, ĉ)..D_N(x, ĉ) can be discarded. A computation time reduction is achieved without any approximation error on the minimal distortion or on the classification of the vector x.
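To make the truncation rule concrete, here is a minimal Python sketch of PDE for codeword search (our own illustration, not code from the paper): a candidate codeword is abandoned as soon as its partial distortion exceeds the best full distortion found so far.

def pde_nearest_codeword(x, codewords):
    """Nearest-codeword search with partial distance elimination (PDE).

    x         : observation vector (sequence of N floats)
    codewords : iterable of codeword vectors, each of dimension N
    Returns (index_of_closest_codeword, minimal_distortion).
    """
    best_idx, best_dist = -1, float("inf")
    for i, c in enumerate(codewords):
        dist = 0.0
        for k in range(len(x)):
            dist += (x[k] - c[k]) ** 2    # partial distortion D_k(x, c), eq. (3)-(4)
            if dist > best_dist:          # partial distortion already exceeds the best
                break                     # full distortion: c cannot be the closest
        else:                             # loop ran to the end: D_N(x, c) is a new minimum
            best_idx, best_dist = i, dist
    return best_idx, best_dist

# Hypothetical usage:
# idx, d = pde_nearest_codeword([0.3, -1.2], [[0.0, -1.0], [2.0, 3.0], [0.4, -1.1]])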

2.1. PDE in likelihood computation

For CD-HMM based continuous speech recognition systems, the probability density of each HMM state S is modeled as a GMM, as in equation 6. In the latter equation, d is the number of Gaussians in the GMM, ℵ(µ_i, Σ_i) are Gaussian distributions, and ω_i are weights with \sum_{i=1}^{d} \omega_i = 1. The emission probability of an observation x by an HMM state S can be expressed as in equation 7, where p(x|ℵ(µ_i, Σ_i)) is the emission probability of the observation x by the Gaussian ℵ(µ_i, Σ_i), expressed as in equation 8. For diagonal covariance matrices Σ_i, the log-likelihood of the emission of x by a Gaussian ℵ(µ_i, Σ_i) can be expressed as in equation 9. It can be seen in equation 9 that the emission log-likelihood of a Gaussian is a weighted distortion measure between x and the mean of the Gaussian, where the weights are the variances of the Gaussian.

\sum_{i=1}^{d} \omega_i\, \aleph(\mu_i, \Sigma_i)    (6)

p(x|S) = \sum_{i=1}^{d} \omega_i\, p(x \mid \aleph(\mu_i, \Sigma_i))    (7)

p(x \mid \aleph(\mu_i, \Sigma_i)) = \frac{1}{(2\pi)^{N/2} |\Sigma_i|^{1/2}}\, e^{-\frac{1}{2}(x-\mu_i)^T \Sigma_i^{-1} (x-\mu_i)}    (8)

\ell = \log p(x \mid \aleph(\mu_i, \Sigma_i)) = Z_i - \frac{1}{2} \sum_{k=1}^{N} \frac{(x_k - \mu_{ik})^2}{\Sigma_{ik}}    (9)

where Z_i = \log\big( \frac{1}{(2\pi)^{N/2} |\Sigma_i|^{1/2}} \big) is a constant for the Gaussian i, and x_k (resp. µ_{ik}, Σ_{ik}) is the k-th element of x (resp. µ_i, Σ_i).

On the other hand, for a given observation x, only a few Gaussians make a prominent contribution to the emission probability of a GMM. Our experiments, summarized in Table 1, show that the closest Gaussian to an observation x contributes on average 85.9% of the total probability of the whole GMM (1). This observation led to approximating the emission likelihood of a GMM S by the likelihood of the Gaussian closest to the observation under consideration. The log-likelihood of the emission of an observation x by a GMM S is then expressed as in equation 10.

\ell(x|S) \approx \max_{i=1..d} \Big\{ \log(\omega_i) + Z_i - \frac{1}{2} \sum_{k=1}^{N} \frac{(x_k - \mu_{ik})^2}{\Sigma_{ik}} \Big\}    (10)

Table 1: Average contributions (in %) of the 10 best Gaussians to the total GMM probability (1).

Rank of Gaussian    1      2      3     4     5     6     7     8     9     10
Contribution       85.89  10.35  2.40  0.77  0.30  0.13  0.07  0.04  0.02  0.01

In the light of such an approximation, the likelihood computation for a GMM becomes a VQ problem where the codewords are the Gaussians and the distortion measure is the log-likelihood of each Gaussian. Thus, likelihood computation (the VQ problem defined in section 2) can benefit from the PDE approach to reduce time consumption. For that matter, Pellom et al. [3] used PDE for fast likelihood computation and obtained a time reduction of 4% compared to a baseline system. For further efficiency, they proposed a best mixture prediction (BMP) approach used along with PDE. Based on the strong correlation between successive speech observations, BMP uses the best Gaussian of the last processed observation to predict the best Gaussian for the current observation. When evaluating the likelihood of a GMM for the current observation, starting the computation with the predicted Gaussian gives a chance of obtaining a high likelihood immediately, so that the likelihood computation for the remaining Gaussians of the mixture can be largely truncated by PDE. We have performed a statistical analysis of the BMP approach on the HIWIRE corpus, as shown in Table 2. It can be seen that in 52.76% of the cases, BMP predicts the best Gaussian for the current observation.

Table 2: Accuracy of best Gaussian prediction with BMP (1).

Rank of BMP         1      2      3     4     5     6     7     8     9     10
Percent of cases   52.76  17.59  8.60  5.07  3.27  2.32  1.67  1.27  1.00  0.79

(1) The rank is based on the contribution of the Gaussian to the whole likelihood of the GMM. Computed for 3-state English monophone models, with 128 Gaussians per state, on ten thousand observation vectors picked from the HIWIRE corpus.
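As a worked illustration of equations 9 and 10 (a sketch with hypothetical parameter names, assuming diagonal covariances stored as per-dimension variances), the constants Z_i can be precomputed once per Gaussian and the GMM log-likelihood approximated by the best single weighted component:

import numpy as np

def precompute_Z(variances):
    """Z_i = log(1 / ((2*pi)^(N/2) * |Sigma_i|^(1/2))) for each diagonal Gaussian (eq. 9)."""
    N = variances.shape[1]
    return -0.5 * (N * np.log(2.0 * np.pi) + np.sum(np.log(variances), axis=1))

def state_loglik_best_gaussian(x, weights, means, variances, Z):
    """Approximate log p(x|S) by its best single Gaussian, as in eq. (10).

    x: (N,) observation; weights: (d,); means, variances: (d, N); Z: (d,).
    Returns (approximate state log-likelihood, index of the best Gaussian).
    """
    dist = 0.5 * np.sum((x - means) ** 2 / variances, axis=1)   # weighted distortion of eq. (9)
    scores = np.log(weights) + Z - dist
    best = int(np.argmax(scores))
    return float(scores[best]), best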

2.2. VQ-based Gaussian selection

Table 1 shows that, on average, 5% of the Gaussians of a GMM contribute more than 99% of the total probability of the GMM. The likelihood computation can therefore be restricted to the Gaussians (in each GMM) that are closest to the observation under consideration. The main idea of VQ-based Gaussian selection techniques is to predict the best Gaussians of each GMM for a given observation [1, 5, 7]. Usually, VQ Gaussian selection is based on a clustering of the acoustic space. Once the training of the acoustic models is complete, a short list of Gaussians is set up for each pair of GMM (HMM model state) and acoustic space cluster. These short lists comprise the L Gaussians closest to the centroid of the cluster according to a certain distance measure. The cardinality L of the lists is a parameter depending on the application and the targeted performance. At recognition time, each incoming speech observation is quantized into one acoustic space cluster, and the likelihood of each GMM is then computed only over the short list of Gaussians corresponding to that pair of GMM and acoustic cluster. Thus, VQ Gaussian selection approaches are essentially two-pass techniques based on an offline acoustic space clustering and precomputed lists of best Gaussians. While VQ-based Gaussian selection techniques may perform better in terms of time reduction and likelihood approximation accuracy, they imply a significant growth in storage and memory requirements for the short lists.
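The short-list scheme can be sketched as follows; the codebook of acoustic-space centroids, the structure of the short lists and the log-sum-exp over the selected components are our own assumptions for illustration, not details taken from [1, 5, 7].

import numpy as np

def vq_selected_state_loglik(x, state_id, centroids, shortlists,
                             weights, means, variances, Z):
    """Log-likelihood of one GMM state restricted to a precomputed short list.

    centroids  : (H, N) VQ codebook partitioning the acoustic space
    shortlists : dict mapping (state_id, cluster_id) -> list of Gaussian indices
    weights, means, variances, Z : GMM parameters of this state (Z as in eq. 9)
    """
    # 1) quantize the observation into one acoustic-space cluster
    cluster_id = int(np.argmin(np.sum((centroids - x) ** 2, axis=1)))
    # 2) evaluate only the L Gaussians of the short list for (state, cluster)
    idx = shortlists[(state_id, cluster_id)]
    dist = 0.5 * np.sum((x - means[idx]) ** 2 / variances[idx], axis=1)
    comp = np.log(weights[idx]) + Z[idx] - dist
    # 3) log-sum-exp over the short list approximates the full state log-likelihood
    m = comp.max()
    return float(m + np.log(np.sum(np.exp(comp - m))))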

3. Extended PDE

As explained in section 2.1, PDE approximates the likelihood of a GMM for an observation x by the likelihood of the best Gaussian, so that the likelihood computation reduces to a VQ problem. Part of the calculation for a sub-optimal Gaussian is truncated whenever the cumulative distortion between that Gaussian and the observation drops below the best distortion obtained so far.

The method we propose in this article follows a scheme similar to the PDE approach plus BMP. Its main guideline is to eliminate the likelihood computation of a sub-optimal Gaussian earlier than the PDE scheme does, based on an approximation in the elimination criterion. During the likelihood computation of a Gaussian G for an observation x, we modify the comparison of the cumulative partial distortion D_k(x|G) with the best distortion obtained so far, D̃ = D(x|G̃). Rather, we propose to compare D_k(x|G) to an earlier partial distortion between G̃ and x: D_{k+l}(x|G̃), where l is a look-ahead rank. For ranks N − l < k ≤ N, D_k(x|G) is compared to the best distortion D̃ = D(x|G̃). With this approximation of the criterion, we aim at halting the computations for sub-optimal Gaussians at an earlier stage. The EPDE algorithm combined with BMP is described below.

Algorithm: Extended PDE
Input:
  x : an N-dimensional observation
  a GMM with d mixtures, Σ_{i=1..d} ω_i ℵ(µ_i, Σ_i)
  B : index of the best mixture for the last observation
  l : a look-ahead parameter
Output:
  D̃ : the likelihood of the best mixture
  B̃ : index of the best mixture for x
Variables:
  D̃_0..D̃_N : table of partial distortions for the best Gaussian so far
  D_0..D_N  : table of partial distortions for the current Gaussian

BEGIN
  D̃_0 ← log(ω_B) + Z_B
  For k = 1 to N Do
    D̃_k ← D̃_{k-1} − (1/2) (x_k − µ_{Bk})² / Σ_{Bk}
  End For
  D̃ ← D̃_N ; B̃ ← B
  For i = 1 to d Do
    If (i = B) Then Skip to next Gaussian
    D_0 ← log(ω_i) + Z_i
    For k = 1 to (N − l) Do
      D_k ← D_{k-1} − (1/2) (x_k − µ_{ik})² / Σ_{ik}
      If (D_k < D̃_{k+l}) Then Skip to next Gaussian
    End For
    For k = (N − l + 1) to N Do
      D_k ← D_{k-1} − (1/2) (x_k − µ_{ik})² / Σ_{ik}
      If (D_k < D̃_N) Then Skip to next Gaussian
    End For
    D̃ ← D_N ; B̃ ← i
    Switch tables: D̃_* ⇔ D_*
  End For
  Return (D̃, B̃)
END

While PDE selects the Gaussian closest to the observation, EPDE does not guarantee this optimal selection. Indeed, there exist situations where EPDE discards the actual best Gaussian: the "real" best Gaussian Ĝ (in a GMM) for an observation x may verify D_k(x|Ĝ) < D_{k+l}(x|G̃) at some partial likelihood computation step k. The constraint of optimality is relaxed in favor of a better computation time reduction. Figure 1 shows the error in Gaussian selection introduced by EPDE as a function of the look-ahead parameter l. We notice that narrower look-ahead values l imply poorer decisions on the best Gaussian. For a value of l = 3, EPDE picks the Gaussian closest to the observation in only 43.2% of the cases. A look-ahead value of l = 13 (resp. 16, 20 and 25) results in a better selection accuracy of 96.7% (resp. 98.7%, 99.7% and 99.97%).

Figure 1: Histogram of the rank of the Gaussian selected by EPDE, for different look-ahead parameters (conditions of footnote 1).
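For readers who prefer executable code to pseudocode, the following Python sketch transcribes the EPDE + BMP algorithm above (our own transcription; parameter names and array layout are assumptions, and the per-dimension term follows eq. 9).

import numpy as np

def epde_best_gaussian(x, weights, means, variances, Z, B, l):
    """Extended PDE with best mixture prediction (BMP).

    x : (N,) observation; weights: (d,); means, variances: (d, N); Z: (d,) as in eq. 9
    B : index of the best mixture for the previous observation (BMP prediction)
    l : look-ahead parameter
    Returns (approximate state log-likelihood, index of the selected Gaussian).
    """
    N, d = x.shape[0], weights.shape[0]

    # Full table of partial log-likelihoods D_0..D_N for the predicted best Gaussian B
    best_table = np.empty(N + 1)
    best_table[0] = np.log(weights[B]) + Z[B]
    best_table[1:] = best_table[0] + np.cumsum(-0.5 * (x - means[B]) ** 2 / variances[B])
    best_ll, best_idx = best_table[N], B

    for i in range(d):
        if i == B:
            continue
        cur = np.empty(N + 1)
        cur[0] = np.log(weights[i]) + Z[i]
        pruned = False
        for k in range(1, N + 1):
            cur[k] = cur[k - 1] - 0.5 * (x[k - 1] - means[i, k - 1]) ** 2 / variances[i, k - 1]
            # EPDE criterion: compare with a look-ahead entry of the best table;
            # for the last l dimensions fall back to the final best value (plain PDE)
            threshold = best_table[k + l] if k + l <= N else best_ll
            if cur[k] < threshold:
                pruned = True
                break
        if not pruned:                    # survived all comparisons: new best Gaussian
            best_ll, best_idx = cur[N], i
            best_table = cur              # switch tables
    return float(best_ll), int(best_idx)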

4. Combined EPDE and DGS

In [4] we presented a dynamic Gaussian selection (DGS) method for fast likelihood computation. This method is based on the PDE framework and aims at minimizing the recognition accuracy degradation introduced by the likelihood approximation of PDE. For each GMM, DGS dynamically selects a list of the Gaussians closest to the observation vector. Contrary to PDE, the likelihood of a GMM is approximated by the sum of the likelihoods of the Gaussians in the dynamically selected list. DGS uses the rank k̂ (see sections 2 and 2.1) at which PDE stops the likelihood computation of a Gaussian G as a hint about the distance between the observation x and G. If the stop rank k̂ is less than a certain threshold γ, the Gaussian G is considered very far from x compared to the other Gaussians of the GMM, namely the best Gaussian found so far. On the other hand, if k̂ is greater than γ, then G is considered close to x and is selected to contribute to the total likelihood of the GMM. In this case, the likelihood computation of G is resumed from rank k̂ + 1, and the full likelihood ℓ(x|G) is calculated and added to the total likelihood of the GMM. The value of γ is closely related to the dimensionality N of the models and should be 70% to 90% of N; it should be chosen according to the targeted accuracy and speed of the recognition. Our experiments in [4] show that DGS introduces less recognition accuracy degradation than PDE at the expense of a slight increase in computation time. For that matter, we have combined DGS and EPDE for faster likelihood computation with minimal accuracy decrease. The procedure of EDGS (DGS + EPDE) is basically the same as described above, except that EPDE is the underlying framework.
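The combination can be sketched as follows (an illustrative simplification with our own naming: the pruning rank of the EPDE criterion decides whether a component is dropped, or kept and added to the state likelihood; for brevity the partial table is computed in full rather than resumed).

import numpy as np

def edgs_state_loglik(x, weights, means, variances, Z, B, l, gamma):
    """EDGS: dynamic Gaussian selection driven by the EPDE pruning rank.

    A Gaussian pruned before rank gamma is discarded; one pruned at rank
    k_hat > gamma is considered close to x and its full log-likelihood is
    added to the state likelihood. Parameters are as in the EPDE sketch
    above; gamma is typically 0.7*N to 0.9*N.
    """
    N, d = x.shape[0], weights.shape[0]
    terms = -0.5 * (x - means) ** 2 / variances        # per-dimension terms of eq. (9)

    def table(i):
        t = np.empty(N + 1)
        t[0] = np.log(weights[i]) + Z[i]
        t[1:] = t[0] + np.cumsum(terms[i])
        return t

    best_table = table(B)                              # BMP: start from predicted mixture
    best_ll = best_table[N]
    selected = [best_ll]                               # log-likelihoods of kept Gaussians

    for i in range(d):
        if i == B:
            continue
        cur = table(i)                                 # full table, for brevity; EPDE would
        stop_rank = N                                  # stop computing at the pruning rank
        for k in range(1, N + 1):
            threshold = best_table[k + l] if k + l <= N else best_ll
            if cur[k] < threshold:
                stop_rank = k
                break
        if stop_rank == N and cur[N] >= best_ll:       # new best Gaussian
            best_ll, best_table = cur[N], cur
            selected.append(cur[N])
        elif stop_rank > gamma:                        # pruned late: keep its contribution (DGS)
            selected.append(cur[N])

    m = max(selected)                                  # log-sum-exp over the selected list
    return float(m + np.log(sum(np.exp(s - m) for s in selected)))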

5. Experiments and discussion

We have tested the EPDE and EDGS methods on the test part of the TIMIT corpus. Since we are mainly interested in the time performance of the methods, the tests we carried out are phonetic-level recognitions. The acoustic models are 3-state HMM monophones trained on TIMIT. We chose a parametrisation of 13 MFCC coefficients (12 + energy) with their first and second derivatives. We investigated the values of the look-ahead l ∈ {3, 5, 7, 10, 13, 16, 20} for EPDE (see section 3). For EDGS, we chose a value of the threshold γ = 35 (see section 4). Table 3 summarises the results of the tests with different values of l. For comparison purposes, this table also comprises the results of the PDE and DGS approaches.

Table 3: Results for phonetic recognition on TIMIT, with TIMIT 3-state monophone models. "Time %" is the ratio of total likelihood computation time of the system vs. the baseline. "Computations %" is the ratio of the number of one-dimensional likelihood computations of the system vs. the baseline.

128 Gaussian mixtures per state:
                 Baseline   PDE     EPDE-3  EPDE-5  EPDE-7  EPDE-10  EPDE-13  EPDE-16  EPDE-20
Accuracy          64.76    64.37    51.80   57.28   60.64   63.24    64.11    64.22    64.34
Time %           100.00    69.62    51.18   53.88   57.18   59.78    62.56    65.03    68.12
Computations %   100.00    29.52     8.88   11.58   14.46   17.83    20.69    22.98    25.52

                 Baseline   DGS     EDGS-3  EDGS-5  EDGS-7  EDGS-10  EDGS-13  EDGS-16  EDGS-20
Accuracy          64.76    64.74    51.84   57.45   60.82   63.52    64.43    64.60    64.68
Time %           100.00    73.20    51.97   55.02   58.09   61.48    64.57    67.18    70.38
Computations %   100.00    29.58     8.88   11.58   14.48   17.85    20.72    23.01    25.55

64 Gaussian mixtures per state:
                 Baseline   PDE     EPDE-3  EPDE-5  EPDE-7  EPDE-10  EPDE-13  EPDE-16  EPDE-20
Accuracy          63.93    63.59    52.64   57.86   60.56   62.56    63.29    63.50    63.56
Time %           100.00    66.53    48.16   50.85   53.63   56.93    59.89    62.49    65.70
Computations %   100.00    33.32    10.39   13.41   16.69   20.53    23.80    26.39    29.19

                 Baseline   DGS     EDGS-3  EDGS-5  EDGS-7  EDGS-10  EDGS-13  EDGS-16  EDGS-20
Accuracy          63.93    63.88    52.72   57.96   60.75   62.81    63.62    63.75    63.84
Time %           100.00    70.51    49.01   52.18   55.41   59.00    62.77    64.96    68.41
Computations %   100.00    33.42    10.39   13.42   16.71   20.57    23.85    26.44    29.24

As can be seen in Table 3, PDE and DGS achieve a likelihood computation time reduction of roughly 27% to 33%. DGS has virtually the same accuracy as the baseline system. PDE is about 5% faster (relative) than DGS, at the expense of about 0.5% relative degradation in recognition accuracy. Both PDE and DGS reduce the number of computations by about 70.5% for the 128-mixture models (and 66.5% for the 64-mixture models). As expected, a narrower look-ahead value l results in a faster likelihood calculation and a greater reduction in the computation count, but smaller values of l also deteriorate the recognition accuracy. Comparing EPDE to PDE, with 128 (resp. 64) mixtures per state, a value of l = 20 decreases the recognition time by about 2.2% relative (resp. 1.3%) and reduces the number of computations by 13% relative (resp. 12.5%), with virtually the same recognition accuracy for both PDE and EPDE. We observe the same tendencies when comparing EDGS to DGS: with 128 (resp. 64) mixtures, a value of l = 20 decreases the recognition time by about 3.8% relative (resp. 3%) and reduces the number of computations by 13.6% relative (resp. 12.5%), with virtually the same recognition accuracy. Furthermore, for a value of l = 13, EDGS tested on the 128-mixture models (resp. 64) performs 7.3% (resp. 5.7%) faster than PDE and requires 29.8% (resp. 28.5%) fewer computations, with virtually the same recognition accuracy.

6. Conclusions

In this article, we have presented a new approach for fast likelihood computation based on a scheme similar to PDE. This extended PDE (EPDE) approach relaxes the constraint of choosing the Gaussian of the GMM closest to the observation vector, in favor of faster processing. The speed-up is achieved by comparing the partial distortion (between a Gaussian and an observation) to a partial distortion of the best Gaussian, rather than to the final distortion of the latter. Results show that EPDE is faster and requires fewer computations than PDE, for the same error rate. We have also combined EPDE with a dynamic Gaussian selection technique (DGS). At the same error rate, the combined DGS and EPDE is 7.3% faster than PDE and requires 29.8% fewer computations. This makes it suitable for time-consuming speech recognition applications. In particular, applications with limited resources or where memory access is highly penalizing, such as mobile platforms, could benefit from these approaches.

7. References

[1] M. J. F. Gales, K. M. Knill, and S. J. Young, "State-Based Gaussian Selection in Large Vocabulary Continuous Speech Recognition Using HMM's", IEEE Trans. on Speech and Audio Processing, Vol. 7, No. 2, pp. 152-161, March 1999.
[2] C.-D. Bei and R. M. Gray, "An Improvement of the Minimum Distortion Encoding Algorithm for Vector Quantization", IEEE Transactions on Communications, Vol. 33, Issue 10, pp. 1132-1133, October 1985.
[3] B. L. Pellom, R. Sarikaya, and J. H. L. Hansen, "Fast Likelihood Computation Techniques in Nearest-Neighbor Based Search for Continuous Speech Recognition", IEEE Signal Processing Letters, Vol. 8, No. 8, pp. 221-224, August 2001.
[4] J. Cai and G. Bouselmi, "Dynamic Gaussian Selection Technique for Speeding Up HMM-Based Continuous Speech Recognition", Proc. of ICASSP'08, March 2008.
[5] E. Bocchieri, "Vector Quantization for the Efficient Computation of Continuous Density Likelihood", Proc. of ICASSP'93, Vol. 2, pp. 692-695, April 1993.
[6] X. Huang, A. Acero, and H.-W. Hon, "Spoken Language Processing: A Guide to Theory, Algorithm and System Development", Prentice Hall PTR, New Jersey, April 2001.
[7] J. Fritsch and I. Rogina, "The Bucket Box Intersection (BBI) Algorithm for Fast Approximative Evaluation of Diagonal Mixture Gaussians", Proc. of ICASSP'96, Vol. 2, pp. 837-840, May 1996.
