LANGUAGE IDENTIFICATION USING A COMBINED ARTICULATORY PROSODY FRAMEWORK

Abhijeet Sangwan, Mahnoosh Mehrabani, John H. L. Hansen

Center for Robust Speech Systems (CRSS), Erik Jonsson School of Engineering, University of Texas at Dallas, Richardson, Texas, U.S.A.

This project was supported in part by USAF under a subcontract to RADC, Inc. under FA8750-09-C-0067.

ABSTRACT

This study presents new advancements in our articulatory-based language identification (LID) system. Our LID system automatically identifies language-features (LFs) from a phonological-features (PFs) based representation of speech. While our baseline system uses a static PF-representation for extracting LFs, the new system is based on a dynamic PF-representation for feature extraction. Interestingly, the new LFs outperform our baseline system by 11.8% absolute in a difficult 5-way classification task of South Indian languages. Additionally, we incorporate pitch- and energy-based features in our new system to leverage prosody in classification. In particular, we employ a Legendre-polynomial based contour estimation to capture shape parameters, which are then used in classification. The fusion of PF- and prosody-based LFs further improves the overall classification result by 16.5% absolute over the baseline system. Finally, the proposed articulatory language ID system is combined with a PPRLM (parallel phone recognition followed by language modeling) system to obtain an overall classification accuracy of 86.6%.

Index Terms: Language Identification, Articulatory Features, Phonological Features, Prosodic Features

1. INTRODUCTION

In this study, we present new advancements in our articulatory framework for language identification (LID). Our LID system is based on extracting articulatory traits or language features (LFs) from a phonological features (PFs) based representation of speech. Here, the PFs are automatically extracted from the speech signal and can be thought of as parallel streams of articulatory knowledge in which the dynamics of various speech articulators are tracked over time. For example, the PF-stream for the PF-type "tongue-height" tracks the vertical component of tongue motion (between low, mid, and high values) over time. The process of extracting LFs then amounts to identifying discriminatory patterns within these PF-streams. This strategy is similar in principle to the extraction of acoustic features such as MFCCs (Mel-frequency cepstral coefficients) from the time-frequency representation of speech: in our LID system, the LFs are analogous to acoustic features, and the space of PF-types and time is analogous to time-frequency. Our PF-based LID system is motivated by the fundamental hypothesis that different languages can be distinguished based on articulatory traits. The justification for this hypothesis is that (i) some articulatory traits may be dominant in one language and rare in another, and (ii) some articulatory traits may be present exclusively in certain languages and absent in others. Interestingly, LFs (language features) or articulatory traits can be extracted from the PF-streams in a number of meaningful ways.
For example, some LFs could focus solely on tongue movements, while others could simultaneously track all articulator values, and yet others might consider long-term or short-term articulatory events in the PF-streams. Each strategy for extracting LFs from a PF-based representation therefore captures unique aspects of articulation. It is reasonable to assume that LFs drawn from different extraction strategies will yield varying classification results, and that combining LFs drawn from several distinct extraction strategies may lead to an increasingly powerful set of discriminatory features for LID. This forms the motivation for this study, in which we examine new ways of extracting LFs from a PF-based representation.

Our previous work focused on extracting LFs from PF-streams separately for consonants, vowels, and consonant-vowel segments. Our new approach extracts LFs from dynamic PF-streams, i.e., it draws LFs that are based on the changes in PF-type values alone. The new technique also combines articulatory changes that occur in the short term within the PF-type/time space (such as the lips becoming rounded while the excitation moves from aspirated to voiced). In this manner, while our earlier technique relied on dominant articulatory positions to distinguish between languages, the new technique leverages dominant articulatory changes for LID. The new technique is highly effective, improving LID classification accuracy by 11.8% absolute over our baseline system [1]. The newly proposed LID system also incorporates knowledge from the prosodic elements of pitch and energy contours. In particular, pitch and energy contour shapes are estimated at a pseudo-syllable level (consonant-vowel clusters) and included in the system as LFs. The combined articulatory-prosody LID system achieves an additional 4.7% absolute gain in classification accuracy. This study also compares the proposed articulatory LID system to a PPRLM system. Our experimental results show that while the PPRLM system outperforms the proposed articulatory LID system, the fusion of the two systems provides the best overall accuracy.

2. PHONOLOGICAL FEATURES BASED LID SYSTEM

Fig. 1. Proposed LID System: (1) Hybrid-Features (HFs) are decoded from speech; (2) consonant (C) and vowel (V) segments are identified from the HFs, and C, V, and CV clusters are constructed; (3) speech is segmented using CV-clusters, and pitch/energy contour shapes are estimated; (4) static-LFs are extracted from the static PF-representation; and (5) dynamic-LFs are extracted from the dynamic PF-representation.

2.1. Baseline LID System

Our baseline LID system utilizes the hybrid-features (HFs, [1]) definition of PFs. As shown in Fig. 1 (1), the LID system first estimates the parallel sequence of HF values from the speech utterance. In particular, the following articulatory characteristics are captured: (i) height-of-tongue (height), (ii) frontness-of-tongue (frontness), (iii) lip-rounding (rounding), (iv) nasalization (nasality), (v) excitation (glottal), (vi) place-of-articulation (place), and (vii) manner-of-articulation (degree). In this manner, each frame of speech assumes a set of 7 articulator values (one per HF), which represents a static snapshot of the overall articulatory state. In this static articulatory representation, a number of contiguous neighboring frames contain the same set of articulator values, and these repetitions are removed by the LID system. These steps constitute PF pre-processing, after which the processed PF-representation is ready for LF extraction.
The process of extracting LFs in our baseline system is briefly explained below. As shown in Fig. 1 (2), the vowel and consonant segments are first identified within the processed PF-representation. Thereafter, adjacent consonant (C) and vowel (V) segments, as well as consonant-vowel (CV) segments, are clustered together. Identifying higher-level structures such as CV clusters in the PF-representation provides the capability of examining longer-term production traits. The motivation behind this step is that articulatory features at the phone and/or syllable level provide the discrimination required in a LID task. Using this aggregated representation of speech production, a number of language-features (LFs) are drawn from the articulatory state sequences. For example, (i) "fricative:alveolar:non-nasal:non-rounded:voiced" is a consonant LF, where the manner-of-articulation is "fricative", the place-of-articulation is "alveolar", nasalization and rounding are absent, and the sound is voiced; and (ii) "low:back:rounded" is a vowel LF, where the lips are rounded and the tongue is positioned in a low-back fashion. A combination of these two features (i.e., "fricative:alveolar:non-nasal:non-rounded:voiced + low:back:rounded") forms the corresponding consonant-vowel (CV) feature. Since these LFs are extracted from a static PF-representation, we term them static-LFs.
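To make the static-LF construction concrete, the following minimal Python sketch collapses repeated articulatory states and emits C, V, and CV feature strings. The frame format, the PF-type ordering, and the `is_vowel` test are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch of static-LF extraction; frame format, PF-type
# set, and the vowel test are assumptions, not the paper's exact code.
PF_TYPES = ["height", "frontness", "rounding", "nasality",
            "glottal", "place", "degree"]

def dedup_frames(frames):
    """Collapse runs of frames that share the same articulator state."""
    states = []
    for f in frames:
        if not states or f != states[-1]:
            states.append(f)
    return states

def is_vowel(state):
    # Hypothetical test: vowels marked via the manner-of-articulation HF.
    return state["degree"] == "vowel"

def static_lfs(frames):
    """Emit C, V, and CV language-features as joined value strings."""
    states = dedup_frames(frames)
    feats = [":".join(s[t] for t in PF_TYPES) for s in states]  # C/V LFs
    # CV cluster LFs: a consonant state immediately followed by a vowel.
    for prev, cur in zip(states, states[1:]):
        if not is_vowel(prev) and is_vowel(cur):
            feats.append(":".join(prev[t] for t in PF_TYPES) + " + " +
                         ":".join(cur[t] for t in PF_TYPES))
    return feats
```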
Our LID system utilizes a maximum entropy (ME) classifier, which is implemented as follows. Let $L_i$, $i = 1, \ldots, N$, represent the $i$th language in a closed set of $N$ languages. We refer to each LF as evidence, denote the $l$th evidence by $e_l$, and denote the complete set of evidence by $E$. Now, let $p(L_i \mid E)$ be the likelihood that $L_i$ is the unknown language in the utterance given the articulatory evidence. The most likely language in the utterance is then estimated as

$$L_{\mathrm{unknown}} = \arg\max_{i} \, p(L_i \mid E), \quad i = 1, \ldots, N. \qquad (1)$$

Here, the maximum entropy (ME) modeling technique automatically learns the conditional probabilities. Let the $i$th ME feature be given by $\mu_i$, where the ME features are binary operators on the evidence $e_l$ (or LFs):

$$\mu_i(e_l) = \begin{cases} 1, & \text{if language-feature } e_l \text{ is present,} \\ 0, & \text{otherwise.} \end{cases}$$

Finally, the likelihood of language $L_i$ conditioned on the evidence set $E$ is given by

$$p(L_i \mid E) = \frac{1}{Z_\lambda(E)} \exp\!\left( \sum_{l=1}^{L} \lambda_{il} \, \mu_i(e_l) \right), \qquad (2)$$

where $Z_\lambda(E)$ is a normalization term, and $\lambda_{il}$ are the weights assigned to the ME features. The weights correspond to the importance of a feature in estimating the likelihood in question.
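Once the weights are trained, scoring reduces to a weighted sum over the features that fire, followed by normalization. The sketch below evaluates Eqs. (1) and (2) under an assumed pre-trained weight table; the iterative ME training itself is not shown, and the data layout is a hypothetical choice.

```python
import math

def me_language_scores(evidence, weights, languages):
    """Evaluate Eq. (2): p(L_i | E) proportional to exp(sum_l lambda_il mu_i(e_l)).

    evidence  : iterable of language-feature strings seen in the utterance
    weights   : dict mapping (language, feature) -> trained weight lambda_il
    languages : closed set of N candidate languages
    """
    # mu_i(e_l) is 1 exactly when the LF is present, so the inner sum
    # reduces to summing the weights of the features that fired.
    logit = {L: sum(weights.get((L, e), 0.0) for e in evidence)
             for L in languages}
    z = sum(math.exp(v) for v in logit.values())   # Z_lambda(E)
    return {L: math.exp(v) / z for L, v in logit.items()}

def classify(evidence, weights, languages):
    """Eq. (1): pick the language maximizing p(L_i | E)."""
    scores = me_language_scores(evidence, weights, languages)
    return max(scores, key=scores.get)

# Hypothetical usage with two toy weights:
w = {("TAM", "low:back:rounded"): 0.7, ("KAN", "low:back:rounded"): 0.2}
print(classify({"low:back:rounded"}, w, ["KAN", "TEL", "MAL", "TAM", "MAR"]))  # -> TAM
```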
2.2. New LF Extraction Technique

The new LF extraction technique is illustrated in Fig. 1 (1, 4, and 5) and explained below. As in the baseline system, the PF-representation of the speech signal is first obtained in (1). Thereafter, the static PF-representation is converted into a dynamic PF-representation, as shown in (5). In the dynamic representation, only the changes in the values of the PF-types are of interest; both the nature of a change and the order in which it occurs are important. For example, if the value of the PF-type "height" changes from "mid" to "high", the event is captured as "height = mid:high". In this manner, the changes in all PF-types are collected over the entire PF-stream. These changes are termed PF-events. The process of identifying PF-events is shown in Fig. 1 (4 and 5): (4) shows a zoomed-in portion of the static PF-representation in (2), and (5) shows the corresponding dynamic representation, where the PF-events appear as dots in the PF-type/time space.

The newly proposed LFs are extracted from the dynamic PF-representation. First, the PF-events are extracted in order, as shown in Fig. 1 (5). The extraction process collects all change-events for all PF-types belonging to the same frame before moving to the next speech frame. It is important to note that the PF-events for each speech frame are always collected in the same order (the choice of order is arbitrary, but consistency is maintained). This process creates the dynamic event stream shown in Fig. 1 (5). Thereafter, unigram, bigram, and trigram LFs are extracted by combining adjacent PF-events in the dynamic event stream. We term these features dynamic-LFs, since they are extracted from the dynamic PF-representation.
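A minimal sketch of the PF-event and dynamic-LF extraction described above, assuming the PF stream has already been deduplicated into a sequence of per-frame state dictionaries; the names and data layout are illustrative.

```python
def pf_events(states, pf_types):
    """Convert a deduplicated PF state sequence into change events.

    Each event records a PF-type whose value changed between adjacent
    states, e.g. 'height=mid:high'. Events within one transition are
    emitted in a fixed PF-type order (arbitrary but consistent).
    """
    events = []
    for prev, cur in zip(states, states[1:]):
        for t in pf_types:                 # fixed iteration order
            if prev[t] != cur[t]:
                events.append(f"{t}={prev[t]}:{cur[t]}")
    return events

def dynamic_lfs(events, max_n=3):
    """Unigram, bigram, and trigram LFs over the dynamic event stream."""
    feats = []
    for n in range(1, max_n + 1):
        feats += [" ".join(events[i:i + n])
                  for i in range(len(events) - n + 1)]
    return feats
```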
Table 1. Classification using PF-based LFs (in %; rows: reference language, columns: hypothesized language)

(a) Static Language Features
      KAN   TEL   MAL   TAM   MAR
KAN  40.7  21.3  15.5  15.1   7.4
TEL  26.2  49.2   9.4   9.0   6.1
MAL   7.0   3.4  61.2   6.9  21.5
TAM  11.3  12.2   7.0  69.2   0.3
MAR   5.7   4.8  13.5   0.8  75.2

(b) Dynamic Language Features
      KAN   TEL   MAL   TAM   MAR
KAN  49.9  28.3   4.8  13.9   3.1
TEL  10.5  74.2   3.5   8.2   3.6
MAL   4.5   6.7  70.6   5.8  12.4
TAM   7.5   9.9   3.7  78.6   0.4
MAR   1.8   4.3   7.2   0.6  86.1

(c) Static + Dynamic Language Features
      KAN   TEL   MAL   TAM   MAR
KAN  63.5  16.0   7.3  10.4   2.9
TEL  19.8  66.2   5.5   5.1   3.5
MAL   5.8   3.5  73.5   4.8  12.3
TAM   7.7   7.4   4.4  80.5   0.1
MAR   2.6   2.9   6.9   0.3  87.3
Table 2. Classification using Prosody-based LFs (in %; rows: reference language, columns: hypothesized language)

(a) Pitch-based Language Features
      KAN   TEL   MAL   TAM   MAR
KAN  23.8  21.7  20.1  19.7  14.7
TEL  13.4  35.3  14.7  18.3  18.3
MAL  15.6  17.6  32.4  16.2  18.2
TAM  12.4  22.3  15.3  39.4  10.6
MAR   5.4  12.4  12.7  11.7  57.8

(b) Energy-based Language Features
      KAN   TEL   MAL   TAM   MAR
KAN  33.1  16.5  15.0  22.4  13.0
TEL  16.6  32.8  15.1  17.8  17.6
MAL  19.1  13.8  33.1  17.5  16.3
TAM  21.6  13.3  19.0  38.2   7.9
MAR   8.7  12.0  13.9  12.0  53.2

(c) Pitch + Energy-based Language Features
      KAN   TEL   MAL   TAM   MAR
KAN  36.9  18.6  15.4  19.2   9.7
TEL  17.1  38.8  12.3  19.1  12.7
MAL  15.7  12.8  42.1  13.8  15.5
TAM  16.4  17.4  11.2  49.9   5.0
MAR   5.6   9.9  10.5   7.1  66.9
2.3. Prosody Features for LID

Perception tests show that prosodic cues help humans distinguish between languages, and prosodic information has also been successfully applied in automatic language identification systems [2]. However, effective modeling of prosody for languages remains a challenge. Our approach is based on prosodic information extracted from pitch and energy contours. Many approaches that apply prosody to LID or speaker recognition use features based on statistics of pitch/energy contour segments, or on a piecewise-linear stylization of the contour [3]. In this study, each pitch/energy contour segment is approximated with Legendre polynomial bases in the least-squares sense. In this way, the characteristic pitch/energy contour shapes of individual speech segments are modeled for each language. First, the Robust Algorithm for Pitch Tracking (RAPT) [1] is used to extract pitch contours from every utterance of each language, with a step size of 10 ms. Next, the pitch of each utterance is normalized by dividing by its mean pitch value. The pitch contours are then segmented according to CV clusters, and the log-pitch of each segment is approximated as

$$f(t) = \sum_{i=0}^{M} c_i P_i(t), \qquad (3)$$

where $f(t)$ is the pitch contour segment to be approximated, $P_i(t)$ is the $i$th-order Legendre polynomial, and $M$ is the highest polynomial order (in this study, $M = 3$). Note that each pitch contour segment is a finite-dimensional vector and the approximation is performed in the discrete domain; in other words, $P_i$ is a sampled Legendre polynomial, where the number of samples depends on the length of the pitch segment vector. Since Legendre polynomials are defined on the interval $[-1, +1]$, pitch contour segments are mapped into this interval before approximation. The same approximation method is used for energy contours. Legendre polynomials have previously been applied in LID [2] and speaker verification [4] for pitch/energy contour approximation; however, the modeling and classification techniques we use here are different. First, the coefficients obtained from every segment of all 5 languages are used to train a GMM. Next, for each language, the coefficient vectors are quantized using the means of the GMM as cluster centroids. In this manner, the pitch/energy contours of all CV clusters in an utterance are represented by codebook vectors. N-gram combinations of these pitch/energy codebook vector elements constitute the prosodic LFs used in the proposed LID system.
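The contour-shape fitting of Eq. (3) can be reproduced with NumPy's Legendre utilities. The sketch below resamples one CV segment onto [-1, +1] and returns the least-squares coefficients; the pitch values are fabricated for illustration, and the subsequent GMM codebook quantization is omitted.

```python
import numpy as np
from numpy.polynomial import legendre

def contour_shape_coeffs(contour, order=3):
    """Least-squares Legendre coefficients for one contour segment (Eq. 3).

    The segment (one CV cluster) is placed on [-1, 1], the interval on
    which Legendre polynomials are defined, and fit with polynomials up
    to the given order (M = 3 in this study).
    """
    y = np.asarray(contour, dtype=float)
    x = np.linspace(-1.0, 1.0, num=len(y))       # map segment onto [-1, 1]
    return legendre.legfit(x, y, deg=order)      # coefficients c_0 ... c_M

# Hypothetical example: mean-normalized log-pitch of one CV segment
# sampled at 10 ms steps (values fabricated for illustration).
pitch = np.array([110.0, 118.0, 126.0, 131.0, 128.0, 121.0, 112.0])
log_pitch = np.log(pitch / pitch.mean())
coeffs = contour_shape_coeffs(log_pitch)         # 4-dim shape descriptor
```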
3. PPRLM SYSTEM

Parallel Phone Recognition followed by Language Modeling (PPRLM) has become a standard approach for LID [5, 6, 7]. The PPRLM technique is based on phonotactic analysis of languages, where multiple phone decoders are used to tokenize each utterance before classification. In this study, a PPRLM system is constructed using the Brno University of Technology phoneme recognizers; in particular, we employ the German, Hindi, Japanese, and Mandarin recognizers. Additionally, the CMU Statistical Language Modeling Toolkit is used for n-gram language modeling of the tokenized utterances. A toy sketch of the underlying phonotactic scoring is given below.
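The sketch uses a toy add-one-smoothed phone-bigram language model to illustrate the PPRLM principle only; the actual system uses the BUT phone recognizers and the CMU SLM toolkit, and the class and function names here are hypothetical.

```python
import math
from collections import defaultdict

class PhonotacticLM:
    """Toy add-one-smoothed phone-bigram language model."""
    def __init__(self):
        self.big = defaultdict(int)    # bigram counts
        self.uni = defaultdict(int)    # context (unigram) counts
        self.vocab = set()

    def train(self, phone_sequences):
        for seq in phone_sequences:
            for a, b in zip(seq, seq[1:]):
                self.big[(a, b)] += 1
                self.uni[a] += 1
                self.vocab.update((a, b))

    def log_prob(self, seq):
        v = len(self.vocab)
        return sum(math.log((self.big[(a, b)] + 1) / (self.uni[a] + v))
                   for a, b in zip(seq, seq[1:]))

def pprlm_classify(token_streams, lms, languages):
    """token_streams: tokenizer name -> phone-token list for the utterance;
       lms: (tokenizer name, language) -> trained PhonotacticLM."""
    totals = {L: sum(lms[(tok, L)].log_prob(seq)
                     for tok, seq in token_streams.items())
              for L in languages}
    return max(totals, key=totals.get)
```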
4. RESULTS AND DISCUSSION
In this work, the proposed LID system is evaluated on the South Indian Language (SInL) corpus [1]. The SInL corpus contains data from 5 major languages spoken in South and South-Central India, namely, Malayalam (MAL), Kannada (KAN), Tamil (TAM), Telugu (TEL), and Marathi (MAR). In this study, we use the read-speech part of the corpus, which consists of 75 hours of speech. The SInL corpus is divided into train and test sets consisting of 65 and 10 hours of data, respectively (13 and 2 hours of train and test data per language). Each train and test utterance consists of about 3-4 s of speech.
Table 3. Classification using the Articulatory-Prosody feature-set (in %)
      KAN   TEL   MAL   TAM   MAR
KAN  66.6  13.1   8.0  10.4   2.0
TEL  19.6  65.8   5.7   5.3   3.5
MAL   5.7   2.6  77.1   4.4  10.3
TAM   7.7   6.2   3.8  82.2   0.1
MAR   2.9   2.5   8.1   0.2  86.3

Table 4. Classification using the PPRLM System (in %)
      KAN   TEL   MAL   TAM   MAR
KAN  67.1  12.3  14.2   3.2   3.2
TEL  14.4  77.3   6.0   1.2   1.1
MAL   2.7   2.5  90.3   1.4   3.0
TAM   3.2   5.6   2.8  88.1   0.2
MAR   0.4   0.8   7.8   0.3  90.7
Table 1 (a), (b), and (c) shows the confusion matrices for the 5-way classification using static-LFs, dynamic-LFs, and the combination of static and dynamic LFs. The overall classification accuracies for the 3 feature-sets, along with the number of ME features utilized, are shown in Table 6. Recall that the static-LFs represent the baseline system. As seen from the results, the dynamic-LFs outperform the static-LFs by about 11.8% absolute. The confusion matrices show consistent improvement in accuracy for all languages, with the biggest reduction in confusion observed for the KAN-TEL pair. Furthermore, the fusion of static and dynamic LFs provides a 15.1% absolute gain in accuracy over the baseline. Table 2 (a), (b), and (c) shows the confusion matrices for pitch LFs, energy LFs, and combined pitch and energy LFs; the overall classification accuracies for these 3 feature-sets are also shown in Table 6. The classification accuracy of the pitch and energy LFs is about 38% each, and the fusion of the two feature-sets yields an accuracy of 47%. The fusion benefits all languages, with MAL, TAM, and MAR showing the largest gains (10-13% improvement). Next, the prosody LFs (pitch + energy) are combined with the PF-based LFs (static + dynamic) to form the phonological-prosody LID system. Table 3 shows the confusion matrix for this system, and the overall classification accuracy is shown in Table 6. The addition of prosody information improves classification accuracy: the prosody-PF system achieves 75.6%, as opposed to 74.2% for the best PF-based system. Finally, Table 4 shows the confusion matrix for the PPRLM system. The overall accuracy of the PPRLM system (82.7%) exceeds that of the articulatory LID system (75.6%). However, the best classification accuracy of 86.6% is obtained when the two systems are combined. The confusion matrix of the combined system in Table 5 shows that the classification accuracy of every language improves as a result of the system combination. The fusion of the two systems was performed at the score level, where the likelihoods of the two systems were combined linearly (a minimal sketch is given after this paragraph). The results showcase the complementary information present in the proposed articulatory system, and highlight the potential of phonological features for capturing language-related information in speech.
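The score-level fusion amounts to a linear interpolation of the per-language likelihoods of the two systems. In the sketch below, the weight `alpha` and the example scores are hypothetical, since the paper does not report the fusion weight.

```python
def fuse_scores(artic, pprlm, alpha=0.5):
    """Linear score-level fusion; alpha is a hypothetical weight (the
    paper combines the likelihoods linearly but does not report it)."""
    return {L: alpha * artic[L] + (1.0 - alpha) * pprlm[L] for L in artic}

# Hypothetical per-language likelihoods from the two systems:
artic = {"KAN": 0.30, "TEL": 0.25, "MAL": 0.20, "TAM": 0.15, "MAR": 0.10}
pprlm = {"KAN": 0.20, "TEL": 0.35, "MAL": 0.15, "TAM": 0.20, "MAR": 0.10}
fused = fuse_scores(artic, pprlm)
print(max(fused, key=fused.get))   # -> TEL
```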
Table 5. Classification using the Fusion System (in %)
      KAN   TEL   MAL   TAM   MAR
KAN  77.2   8.6   9.3   3.5   1.5
TEL  13.7  79.1   4.8   1.4   0.9
MAL   2.6   1.6  91.2   0.1   3.0
TAM   2.4   3.3   1.8  92.6   0.0
MAR   0.5   0.4   6.1   0.1  93.0
Table 6. Classification Accuracy of Articulatory/Prosody LFs
Language-Features Employed    Feature-Set Size   Accuracy
Static LFs (baseline)               3075           59.1%
Dynamic LFs                         7000           71.9%
Static + Dynamic LFs               10075           74.2%
Pitch LFs                           4231           37.8%
Energy LFs                          4337           38.1%
Pitch + Energy LFs                  8568           47.0%
Phonological + Prosody LFs         18643           75.6%
PPRLM System                          -            82.7%
Fusion System                         -            86.6%
5. CONCLUSION

This study has developed a combined articulatory-prosody system for LID (language identification) and evaluated it on a difficult 5-way classification task of closely-related South Indian languages. In particular, the work introduces a new technique for extracting LFs (language features) from PFs (phonological features). The new technique relies on a dynamic representation of PFs, which emphasizes the changes in articulatory information within speech. This is in contrast to our earlier methodology, which extracted LFs from a static representation of PFs. Our experiments show that the two techniques yield complementary information, as their combination outperforms the baseline accuracy by about 15.1% absolute. Furthermore, the new system also incorporates prosody information by introducing pitch and energy contour shape parameters into classification. The addition of this information raises the classification accuracy to about 16.5% above the baseline system. The results obtained in this study are extremely encouraging and motivate further research into PF-based LF-extraction techniques. Presently, we are working towards developing more varied LF-extraction techniques and using more diverse languages for classification.

6. REFERENCES

[1] A. Sangwan, M. Mehrabani, and J. H. L. Hansen, "Automatic language analysis and identification based on speech production knowledge," in Proc. IEEE ICASSP, 2010.
[2] C.-Y. Lin and H.-C. Wang, "Language identification using pitch contour information," in Proc. IEEE ICASSP, Philadelphia, PA, Mar. 2005, pp. 601-604.
[3] A. G. Adami, R. Mihaescu, D. A. Reynolds, and J. J. Godfrey, "Modeling prosodic dynamics for speaker recognition," in Proc. IEEE ICASSP, Apr. 2003, vol. 4, pp. 788-791.
[4] N. Dehak, P. Dumouchel, and P. Kenny, "Modeling prosodic features with joint factor analysis for speaker verification," IEEE Trans. on Audio, Speech, and Language Processing, vol. 15, no. 7, pp. 2095-2103, Sep. 2007.
[5] E. Singer, P. A. Torres-Carrasquillo, T. P. Gleason, W. M. Campbell, and D. A. Reynolds, "Acoustic, phonetic, and discriminative approaches to automatic language identification," Sept. 2003.
[6] P. Schwarz, "Phoneme recognition based on long temporal context," Ph.D. thesis, Brno University of Technology, 2009.
[7] M. A. Zissman, "Comparison of four approaches to automatic language identification of telephone speech," IEEE Transactions on Speech and Audio Processing, vol. 4, no. 1, pp. 31-44, 1996.
Apr 12, 2016 - Combined grade classrooms provide an opportunity for students of different ages and levels of development to cooperate and learn from each ...