8th International Seminar on Speech Production

Modeling Motor Pattern Generation in the Development of Infant Speech Production

Ian Spencer Howard¹ & Piers Messum²

¹ Computational & Biological Learning Laboratory, Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK
² Centre for Human Communication, University College London, London WC2E 6BT, UK
E-mail: [email protected], [email protected]

Abstract

We previously proposed a non-imitative account of learning to pronounce, implemented computationally using discovery and mirrored interaction with a caregiver. Our model used an infant vocal tract synthesizer and its articulators were driven by a simple motor system. During an initial phase, motor patterns develop that represent potentially useful speech sounds. To increase the realism of this model we now include some of the constraints imposed by speech breathing. We also implement a more sophisticated motor system. Firstly, this can independently control articulator movement over different timescales, which is necessary to effectively control respiration as well as prosody. Secondly, we implement a two-tier hierarchical representation of motor patterns so that more complex patterns can be built up from simpler sub-units. We show that our model can learn different onset times and durations for articulator movements and synchronize its respiratory cycle with utterance production. Finally, we show that the model can pronounce utterances composed of sequences of speech sounds.

1 Introduction

Our previous work modeled a non-imitative account of the development of infant speech production [1-3]. The model runs firstly as a stand-alone system and then interacts naturally with a caregiver. We showed that rewarded exploration of the vocal tract leads to the discovery of potentially useful vocal motor schemes [4] (potential speech sounds). Imitative exchanges between the infant and caregiver, involving reformulations of the infant's output, lead to the association of the infant's motor actions with the adult judgment of their linguistic value expressed in a vocal form. This solves the correspondence problem for the sub-word units that are used in learning the pronunciation of words (with the exception of the very first words). This enables the infant to learn words by imitation, and it is then taught the names of objects by the caregiver.

Figure 1: Agent model of an infant showing the important signal flow paths with a female caregiver. (Figure labels: infant model; explore; motor action output; sensory input: A) proprioception, B) external sensory consequences, C) communication, D) reformulation paths; reward; action/reaction from mother.)

This earlier work concentrated on the most important principles in our account, such as the reinforcement and reformulation signal flow paths between the infant's motor output and his sensory inputs (Figure 1). It employed a Maeda vocal tract synthesizer [5] and used simple mechanisms of perception, production and association. In the motor generator, a basic motor pattern was defined in terms of starting and ending articulator target positions. The dynamics of the vocal tract system determined the path of the movement between the targets, and the trajectories were computed by interpolating between sequential targets using critical damping [6]. Although sufficient for the purposes of our initial demonstration, the motor generator suffered from some limitations, which we address in this paper. We now also begin to model speech breathing.
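As a rough illustration of the earlier motor generator, the sketch below interpolates a single articulator parameter towards a target with critically damped second-order dynamics, so the movement approaches the target without overshoot. The function name, time step, stiffness value and movement duration are illustrative assumptions, not values taken from the model.

```python
import numpy as np

def critically_damped_trajectory(x0, target, omega=20.0, dt=0.005, duration=0.5):
    """Move one articulator parameter from x0 towards a target using a
    critically damped second-order system (no overshoot). The values of
    omega, dt and duration are illustrative; the paper does not give them."""
    x, v = x0, 0.0
    trajectory = []
    for _ in range(int(duration / dt)):
        a = omega**2 * (target - x) - 2.0 * omega * v  # critical damping
        v += a * dt
        x += v * dt
        trajectory.append(x)
    return np.array(trajectory)

# Example: one parameter moving between two sequential targets.
seg1 = critically_damped_trajectory(0.0, 1.0)
seg2 = critically_damped_trajectory(seg1[-1], -0.5)
full_path = np.concatenate([seg1, seg2])
```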

2 Modeling speech breathing

Previously we ignored the role of the respiratory system in speech production, but we have now taken the first steps towards accounting for its effects. Voluntary control of speech breathing is implemented by specifying a lung compression force to determine the direction and rate of airflow. At any given time, lung volume and the limits of normal lung volume excursion constrain how much air is available to support sound production. We take the vital capacity of the infant's lungs to be about 25% of adult values. In adults, exhalation is largely passive, driven by the elasticity of the pulmonary/chest wall unit, but this is not the case in infants [7], so we do not include a spring term in our model. When lung volume reaches the upper or lower limit, airflow is set to zero. Voicing is only permitted during the expiratory phase. Breathing thus affects the utterances that can be generated, and speech production will be disrupted if it is not synchronized with articulator movements. It is therefore one task of the learning mechanism to synchronize breathing with the movement of the articulators so that useful utterances can be reliably produced.
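A minimal sketch of these breathing constraints is given below: a compression force drives airflow, lung volume is bounded, airflow is zeroed at the volume limits, there is no passive elastic (spring) term, and voicing is only allowed while air flows out. All function names, units and constants are assumptions made for illustration.

```python
def step_breathing(volume, force, dt=0.01, v_min=0.0, v_max=0.25):
    """One time step of a simplified speech-breathing model.
    force > 0 compresses the lungs (expiration), force < 0 expands them
    (inspiration). Airflow is proportional to the applied force, with no
    passive recoil term (following the infant case), and is clamped to zero
    once a lung-volume limit is reached. v_min/v_max bound the normal
    volume excursion; the paper takes infant vital capacity as roughly 25%
    of adult values, and the numbers used here are purely illustrative."""
    k_flow = 1.0                      # assumed force-to-flow gain
    airflow = k_flow * force          # positive flow = air leaving the lungs
    new_volume = volume - airflow * dt
    if new_volume <= v_min or new_volume >= v_max:
        airflow = 0.0                 # no flow at the excursion limits
        new_volume = min(max(new_volume, v_min), v_max)
    voicing_allowed = airflow > 0.0   # voicing only during expiration
    return new_volume, airflow, voicing_allowed
```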

3 Articulator movements

In our previous work all the articulators moved synchronously between defined targets. In reality, speakers are able to control different articulator movements independently, with different timescales, starting points and durations, as demonstrated by the movements used to generate syllables compared with those that create intonation contours, which span multiple syllables. We now give each articulator its own starting offset time and duration of movement. This ability to independently control aspects of the speech production apparatus is necessary for us to be able to introduce speech breathing into the model.
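One way to represent this per-articulator timing is sketched below: each transition carries its own onset time and duration, so different articulators can be driven on different schedules within one motor pattern. The data structure, field names and example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ArticulatorMove:
    """A single articulator transition with its own timing (illustrative)."""
    articulator: str   # e.g. a Maeda parameter such as "jaw" or "tongue body"
    target: float      # end position of the transition
    onset: float       # start time of the movement, in seconds
    duration: float    # time allowed to reach the target, in seconds

def active_targets(moves, t):
    """Return the moves in progress at time t, so each articulator can be
    driven towards its own target on its own schedule."""
    return [m for m in moves if m.onset <= t < m.onset + m.duration]

# Example motor pattern: lung compression spans the whole utterance, the jaw
# starts moving before the lips and moves for longer (values are made up).
pattern = [
    ArticulatorMove("lung_compression", 0.8, onset=0.0, duration=1.2),
    ArticulatorMove("jaw",              0.3, onset=0.1, duration=0.4),
    ArticulatorMove("lips",            -0.5, onset=0.3, duration=0.2),
]
print(active_targets(pattern, 0.15))  # only lung compression and jaw are active
```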

4 Construction of sequences

A second extension of the motor system involves the ability to learn and produce temporal sequences of sub-sounds. This is an important capability, since words (and sentences) are constructed from smaller sub-patterns. The model automatically segments input speech syllables spoken by a male subject, provided they are separated by silence (allowing segmentation to be carried out on the basis of acoustic power). By recognizing these sub-sounds and using the corresponding sequence of motor actions, the model could learn to produce multisyllable words by imitation.
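The silence-based segmentation can be sketched as a short-time power threshold, as below: frames whose power exceeds a threshold are treated as speech, and contiguous runs of such frames become syllable-sized segments. The frame length and threshold are assumed values for illustration only.

```python
import numpy as np

def segment_by_power(signal, rate, frame_ms=20, threshold=0.01):
    """Split a speech signal into syllable-sized chunks separated by silence,
    using short-time power. Frame length and threshold are illustrative."""
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    power = (frames ** 2).mean(axis=1)
    voiced = power > threshold
    segments, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i * frame_len          # segment begins
        elif not v and start is not None:
            segments.append((start, i * frame_len))  # segment ends at silence
            start = None
    if start is not None:
        segments.append((start, n_frames * frame_len))
    return segments  # list of (start_sample, end_sample) pairs

# Toy example: two noise bursts ("syllables") separated by silence at 16 kHz.
rate = 16000
sig = np.concatenate([np.random.randn(4000) * 0.3, np.zeros(3200),
                      np.random.randn(4800) * 0.3])
print(segment_by_power(sig, rate))
```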

5 Results

We re-ran our original experiments to discover simple potential speech sounds based on salience, and then established a linguistic value for some of these through caregiver reformulations. This time we included the constraints due to breathing and we incorporated the improved motor system, using motor patterns that specify independent timings for each articulator transition. Figure 2 shows a speech sound found by exploration, together with the associated salience and effort terms that were used to determine the reward for the optimization procedure. Asynchronous articulator movement was apparent in an increase in the variety of basic sounds discovered. The ability to move articulators asynchronously is also manifested in their trajectories and in the air flow.
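The paper states only that salience and effort determine the reward used by the optimization; a simple way to combine them, assumed here purely for illustration, is salience minus a weighted effort penalty.

```python
def reward(salience, effort, effort_weight=0.5):
    """Combine utterance salience and production effort into one reward.
    The subtractive form and the weight are assumptions; the paper only
    states that both terms were used to determine the reward."""
    return salience - effort_weight * effort

# Candidate motor patterns found during exploration (made-up numbers):
candidates = {"pattern_a": (0.9, 0.4), "pattern_b": (0.7, 0.1)}
best = max(candidates, key=lambda k: reward(*candidates[k]))
print(best)
```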


Figure 2: Example of a discovered speech sound and its evaluation in terms of salience and reward.

Figure 3 shows an example of a good speech utterance discovered by the model. Notice the asynchronous movement of the articulators and the breathing trajectories. In this example, expiration coincides with the period when voicing is useful, resulting in a good speech sound. In the lower plot, the air flow gates the preliminary voicing-without-accounting-for-airflow signal (VxWAF), resulting in a reduced duration of voicing control to the synthesizer (Vx). However, it does so in a fashion consistent with generating a useful utterance. That is, the model breathes in before a syllable and out during it. There is also a no-flow pause in between, when the lungs have filled to the upper limit of their normal excursion.

Figure 3: Trajectories for Maeda synthesizer parameters and breathing, for a good utterance.

Good breathing synchronization does not always occur, and some of the sounds generated are not useful speech utterances because of inappropriate coordination of articulator movement and air flow, as shown in Figure 4. Here the articulations show poor synchronization with breathing.

Figure 4: Maeda synthesizer and breathing trajectories for a poor utterance.
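The gating of voicing by expiratory airflow described above can be sketched as follows; the sample-wise array representation of the signals is an assumption made for illustration.

```python
import numpy as np

def gate_voicing(vx_waf, airflow):
    """Gate the preliminary voicing-without-accounting-for-airflow signal
    (VxWAF) by expiratory airflow: voicing only reaches the synthesizer (Vx)
    while air is flowing out of the lungs. Inputs are per-time-step samples
    (an assumed representation)."""
    vx_waf = np.asarray(vx_waf, dtype=float)
    airflow = np.asarray(airflow, dtype=float)
    return np.where(airflow > 0.0, vx_waf, 0.0)

# Toy example: voicing is requested over six steps, but expiration covers only
# the middle four, so the effective voicing duration is reduced.
vx = gate_voicing([1, 1, 1, 1, 1, 1], [0.0, 0.2, 0.3, 0.3, 0.1, 0.0])
print(vx)   # [0. 1. 1. 1. 1. 0.]
```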

6 Discussion

We added a model of speech breathing to an articulatory synthesizer. Speech breathing plays an important role in the development of pronunciation [3, 9] but is often omitted from computational models of speech production. In addition, we built a more sophisticated motor system which implemented asynchronous control of different articulators over different timescales.

The ability to control movements over independent timescales is important because breathing operates over a much longer timescale than the movements involved in the realization of phonetic contrasts. The value of this was demonstrated by the model's ability to coordinate breathing control with articulator movement. In the future we will look at the issues arising from the coordination of breathing with multi-syllable as well as single-syllable utterances, without our model having to independently relearn all the breathing timings for each different case. This will also involve the control mechanism taking notice of the different airflow requirements of different syllables.


Furthermore, the more sophisticated hierarchical temporal pattern generator we implemented enables our model to learn multi-syllable words by stringing together single syllables in sequence (see the online supplementary information for examples as .WAV files). Prosody (the timing, stress and intonation of speech) is important in human communication since it conveys some of the speaker's linguistic meaning as well as his emotional state. Prosodic aspects of speech manifest themselves over a longer timescale than phonetic contrasts, so long-timescale coordination of motor activity is required to control them.

There is increasing understanding of the role played by embodiment in the development of complex systems [8]. Taking note of this, if we are to model speech development we should recognize that an infant's vocal apparatus is not simply a scaled-down version of an adult's. Speech breathing and speech aerodynamics play an important role in infant speech development [3, 9], and our model's modified motor system can now begin to deal with these issues.

Additional information, including examples of the model's speech output, is available online at: www.ianhoward.info/issp_2008.htm

7 References

[1] Howard, I.S. and P.R. Messum, Modeling infant speech acquisition using action reinforcement and association. In Speech and Computer, SPECOM 2007. Moscow Linguistics University, 2007.
[2] Howard, I.S. and P. Messum, A computer model that learns to pronounce using caregiver interactions. Forthcoming.
[3] Messum, P.R., The Role of Imitation in Learning to Pronounce. PhD thesis, London University, 2007.
[4] McCune, L. and M.M. Vihman, Vocal Motor Schemes. In Papers and Reports in Child Language Development, Stanford University Department of Linguistics, 26, 72-79, 1987.
[5] Maeda, S., Compensatory articulation during speech: evidence from the analysis and synthesis of vocal tract shapes using an articulatory model. In Speech Production and Speech Modelling, W.J. Hardcastle and A. Marchal, Editors. Kluwer Academic Publishers: Boston, 1990.
[6] Markey, K.L., The sensorimotor foundation of phonology: A computational model of early childhood articulatory development. PhD thesis, University of Colorado: Boulder, Colorado, 1994.
[7] Netsell, R., et al., Developmental patterns of laryngeal and respiratory function for speech production. J Voice, 8(2), 1994.
[8] Sirois, S., et al., Precis of neuroconstructivism: how the brain constructs cognition. Behav Brain Sci, 31(3), 2008.
[9] Messum, P., Embodiment, not imitation, leads to the replication of timing phenomena. In Proceedings of Acoustics '08, 2405-2410. Paris: ASA, 2008.
