Plausible Neural Networks for Biological Modelling (HAK Mastebroek & JE Vos, eds.), Kluwer Academic Publishers, Dordrecht, 2001

CORTICAL MAPS AS TOPOLOGY-REPRESENTING NEURAL NETWORKS APPLIED TO MOTOR CONTROL: ARTICULATORY SPEECH SYNTHESIS

Pietro Morasso (1), Vittorio Sanguineti (1), Francesco Frisone (2)
(1) Dept. of Informatics, Systems and Telecommunications, University of Genova, Italy
(2) Istituto Idrografico della Marina, Genova, Italy

Summary. Substantial advances have been achieved since the pioneering work in the 50's and 60's by Mountcastle, Hubel, Wiesel, and Evarts, among others, leading to an understanding of the cortex as a continuously adapting system, shaped by competitive and co-operative interactions. However, most of the effort has been devoted to the investigation of the receptive-field properties of cortical maps, whereas relatively little attention has been devoted to the role of lateral connections and to the cortical dynamic processes that are determined by the patterns of recurrent excitation (Amari 1977, Kohonen 1982, Grajski and Merzenich 1990, Reggia et al 1992, Martinetz and Schulten 1994, Sirosh and Miikkulainen 1997, Sanguineti et al 1997a, Levitan and Reggia 1999, 2000). In this chapter we explore the hypothesis that lateral connections may actually be used to build topological internal representations, and propose that the latter are particularly well suited for processing high-dimensional "spatial" variables and for solving complex problems of motor control that involve sensorimotor information. In particular, we apply these methods to the case of speech motor control, in which acoustic and articulatory variables are typically high-dimensional, and describe an approach to articulatory speech synthesis that is based on the dynamic interaction of two computational maps.

1. Lateral connections in cortical maps

It has been observed that cortical areas can be seen as a massively interconnected set of elementary processing elements (the so-called cortical "columns"), which constitute what is called a "computational map" (Knudsen et al 1987). The "somatotopic" or "ecotopic" layout of many cortical areas has long suggested a kind of topologic organisation, which has often been associated with a dimensionality reduction of the representational space (Durbin and Mitchison 1990). This also served as inspiration for a large family of neural network models. From its beginning, however, the effort has been affected by a number of misconceptions, partly due to the over-emphasis in the neurophysiological community on the receptive-field properties of cortical neurons. Only recently has a new understanding of the cortex as a dynamical system emerged, which has focused attention on the competitive and cooperative effects of lateral connections (Sirosh and Miikkulainen 1997). Moreover, it has been shown that cortico-cortical organisation is not static but changes with ontogenetic development, together with the patterns of thalamo-cortical connections (Katz and Callaway 1992). From the modelling point of view, the most common misconceptions about cortical functionality can be reduced to the following three items: (i) "flatness" of cortical maps (related to the locality of lateral connections); (ii) fixed lateral connections (versus plastic thalamo-cortical connections, which determine the receptive-field properties); (iii) Mexican-hat patterns of lateral interaction (which imply a significant amount of recurrent inhibition for the formation of localised responses by lateral feedback).

The "flatness" assumption that characterises the classic map models (Amari 1977, Kohonen 1982) is contradicted by the fact that the structure of lateral connections is not genetically determined, but depends mostly on electrical activity during development. More precisely, the connections have been observed to grow exuberantly after birth and reach their full extent within a short period; during the subsequent development, a pruning process takes place, so that the mature cortex is characterised by a well-defined pattern of connectivity, which includes a large number of non-local connections. In particular, the superficial connections to non-adjacent columns are organised into characteristic patterns: a collateral of a pyramidal axon typically travels a characteristic lateral distance without giving off terminal branches and then produces tight terminal clusters (possibly repeating the process several times over a total distance of several millimetres). This characteristic distance is not a universal cortical parameter and is not distributed in a purely random fashion, but differs from one cortical area to another (Gilbert and Wiesel 1979, Schwark and Jones 1989, Calvin 1995, Das and Gilbert 1999). As the development of lateral connections depends on the cortical activity caused by the external inflow, they may be used to capture and represent the (hidden) correlations in the input channels. Each individual lateral connection is "weak" enough to go virtually unnoticed while mapping the receptive fields of cortical neurons, but the total effect on the overall dynamics of cortical maps can be substantial, as is revealed by cross-correlation studies (Singer 1995).

Lateral connections from superficial pyramids tend to be recurrent (and excitatory), because 80% of their synapses are with other pyramids and only 20% with inhibitory interneurons, most of which act within columns (Nicoll and Blakemore 1993). Recurrent excitation is likely to be the underlying mechanism that produces the synchronised firing observed in distant columns. The existence (and preponderance) of massive recurrent excitation in the cortex contrasts with what could be expected, at least in primary sensory areas, considering the ubiquitous presence of peristimulus competition (the "Mexican-hat pattern"), which was observed long ago in many pathways, such as the primary somatosensory cortex, and has been confirmed by direct excitation of cortical areas as well as by correlation studies; in other words, in the cortex there is a significantly larger amount of long-range inhibition than expected from the density of inhibitory synapses. In general, "recurrent competition" has been assumed to be the same as "recurrent inhibition", with the goal of providing an antagonistic organisation that sharpens responsiveness to an area smaller than would be predicted from the anatomical funneling of inputs. Thus, an intriguing question is how long-range competition can arise without long-range inhibition; a possible solution is a mechanism of gating inhibition based on a competitive distribution of activation (Reggia et al 1992, Morasso and Sanguineti 1996).

2. A neural network model

A basic feature of cortical maps is their modular organisation into columns, which communicate with each other by means of reciprocal lateral connections (intra-connectivity); reciprocal connectivity patterns also characterise the interaction among different cortical areas (cross- or inter-connectivity), directly or via cortico-thalamic loops, thus providing the physical support for carrying out complex sensorimotor transformations.

A computational map is a set F of processing elements (PE) or filters, which model cortical columns. A map is completely specified by the following entities:
• A matrix C of "lateral connections", in which c_ij indicates the strength of the connection between the elements i, j ∈ F. We will assume that each PE is characterised by a non-empty neighbourhood set N_i (the set of PEs to which it is laterally connected), so that c_ij = 1 if and only if j ∈ N_i. Moreover, we also assume that c_ij = c_ji. This means that lateral connections are reciprocal and excitatory.
• An internal state U_i, for all i ∈ F (0 ≤ U_i ≤ 1).
• An external input E_i, for all i ∈ F, transmitted to the map from thalamic nuclei or from other cortical maps.
• A set of recurrent equations

    τ dU_i/dt + U_i = f(E_i, {U_j; j ∈ N_i})                                  (1)

where f(.) is a suitable mixing function that combines all the inputs to a given PE. This is a simple distributed model that makes it possible to manipulate input patterns in different ways, according to the intrinsic dynamics induced by the lateral connections and the structure of the mixing function. In any case, the symmetry of the recurrent connections provides asymptotic stability of the whole state. In the classical model proposed by Amari (1977) f(.) is linear and the distribution of recurrent connection weights is organised according to the so-called "Mexican hat" layout: this map performs edge sharpening on the input, i.e. the point attractor of the map is characterised by the fact that the output pattern {U_i} is a high-pass filtered version of the input pattern {E_i}. The inhibitory connections required by the Mexican-hat model are not consistent with the preponderance of excitatory lateral connections in the cerebral cortex, but a similar sharpening effect can be obtained by different non-linear mechanisms that only use excitatory connection weights. The competitive distribution of activation (Reggia et al 1992) is an example, and in fact we adopted a simplified version of it, expressed by the following equation:

    f(E_i, {U_j; j ∈ N_i}) = c_p Σ_j [c_ij U_j / (Σ_k c_jk U_k)] + c_e E_i    (2)

In the following sections we describe a further modification of this model that is necessary for adding, to the pattern-sharpening capability, a feature of pattern selection and trajectory formation.
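
To see how equations 1 and 2 behave in practice, the following minimal sketch integrates the recurrent dynamics with a simple Euler scheme on a one-dimensional chain of PEs. The lattice, the parameter values and the integration step are arbitrary assumptions chosen for readability; they are not taken from the chapter.

    import numpy as np

    def simulate_map(E, C, c_p=1.0, c_e=1.0, tau=0.05, dt=0.005, steps=400, eps=1e-12):
        """Euler integration of eq. 1 with the mixing function of eq. 2."""
        U = np.full(len(E), 1.0 / len(E))        # arbitrary initial internal state
        for _ in range(steps):
            # each unit j redistributes its activity among its neighbours,
            # normalised by the total activity of its neighbourhood (eq. 2)
            norm = C @ U + eps                   # one value per source unit j
            recurrent = C @ (U / norm)           # sum_j c_ij U_j / sum_k c_jk U_k
            f = c_p * recurrent + c_e * E        # mixing function f(E_i, {U_j})
            U = U + dt / tau * (f - U)           # tau dU/dt = f - U   (eq. 1)
        return U

    # a 1-D chain of 30 PEs with reciprocal, excitatory nearest-neighbour links
    N = 30
    C = np.zeros((N, N))
    idx = np.arange(N - 1)
    C[idx, idx + 1] = C[idx + 1, idx] = 1.0
    E = np.exp(-0.5 * ((np.arange(N) - 12) / 3.0) ** 2)   # broad input bump at unit 12
    U = simulate_map(E, C)
    print("input peak at unit", E.argmax(), "- equilibrium activity peak at unit", U.argmax())

Note that the redistribution term uses only excitatory, reciprocal links, in line with the constraint discussed above.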

3. Spatial maps as internal representations for motor planning

The thalamic input establishes a relationship between the internal state of the map and the external world; let us model it as a simple mapping:

    E_i = T_i(x),  i ∈ F                                                      (3)

where x ∈ X is a sensory quantity. As a consequence, the equilibrium state {U_i} will also depend on x. Equation 2 determines, for each PE, a receptive field, defined as the subset of X for which the equilibrium value U_i is non-zero. The value of x that maximises U_i at equilibrium will be referred to as the preferred input, x̃_i, or the sensory prototype stored in that PE. In probabilistic terms, it is also possible to interpret U_i as a measure of the probability that x = x̃_i. In particular, this allows us to compute an optimal estimate of x by means of the following formula:

    x ≈ x_e = (Σ_i x̃_i U_i) / (Σ_j U_j)                                       (4)

In other words, {U_i} is the population code of x and x_e is the population vector associated with F. It has been noted (Sanger 1994, 2000) that such a code has the important property of being independent of the coordinate system in which x is measured, i.e. the population code is coordinate-free.

The properties of the implicit and bi-directional mapping between X and F are entirely determined by the thalamo-cortical transformation (eq. 3) and by the pattern of lateral connections {c_ij}, i.e. the "topology" of the map. As regards the thalamo-cortical mapping, we will assume that T_i(x) is a decreasing function of the distance between x̃_i and x, for instance a Gaussian G(x − x̃_i). As for the map geometry, it has often been suggested that cortical maps are "topologically continuous", i.e. adjacent PEs have "similar" sensory prototypes. A more stringent concept has been formulated by Martinetz and Schulten (1994), who require a bidirectional topological continuity: PEs vs. prototypes as well as prototypes vs. PEs. This leads to the so-called TRN (Topology Representing Network) model, in which the matrix of lateral connections, C, reflects the topology of X: if X is an n-dimensional manifold, the map F should be organised as an n-th order lattice. In this regard, it has been pointed out (Braitenberg 1984) that lattices are effective ways to represent higher-dimensional spaces (with their associated topology) on a 2-dimensional sheet (like the cortical surface). A remarkable feature of these maps, if associated with the recurrent dynamics described above, is that despite their discrete nature the trajectory of the internal state {U_i(t)} will always be continuous, as will the associated x_e(t).
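
The Gaussian tuning of eq. 3 and the population-vector read-out of eq. 4 can be made concrete in a few lines of code; the prototype layout and the tuning width below are arbitrary assumptions introduced only for illustration.

    import numpy as np

    # prototypes of a map representing the interval [0, 1] (assumed layout)
    prototypes = np.linspace(0.0, 1.0, 25)
    sigma = 0.08                                   # assumed tuning width of T_i(x)

    def encode(x):
        """Thalamo-cortical mapping (eq. 3): Gaussian tuning around each prototype."""
        return np.exp(-0.5 * ((x - prototypes) / sigma) ** 2)

    def population_vector(U):
        """Optimal estimate of x from the population code (eq. 4)."""
        return np.sum(prototypes * U) / np.sum(U)

    x = 0.37
    U = encode(x)                                  # population code of x
    print("true x:", x, "  decoded x_e:", round(population_vector(U), 3))

The read-out does not depend on any particular coordinate frame of X: only the prototypes and the activities appear in eq. 4.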

3.1 DYNAMIC BEHAVIOR OF SPATIAL MAPS

A number of experimental observations have put into evidence the dynamic behaviour of spatial maps: for example, the continuous modification of the population vector - which is believed to code the direction of the planned movement - in the primary motor cortex during mental rotation experiments (Georgopoulos et al 1989), and the observed continuous update of the representation of the target - in retinal coordinates - in the superior colliculus during gaze shifts. These phenomena have suggested (Munoz et al 1991) a continuous movement of the peak of activation in the corresponding maps, and in fact attempts have been made (Droulez and Berthoz 1991, Lukashin and Georgopoulos 1994) to model such a moving-hill mechanism of continuous remapping in terms of recurrent neural networks. In the case of a sensory topology-representing map, the dynamic behaviour described by equations 1 and 2 can be interpreted as keeping the internal state "in register" with the incoming sensory information, while carrying out a function of edge-sharpening. The same model can be modified in a simple way, in order to allow it to carry out a function of dynamic remapping as well, by adding a non-linear shunting interaction (Grossberg 1973) of the input with the internal state:

    τ dU_i/dt + U_i = c_p Σ_j [c_ij U_j / (Σ_k c_jk U_k)] + c_e U_i E_i       (5)

c_p, c_e and τ are the three parameters of the model that have to be fitted to the data. The equilibrium state of the map is again a sharp pattern, but it has a single peak, centered on a local maximum of the input pattern, and the position of this peak depends on the initial state of the map. In other words, the effect of the shunting action on the input pattern is to select a feature of the input (a "target") and to cluster the map activity around it. The transient behaviour is also important and, as shown in figure 1, is characterised by a kind of moving hill, converging to the "selected target", i.e. the point in the map at which {E_i} has its maximum value. The map in the figure is a monodimensional lattice with N = 60 PEs. It should be noted that the moving hill becomes broader as the speed of the trajectory increases, so there is a trade-off between the static and the dynamic precision of this collective computational mechanism. Moreover, if we consider higher-order lattices, it is possible to show that the trajectory of the population vector will reflect the topology of F. For instance, if the monodimensional map of figure 1 represents a circle embedded in a plane, then the time course of the population vector in the plane will also be circular.

Figure 1. Dynamic remapping in a 1-dimensional map. Evolution of the internal state of the map at different time steps (top) and time course of the population vector (bottom). The external input pattern (dashed) has a maximum for x = 2; the population vector corresponding to the initial state equals x_e = 0.5.
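
A compact way to reproduce qualitatively the moving-hill behaviour of figure 1 is to integrate eq. 5 on a monodimensional lattice and to track the population vector over time: it drifts continuously from its initial position toward the input maximum. The lattice, the prototypes, the run length and the per-step renormalisation of the state (introduced here only to keep the numerics bounded) are all assumptions of this illustration, not the settings used to produce the figure.

    import numpy as np

    N = 60
    prototypes = np.linspace(0.0, 2.95, N)         # 1-D sensory space, spacing 0.05 (assumed)
    C = np.zeros((N, N))
    idx = np.arange(N - 1)
    C[idx, idx + 1] = C[idx + 1, idx] = 1.0        # monodimensional lattice

    def relax(U, E, C, c_p=10.0, c_e=30.0, tau=0.05, dt=0.001, steps=250, eps=1e-12):
        """Integrate eq. 5 (shunting interaction of the input with the state)."""
        traj = []
        for _ in range(steps):
            recurrent = C @ (U / (C @ U + eps))    # competitive redistribution term
            f = c_p * recurrent + c_e * U * E      # external input gated by U_i
            U = U + dt / tau * (f - U)
            U = U / U.sum()                        # renormalisation, assumed only for numerical safety
            traj.append(np.sum(prototypes * U))    # population vector (eq. 4), since sum(U) = 1
        return U, np.array(traj)

    E = np.exp(-0.5 * ((prototypes - 2.0) / 0.15) ** 2)    # external input, maximum at x = 2
    U0 = np.exp(-0.5 * ((prototypes - 0.5) / 0.15) ** 2)   # initial hill centered at x = 0.5
    U, xe = relax(U0, E, C)
    print("population vector sampled every 50 ms:", np.round(xe[::50], 2))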

3.2 FUNCTION APPROXIMATION BY INTERCONNECTED MAPS

In this section we show how coordinate transformations can be computed directly in terms of population coding. For instance, let us consider a continuous, smooth function y(x), with x ∈ X ⊂ R^n, y ∈ Y ⊂ R^m and, in general, m ≤ n. Let us also consider two maps, Fx and Fy, which represent X and Y, respectively, and can be obtained from a suitable learning process of competitive type (Martinetz and Schulten 1994). After learning, any training sample (x, y) is represented in the maps Fx and Fy by two independent population codes, {U_i^x} and {U_i^y}. The structure of the mapping between the two codes can be captured by a set of cross-connections {c_ij^xy} that link PEs of one map bidirectionally with PEs of the other. These connections can be learned in a way very similar to the intra-connections, i.e. with the Hebbian rule implemented in the TRN model: for each given sample (x, y), the element c_ij^xy is set to 1 when the i-th and the j-th PEs happen to be the winners in the corresponding maps. Given Fx and Fy, their prototypes and the inter-connectivity matrix, an estimator of y, given x, is expressed by:

    y_e = (Σ_i ỹ_i U_i^y) / (Σ_j U_j^y)                                       (6)

where the quantity U_i^y = Σ_j c_ij^xy U_j^x can be interpreted as the projection of the population code of x onto the output map. The quantity y_e is the population vector corresponding to the population code {U_i^y}. Therefore, the inter-connectivity matrix has the effect of transforming a distributed representation of x, {U_i^x}, into the corresponding population code of y, {U_i^y}.

As an example, let us consider the mapping of figure 2, corresponding to the function y = sin 2πx + sin(πx/3) + sin(πx/5) + sin(πx/7). The input map has N = 60 neurons and the output map has M = 20 neurons: the corresponding prototype values, at the end of two separate training phases, are indicated by open circles on the two axes. A subsequent combined training phase builds the inter-connections: filled circles in the figure indicate the "virtual" prototypes, implicitly defined by the inter-connections, which are visualised as horizontal and vertical segments, respectively.

Figure 2. Inter-connectivity matrix and "virtual" prototypes. Open circles: neurons; filled circles: virtual prototypes.

It should be noted that the architecture is completely bi-directional or symmetric, in the sense that it does not require specifying what is the input and what is the output (thus resembling an associative memory). Figure 3 shows how, through the inter-connections, the population codes are mapped back and forth: {U_i^x} into {U_i^y} (top panel) and {U_i^y} into {U_i^x} (bottom panel). In the latter case, the set of peaks of {U_i^x} clearly identifies all possible inverses, i.e. the set of x-values matching a specific y. In the next section we show in which way the intrinsic map dynamics carries out the actual inversion, in fact selecting a specific inverse solution. In summary, the proposed approximation scheme has a number of distinctive features: (i) it operates directly on population codes, differently from standard artificial neural network models like RBF (Radial Basis Function) or NGBF (Normalised Gaussian Basis Function) networks, and this feature has been observed in biological sensorimotor networks (Zipser and Andersen 1988, Salinas and Abbott 1995); (ii) the architecture is symmetric or bi-directional, thus performing both forward and inverse transformations; (iii) the scheme is efficient and modular, because a given map or representation layer might be shared by different computational modules, each implementing a specific transformation or association, in a complex network of maps.
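
The following sketch illustrates eq. 6 on synthetic data: prototypes are placed along X and Y, the cross-connections are set by the winner-pairing Hebbian rule described above, and a population code of x is projected onto the output map to estimate y. The uniform prototype placement and the Gaussian encoding are simplifications assumed here in place of the actual soft-competitive training, and the reading of the example function is only one of the possible ones.

    import numpy as np

    def f(x):
        # one possible reading of the example mapping of figure 2
        return (np.sin(2 * np.pi * x) + np.sin(np.pi * x / 3)
                + np.sin(np.pi * x / 5) + np.sin(np.pi * x / 7))

    rng = np.random.default_rng(0)
    N, M = 60, 20
    x_proto = np.linspace(0.0, 10.0, N)                  # prototypes of Fx (assumed layout)
    y_samples = f(np.linspace(0.0, 10.0, 2000))
    y_proto = np.linspace(y_samples.min(), y_samples.max(), M)   # prototypes of Fy (assumed)

    # Hebbian learning of the cross-connections: connect the winners of the two maps
    C_xy = np.zeros((N, M))
    for x in rng.uniform(0.0, 10.0, 5000):
        i = np.argmin(np.abs(x_proto - x))               # winner in Fx
        j = np.argmin(np.abs(y_proto - f(x)))            # winner in Fy
        C_xy[i, j] = 1.0

    def encode(value, proto, sigma):
        return np.exp(-0.5 * ((value - proto) / sigma) ** 2)

    x = 4.2
    U_x = encode(x, x_proto, 0.1)                        # population code of x
    U_y = C_xy.T @ U_x                                   # projection onto Fy: U_i^y
    y_e = np.sum(y_proto * U_y) / np.sum(U_y)            # population vector, eq. 6
    print("f(x) =", round(f(x), 2), "  estimate y_e =", round(y_e, 2))

The accuracy of the estimate is limited by the resolution of the two prototype sets and by the width of the encoding.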

3.3 DYNAMIC INVERSION

Let us now suppose that a target y in Fy is specified (identified by its population code on the vertical axis of the bottom panel of figure 3). The horizontal axis of the same panel shows five possible solutions: the peaks of the population code projected from Fy to Fx. How can the network choose one of them, and what is the criterion? In fact, the selection is automatically performed by equation 5, applied to the map Fx, with the input pattern {E_i} given by the projected population code.

Figure 3. Top: Forward projection of a population code {U_i^x} (which codes x = 1.5) onto Fy via the cross-connections. {U_i^y} is the projected activation, plotted versus the y-prototypes. Bottom: Inverse projection of a population code {U_i^y} (which codes y = 2) onto Fx via the cross-connections. {U_i^x} is the projected activation, plotted versus the x-prototypes.

Figure 4 illustrates the process. The dashed curve on the horizontal axis is the (arbitrary) initial state of the map Fx. The equilibrium that is reached is the continuous curve centered around x = 1, which in fact is the "closest" solution to the initial state. In qualitative terms, the process can be described as follows:
- at t = 0 the pattern of activity in Fy suddenly changes, thus selecting a new target {U_i^y}; this is mapped onto Fx by way of the inter-connections, thus setting up a pattern of excitation on the x-map, {E_j^x};
- the population code in Fx starts diffusing around its initial peak of activation as a consequence of the recurrent excitatory connections, thus flattening its shape and expanding it in all directions throughout the map;
- the waveform is amplified as soon as it reaches the local peak in the input pattern {E_j^x} that happens to be the "closest one" in relation to the wavefront;
- this non-linear behaviour tends to capture the activity of the whole network, designating the corresponding neuron as the "winner".
Thus the criterion of selection embedded in the network model is a criterion of minimum distance in the map. The solution is reached in a continuous fashion, as a moving hill from the dashed peak to the continuous peak. In other words, the neural architecture operates at the same time as a regularisation mechanism for the inversion process and as a mechanism of "trajectory formation". In fact, {E_i^y} identifies a population vector on Fy that can be interpreted as a target in an end-point control dynamics; the final population vector of Fx will correspond to the selected solution of the ill-posed inversion problem of the y = y(x) mapping (see figure 4).

Figure 4. Dynamic inversion of the y(x) mapping. A target is selected by setting up a population code {U_i^y} in the y-map. This generates a multi-bump pattern of input stimulation in the x-map via the cross-connections between the two maps. The initial population code {U_i^x} in the x-map (dashed curve, centered at x = 0.5) is transformed into the final population code (continuous curve, centered at x = 1) by way of the network dynamics, thus solving the ill-posed inversion problem.

4. Application of cortical maps to articulatory speech synthesis

Like all motor activities, the process of speech synthesis can be investigated in a computational framework: the plant is the whole oral cavity (jaw, lips, tongue, and larynx) plus the vocal cords and the lungs, and it must be carefully controlled by the brain in order to produce speech sequences (Sanguineti et al 1997b, 1998). More specifically, the CNS (Central Nervous System) must be able to generate a continuous flow of motor commands (i.e. muscle activations) resulting in movements of the oral cavity, or articulatory gestures, that allow a desired sequence of acoustic targets to be tracked. The problem is also relevant from an engineering point of view: with respect to state-of-the-art techniques, articulatory speech synthesis is expected to provide a much greater degree of adaptivity to different speech rates, stress conditions, geometry of the speaker's oral cavity, etc. than conventional, purely acoustic methods.

One major problem that the CNS has to solve is put into evidence by variability studies, which suggest that each phonetic "target" constrains but does not uniquely specify a configuration of the oral cavity; for instance, a vowel identifies a very narrow set of resonant frequencies of the vocal tract, or formants, whereas most consonants are essentially specified in terms of the position in the oral cavity of a constriction (labial, dental, palatal, or velar). In both cases, the generation of articulatory trajectories implies a kind of coordinate transformation, which is not uniquely determined because of redundancy, in the sense that there are several configurations corresponding to the same audible sound. On the other hand, the human articulatory system is able to fully exploit redundancy, as demonstrated by the phenomenon of compensation in the classical bite-block and lip-tube experiments: if the jaw (or the lips) are mechanically constrained, a subject is still able to generate acoustically intelligible phonemes (i.e. to fully "reach" the phonetic targets) by slightly modifying the contributions of the other articulators (Lindblom et al 1979). Compensation suggests that the inverse acoustic-to-articulatory transformation is not defined a priori, i.e., a large set of inverses is accessible to the controller, and only one is selected at run-time. Moreover, redundancy is critically important in the process of chaining a sequence of phonemes: it has been observed that the realisation of a particular phoneme is heavily influenced by its "context", thus suggesting that each phoneme is prepared in advance by exploiting the excess degrees of freedom of the oral cavity in order to generate trajectories that are more "economic" (in terms of smoothness and energy expenditure); this phenomenon is known as coarticulation (Ohman 1967).

Laboissière (1992), adapting an approach originally suggested by Jordan and Rumelhart (1992), proposed a computational model in which the controller is synthesised by inverting a forward model of the oral cavity (1) with the global regularisation constraint of minimum jerk (Flash and Hogan 1985). However, such a geometric or static model does require a separate dynamic mechanism to account for trajectory formation with a high degree of adaptivity: in fact, it should be able to account for the effect of different speech rates and different stress conditions (the phenomenon of vowel reduction) on a given sequence of phonemes. The issue of sequence generation also puts into evidence that targets have both a spatial and a temporal aspect. In fact, it has been observed that vowels and consonants have a somewhat dual nature: vowels are well specified spatially (in the acoustic space), whereas consonants are more accurately specified in time (in the sense that the time interval in which they are "active", or their context, is smaller than for vowels). This has suggested that the concept of target should be generalised to that of constraint (or landmark), with both spatial (articulatory and/or acoustic) and temporal aspects. Thus, potential fields acting on spatial maps are natural mechanisms for capturing the generality of the motor control problems outlined above.

(1) A forward speech model transforms the vocal tract geometry into the corresponding set of formants.


4.1 CORTICAL CONTROL OF SPEECH MOVEMENTS

Let us now examine a framework for adapting the cortical map model to the specific needs of speech motor control. A minimal computational architecture, limited to vowel production, consists of (i) an articulatory map, representing the space X of articulatory gestures, i.e. the geometric configurations that the vocal tract can possibly assume during speech movements, and (ii) an acoustic map, accounting for the corresponding audible sounds, Y (see figure 5). The intra-connections code the topological information associated with both X and Y, whereas the inter-connections contain the information on the relationship between them. The architecture can be extended to the case of consonants by means of an additional map, which represents the set of localised constrictions of the vocal tract, i.e. the space C of consonantal landmarks. As noted in the previous section, an architecture of this kind is by its nature completely symmetric, so it can be considered at the same time as a forward and as an inverse model in the context of speech motor control. In particular, the articulatory map can be interpreted as an internal model or body scheme (Morasso and Sanguineti 1995) of the oral cavity; it has been shown (Laboissière et al 1995) that its structure and dimensionality are closely related to the geometric arrangement of the muscles, therefore suggesting that it can spontaneously emerge from muscle proprioceptive information.

Figure 5 shows the conceptual layout of the three maps, together with the intra- and inter-connections. The figure also shows how the motor output can be generated in order to reach vowel and/or consonant targets, indicated by their corresponding population codes in the two maps: these distributions are mapped onto the articulatory map via the inter-connections, thus inducing the field E^x that drives the trajectory of the vocal tract configuration in agreement with eq. 5.

Fig. 5. The minimal computational architecture for cortical control of speech movements. Fx: articulatory map; Fy: acoustic map; Fc: constriction map. E^x is the external field mapped onto the articulatory map by the "targets" in the acoustic and constriction maps. The trajectory of the population code in Fx as it relaxes under E^x is the overall articulatory gesture that realises the acoustic goal.

4.2 AN EXPERIMENTAL STUDY

In the following we describe a computer simulation, based on real data, which is limited to the case of vowels for simplicity. The simulations are based on a purely geometric description, x, of the vocal tract shape, derived from the articulatory model of Maeda (1988) and characterised by ten parameters (Badin et al 1995):
LH -- Lip Height
LP -- Lip Protrusion
JH -- Jaw Height
TB -- Tongue Body
TD -- Tongue Dorsum
TT -- Tongue Tip
TA -- Tongue Advance
LY -- Larynx
VH -- Velum Height
LV -- Lips Vertical
so that x = [LH, LP, JH, TB, TD, TT, TA, LY, VH, LV]. The acoustic output is coded by a vector of 5 formants, which can be computed from the recorded speech sounds and correspond to the resonant frequencies of the vocal tract: y = [F1, F2, F3, F4, F5]. The two parametrisation schemes have been chosen for computational convenience and have no special biological significance. In fact, it is quite likely that the auditory system uses a more complex spectral analysis of sounds, and the biological articulatory system is certainly much more complex than what is described in the Maeda model. Nonetheless, the two schemes, which are both redundant with respect to natural data, capture the computational essence of the sensorimotor problem: to reduce the complexity of the sensorimotor patterns in a substantial manner and to exploit the resulting redundancy as a safety mechanism. Thus, the aim of the following sections is to demonstrate the computational power of the approach and its "coordinate-free" nature.

4.2.1 The training procedure

The available data set consists of about 5000 digitised cineradiographic X-ray images of the sagittal view of the vocal tract (the sampling frequency was 50 Hz) of a male French speaker pronouncing VVV and VCV sequences (Badin et al 1995). The vector prototypes in the articulatory and acoustic maps (1000 and 500 neurons, respectively) were trained separately by means of the soft-competitive learning algorithm (Benaim 1994), each based on 10 presentations of the whole training set; the learning rate varied from 0.4 to 0.01. The intra-connections were constructed by means of the TRN algorithm (Martinetz and Schulten 1994), which is very sensitive to the size of the training set (at most, each sample can set up one new connection; therefore a small training set would result in an underestimation of the number of intra-connections). However, since the soft-competitive learning algorithm guarantees (Benaim and Tomasini 1991) that a sum of Gaussians centered on the learned prototypes optimally approximates the probability density function of the original data set, this distribution was exploited in order to generate a synthetic training set of suitable size (about 100 times the number of neurons).

The statistics of the number of intra-connections over the whole map provides a way to estimate the inherent dimensionality of the input space (Morasso et al 1998): more specifically, it has been shown that topologically continuous lattices behave like regular sphere packings (Conway and Sloane 1993) of the same dimension, and in particular the mean number of intra-connections equals the kissing number k(n) (i.e. the number of spheres that are in contact with, or "kiss", a given one) for that dimension; therefore, from the kissing number it is possible to estimate the lattice dimension. The histograms of the number of intra-connections for the articulatory and the acoustic map are shown in figure 6; in the articulatory map the mean number of intra-connections is 25.35, suggesting a dimensionality of 4-5 (k(4) = 24, k(5) = 40), whereas in the acoustic map there are 12.22 intra-connections on average, resulting in a dimensionality of 3-4 (k(3) = 12).

Both maps can be represented as graphs in the space of prototypes: the nodes of the graphs identify the receptive-field centers of the neurons in the maps and the links identify the corresponding sets of intra-connections. Visualisation of high-dimensional maps is not trivial but is important in order to get a feeling of the internal structure. For the acoustic map, which is embedded in the 5-dimensional space of formants, figure 7 proposes a simple projection onto the two main planes: F1-F2 and F2-F3. The former projection, in particular, clearly outlines the well-known "triangle of vowels". The exploration of the articulatory map, which is embedded in the 10-dimensional articulatory space, is more complex because there is no ranking among the coordinates.
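
The dimensionality estimate based on connectivity statistics can be sketched as follows. Prototypes are drawn at random instead of being learned by soft-competitive learning, and only a synthetic 3-dimensional data set is used; both are assumptions made to keep the example short, so the printed numbers only illustrate the procedure.

    import numpy as np

    KISSING = {1: 2, 2: 6, 3: 12, 4: 24, 5: 40, 6: 72, 7: 126, 8: 240}

    def build_connections(samples, prototypes):
        """Competitive-Hebbian rule: connect the winner and the second winner for each sample."""
        N = len(prototypes)
        C = np.zeros((N, N))
        for x in samples:
            d = np.linalg.norm(prototypes - x, axis=1)
            i, j = np.argsort(d)[:2]
            C[i, j] = C[j, i] = 1.0
        return C

    def estimate_dimension(C):
        mean_links = C.sum(axis=1).mean()
        # pick the dimension whose kissing number is closest to the mean connectivity
        best = min(KISSING, key=lambda n: abs(KISSING[n] - mean_links))
        return best, mean_links

    rng = np.random.default_rng(0)
    data = rng.uniform(size=(60000, 3))          # synthetic 3-D input space
    prototypes = rng.uniform(size=(500, 3))      # stand-ins for learned prototypes
    dim, links = estimate_dimension(build_connections(data, prototypes))
    print("mean intra-connections per node: %.1f -> estimated dimension: %d" % (links, dim))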

Fig. 6. Histograms of the number of intra-connections of each neuron after learning: acoustic map (left), articulatory map (right).


Fig. 7. Projections of the acoustic map in the F1-F2 plane (top) and in the F2-F3 plane (bottom). The center of the receptive field of each neuron in the map is a point in the 5-dimensional formant space. This is graphically represented by a set of nodes in the pair of graphs, and the links correspond to the learned set of intra-connections.

However, it is possible to carry out a PCA (2) of the map in the neighbourhood of a given node and to simulate "gestures" of the vocal apparatus along the principal directions. In our case we could estimate that four principal components capture over 90% of the variance, in agreement with the estimate of the intrinsic dimensionality of the map performed above. Figure 8 illustrates the corresponding gestures, related to a point in the middle of the map.

(2) PCA is Principal Component Analysis: it corresponds to a change of coordinates of a given data vector x in such a way that the new coordinate axes are the eigenvectors of the covariance matrix of x. The new coordinates are ranked according to the corresponding eigenvalues. In particular, the first principal component is the projection of x on the eigenvector with the largest eigenvalue.

Fig. 8. Visualisation of principal articulatory "gestures". The four boxes correspond to the first four principal components of the articulatory map, in the neighbourhood of the central point of the map.
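
A local PCA of this kind can be sketched as below. The "map" is synthetic (random prototypes on an assumed 4-dimensional manifold embedded in 10 dimensions) and the Euclidean nearest prototypes stand in for the lattice neighbourhood, so the numbers only illustrate the procedure, not the actual articulatory data.

    import numpy as np

    rng = np.random.default_rng(1)
    # synthetic "articulatory map": prototypes on a 4-D manifold embedded in 10-D
    latent = rng.normal(size=(1000, 4))
    A = rng.normal(size=(4, 10))
    prototypes = latent @ A + 0.01 * rng.normal(size=(1000, 10))

    def local_pca(prototypes, node, k=30):
        """PCA of the k prototypes nearest to a given node of the map."""
        d = np.linalg.norm(prototypes - prototypes[node], axis=1)
        neigh = prototypes[np.argsort(d)[:k]]
        centered = neigh - neigh.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(centered.T))   # local covariance matrix
        order = np.argsort(evals)[::-1]                     # rank by decreasing eigenvalue
        return evals[order], evecs[:, order]

    evals, evecs = local_pca(prototypes, node=0)
    explained = np.cumsum(evals) / np.sum(evals)
    print("variance captured by the first 4 components: %.1f%%" % (100 * explained[3]))
    # a "principal gesture": small displacements from the node along the first principal direction
    gesture = prototypes[0] + np.linspace(-1, 1, 5)[:, None] * evecs[:, 0]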

4.2.3 Field representation of phonemic targets

Each vowel phoneme V in an articulatory sequence can be described as a time-varying potential field in the acoustic space: E_V^y(y, t) = E_V^y(y) f(t), where E_V^y(y) is a bell-shaped distribution over the acoustic map that defines the acoustic target or landmark and f(t) is the target activation function. Moreover, if we normalise the distribution (∫ E_V^y(y) dy = 1) we can interpret it in a probabilistic sense: E_V^y(y) = p(y | V). The rather limited acoustic variability of vowels in speech samples suggests that they can be described by a Normal distribution with a small variance: E_V^y(y) = G(y − ỹ_V; σ²), where ỹ_V is the acoustic prototype of the vowel. A similar description can be applied to consonant phonemes in the corresponding map. The weighting function f(t) is intended to operate at the same time as a "selector" mechanism (the corresponding target is "selected" only when f(t) > 0) and as a modulator of the target "importance". Therefore it defines how the field varies with time, and determines how the basic phonemes are chained in a complex speech sequence. The activation intervals of different phonemes are limited and can overlap, thus accounting for coarticulation phenomena. The role of this activation function is somewhat similar to that of the GO-function described by Bullock and Grossberg (1989) and can be linked to recurrent thalamo-cortical circuits.
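
A phonemic field of this kind takes only a few lines to write down. The sketch below evaluates a normalised Gaussian target distribution over a set of acoustic prototypes and modulates it by a truncated-Gaussian activation function; all numerical values (prototype layout, σ, timing) are assumptions chosen purely for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    y_proto = rng.uniform([200, 600], [900, 2600], size=(500, 2))  # toy F1-F2 prototypes (Hz)

    def phonemic_field(y_target, sigma=60.0):
        """Normalised Gaussian distribution E_V^y over the acoustic map (spatial part)."""
        d2 = np.sum((y_proto - y_target) ** 2, axis=1)
        E = np.exp(-0.5 * d2 / sigma ** 2)
        return E / E.sum()

    def activation(t, t_on, t_off, width=0.03):
        """Truncated-Gaussian activation function f(t): selects and weights the target."""
        center = 0.5 * (t_on + t_off)
        f = np.exp(-0.5 * ((t - center) / width) ** 2)
        return np.where((t >= t_on) & (t <= t_off), f, 0.0)

    # time-varying field E_V^y(y, t) = E_V^y(y) f(t) for a hypothetical /a/ target
    E_a = phonemic_field(np.array([700.0, 1200.0]))     # assumed F1-F2 values for /a/
    t = np.linspace(0.0, 0.3, 7)
    field_t = activation(t, 0.05, 0.20)[:, None] * E_a[None, :]
    print("field shape (time points x map nodes):", field_t.shape)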

4.2.4 Non-audible gestures and compensation

The specification of vowel phonemes in terms of fields is particularly suitable for exploring the compensatory mechanisms that exploit the redundancy of the articulatory-acoustic mapping. For instance, it is known that in rounded vowels like /u/ the jaw articulator JH can compensate the effect of lip height LH (Lindblom et al 1979). This effect can be displayed by projecting the /u/ field back onto the articulatory map: E_u^x(x̃_i) = Σ_j c_ij^xy E_u^y(ỹ_j). We can now display, in the JH-LH plane, the subset of the articulatory map corresponding to non-zero values of E_u^x(x̃_i): we obtain a direct picture of the set of non-audible gestures that correspond to the same acoustic output /u/ (figure 9). In general, this mechanism makes it possible to investigate the "no-motion manifold" of a redundant mapping y = y(x).

Fig. 9. Approximate sub-map of "non-audible gestures" corresponding to /u/, displayed in the JH-LH plane. This means that coordinated movements of the two articulators inside this sub-map do not significantly alter the acoustic output.

The field mechanism can also put into evidence the acoustic effect of constraints in articulatory space, such as the application of a bite block or a lip tube. For instance, the bite-block constraint JH = 1.0 can be expressed by the following field: {E_i^x = U_i^x(x | JH = 1)}. Let us project it back onto the acoustic map, thus setting up a distribution of activation {E_i^y}. In a similar way to the case of non-audible gestures, we can consider the (reduced) acoustic map which corresponds to non-zero values of the projected field: it displays the range of acoustic behaviours that can be obtained under the bite-block constraint. Figure 10 shows, for example, that with this constraint it is not possible to generate the phoneme /i/, which is situated in the upper-left corner of the "triangle of vowels" (see the F1-F2 plane of figure 7): in fact that part of the triangle is missing in figure 10, and thus the network dynamics can only approximate the designated target in this case. In a similar way, we can examine the lip-tube constraint, which can be expressed by setting the LH variable to a constant value. Figure 11 shows the corresponding reduced map.

Fig. 10. Reduced acoustic map, corresponding to the bite-block constraint. F1-F2 plane (top) and F2-F3 plane (bottom).

Fig. 11. Reduced acoustic map, corresponding to the lip-tube constraint. F1-F2 plane (top) and F2-F3 plane (bottom).
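
The back-projection used here is simply the transpose of the cross-connectivity matrix applied to a field, and the "reduced map" is the support of the projected field. The sketch below shows the operation on synthetic maps and cross-connections; the data, the constrained coordinate and the threshold are illustrative assumptions, not the trained maps of this study.

    import numpy as np

    rng = np.random.default_rng(3)
    Nx, Ny = 1000, 500
    x_proto = rng.uniform(size=(Nx, 10))            # toy articulatory prototypes
    y_proto = rng.uniform(size=(Ny, 5))             # toy acoustic prototypes
    C_xy = (rng.uniform(size=(Nx, Ny)) < 0.01).astype(float)   # toy cross-connections

    # articulatory constraint field, e.g. a "bite block": clamp coordinate 2 (stand-in for JH)
    JH = 2
    E_x = np.exp(-0.5 * ((x_proto[:, JH] - 1.0) / 0.1) ** 2)

    # project the constraint field onto the acoustic map: E^y_j = sum_i c_ij^xy E^x_i
    E_y = C_xy.T @ E_x

    # the "reduced acoustic map" is the set of acoustic nodes with a non-negligible field
    reduced = np.where(E_y > 0.05 * E_y.max())[0]
    print("acoustic nodes reachable under the constraint: %d of %d" % (len(reduced), Ny))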

4.2.5 Generation of VVV... sequences

The relaxation dynamics described in the section on dynamic inversion provides a suitable computational framework for the synthesis of articulatory commands corresponding to a desired sequence of phonemes. The simulations described in the following are limited to the vocalic case for simplicity, but the same approach can be applied to the more general case without major changes.

The sequence of vocalic targets (V1, V2, …) is identified by a corresponding set of phonemic fields ({E_V1^y}, {E_V2^y}, …) and activation functions (f_1(t), f_2(t), …). In the simulations the activation functions were partially overlapping, truncated Gaussians. The variance values, which identify the phonemic durations, and the delay shifts, which determine the degree of interphonemic overlap or coarticulation of consecutive phonemes, were chosen in order to fit the data. The time-varying phonemic field is obtained by a simple linear combination:

    E_VVV...^y(ỹ_i, t) = Σ_s E_Vs^y(ỹ_i) f_s(t)                               (7)

and is then projected onto the articulatory map via the set of inter-connections, setting up a corresponding articulatory field. Equation 5 drives the relaxation of the population code towards this field, thus generating a smooth articulatory trajectory. This is shown in figure 12 for a simple /ae/ sequence. It should be noted that the smoothness of this trajectory formation process is an implicit side-effect of the relaxation dynamics and does not require any explicit minimisation task, as suggested for example by the theory of minimum jerk (Flash and Hogan 1985). Since the driving field only specifies the desired set of targets, the mechanism is a kind of generalised end-point control in configuration space; in other words, the inversion process takes place at the level of the target representation (from acoustic space back to articulatory space). The shape of the trajectory is determined by the inherent dynamics as well as by the topology of the intra- and inter-connections, which affect the propagation of the activation field. At the same time, this is a very flexible mechanism, which allows all sorts of effects to be obtained by modulating the phonemic fields in time.

Fig. 12. Articulatory synthesis of the simple /ae/ sequence. The left part of the figure shows the activation functions of the /a/ and /e/ phonemes and the corresponding trajectory in the acoustic space of formants. The right part shows the trajectory in the articulatory space generated by equation 5. Parameters of the simulation: τ = 50 ms, cp = 10 (gain of the recurrent input), ce = 30 (gain of the external input).
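
Putting the pieces together, the following sketch chains two vocalic targets by combining their fields according to eq. 7, projecting the result onto the articulatory map through the cross-connections and relaxing eq. 5 at every time step. The maps, cross-connections and parameters are synthetic stand-ins, so the resulting trajectory only demonstrates the computational pipeline, not a realistic articulatory gesture.

    import numpy as np

    rng = np.random.default_rng(4)
    Nx, Ny = 300, 100
    x_proto = rng.uniform(size=(Nx, 10))               # toy articulatory prototypes
    y_proto = rng.uniform(size=(Ny, 2))                # toy acoustic (F1-F2-like) prototypes
    C_xy = (rng.uniform(size=(Nx, Ny)) < 0.03).astype(float)   # toy cross-connections
    C_xx = (rng.uniform(size=(Nx, Nx)) < 0.02).astype(float)   # toy articulatory lattice
    C_xx = np.maximum(C_xx, C_xx.T)
    np.fill_diagonal(C_xx, 0.0)

    def field(y_target, sigma=0.08):
        E = np.exp(-0.5 * np.sum((y_proto - y_target) ** 2, axis=1) / sigma ** 2)
        return E / E.sum()                             # normalised phonemic field E_V^y

    def activation(t, center, width=0.06):
        return np.exp(-0.5 * ((t - center) / width) ** 2)   # activation function f_s(t)

    targets = [np.array([0.2, 0.8]), np.array([0.7, 0.3])]  # stand-ins for /a/ and /e/
    centers = [0.1, 0.3]                               # activation peaks (s)

    c_p, c_e, tau, dt = 10.0, 30.0, 0.05, 0.002
    U = np.full(Nx, 1.0 / Nx)
    trajectory = []
    for t in np.arange(0.0, 0.4, dt):
        E_y = sum(activation(t, c) * field(y) for y, c in zip(targets, centers))  # eq. 7
        E_x = C_xy @ E_y                               # field projected onto the articulatory map
        recurrent = C_xx @ (U / (C_xx @ U + 1e-12))
        U = U + dt / tau * (c_p * recurrent + c_e * U * E_x - U)   # eq. 5
        U = U / U.sum()                                # numerical safeguard (assumption)
        trajectory.append(x_proto.T @ U)               # articulatory population vector
    trajectory = np.array(trajectory)
    print("articulatory trajectory, shape (time steps x parameters):", trajectory.shape)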

We tested the computational power of this prototype mechanism of articulatory speech synthesis by fitting its parameters to experimentally observed VVV... sequences. Figure 13 shows an example, related to the sequence /aεeiyuo/, for which we had the time course of both the formants and the articulators. First we adapted the parameters of the phonemic fields (position and spread of each phoneme in the formant space) and the corresponding activation functions (position and spread along the time axis): the left part of the figure shows the real and the simulated trajectories in the space of formants and the set of activation functions. Then we simulated the dynamic cortical model expressed by equation 5 and tuned its parameters (the time constant τ and the two gain parameters ce and cp). The final fit is visualised in the right part of the figure and is reasonably good. Similar results were obtained with other VVV… sequences.

Fig. 13. Articulatory synthesis of the /aεeiyuo/ sequence. The left part of the figure shows the activation functions of the seven phonemes and the corresponding trajectory in the acoustic space of formants: experimental data (dashed line); simulated data (continuous line). The right part shows the trajectory in the articulatory space generated by equation 5 (continuous line) and the corresponding experimental data. Parameters of the simulation: τ = 50 ms, cp = 10 (gain of the recurrent input), ce = 30 (gain of the external input).

It should be noted that the model is able to synthesise a given sequence at different speech rates and under different stress conditions: the speech rate is controlled by simply scaling the distances in time among consecutive target fields, whereas the level of stress can be controlled by varying the time constant τ of the map relaxation. To illustrate this aspect, the same sequence, /aia/, was simulated with different values of τ, corresponding to different stress conditions, while keeping the speech rate constant. The results are shown in figure 14: in the stressed case the full /i/ is reached, whereas in the unstressed case a different vocalic sound is produced, which lies between /a/ and /i/.

Fig. 14. Simulation of the /aia/ sequence displaying the phenomenon of vowel reduction. Stressed (τ = 0.05 s, left) and unstressed (τ = 0.15 s, right) conditions; same speech rate.

5. Conclusions

A model of cortical map organisation has been described, which is particularly suited for the planning of complex sensorimotor tasks and has some elements of biological plausibility. This line of research can be pursued in at least two directions: (i) using the model as a tool for the analysis of the dynamic behaviour of cortical activity, when dealing with modern techniques of functional brain imaging (Frisone et al 1998b, 1999); (ii) improving the biological plausibility of the model as regards the training method. In fact, the TRN modelling framework, used throughout the simulations, can hardly be considered as a candidate biological paradigm, because of its intrinsically non-local nature and the discreteness of the recurrent excitatory connections. However, it is possible to overcome this conceptual obstacle by adopting an extended Hebbian paradigm, derived from the theory of Expectation Maximisation (Morasso et al 1998, Frisone et al 1998a). In this framework the lateral connection coefficients c_ij are no longer binary, although they remain excitatory, and their number is much greater than that consistent with the TRN theory. The idea is to approximate a graph of binary connections with a graph of graded connections, where each binary connection is substituted by a cluster of graded connections. The level of equivalence between the two models can be established at the dynamic level, in the sense that the same input pattern must yield approximately the same point attractor in the two types of network. In this way, what is lost from the point of view of compactness (the TRN model is much more compact) is gained from the point of view of biological plausibility (a global learning paradigm is substituted by a local one). In practice, this kind of model is computationally very intensive and requires a significant jump in normally available computational resources before becoming a practical tool for the analysis of high-dimensional motor control problems.

Acknowledgments. This research was supported by the EU project SPEECH-MAPS and by the Ministry of University & Research.

References

Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern., 27, 77-87 (1977)
Badin P, Gabioud B, Beautemps D, Lallouache T, Bailly G, Maeda S, Zerling JP, Brock G. Cineradiography of VCV sequences: articulatory-acoustic data for a speech production model. In International Conference on Acoustics, pp. 349-352, Trondheim, Norway (1995)
Benaim M. On functional approximation with normalized Gaussian units. Neural Comput., 6, 319-333 (1994)
Benaim M, Tomasini L. Competitive and self-organizing algorithms based on the minimization of an information criterion. In "Artificial Neural Networks" (T. Kohonen, K. Makisara, O. Simula, and J. Kangas, editors), pp. 391-396, North-Holland, Amsterdam (1991)
Braitenberg V. Vehicles - Experiments in Synthetic Psychology. MIT Press, Cambridge, Mass. (1984)
Bullock D, Grossberg S. VITE and FLETE: neural modules for trajectory formation and postural control. In "Volitional Action" (W.A. Hershberger, editor), pp. 253-297, North-Holland, Amsterdam (1989)
Calvin WH. Cortical columns, modules, and Hebbian cell assemblies. In "The handbook of brain theory and neural networks" (M.A. Arbib, editor), pp. 269-272. MIT Press, Cambridge, Mass. (1995)
Conway JH, Sloane NJA. Sphere Packings, Lattices and Groups. Springer-Verlag, New York, NY (1993)
Das A, Gilbert CD. Topography of contextual modulations mediated by short-range interactions in primary visual cortex. Nature, 399, 655-661 (1999)
Droulez J, Berthoz A. A neural model of sensoritopic maps with predictive short-term memory properties. Proceedings of the National Academy of Sciences, 88, 9653-9657 (1991)
Durbin R, Mitchison G. A dimension reduction framework for understanding cortical maps. Nature, 343, 644-647 (1990)
Flash T, Hogan N. The coordination of arm movements: an experimentally confirmed mathematical model. J. Neurosci., 7, 1688-1703 (1985)
Frisone F, Morasso P, Perico L. Self-organization in cortical maps & EM learning. J. Advanced Computat. Intel., 2, 178-184 (1998a)
Frisone F, Vitali P, Morasso P. Cortical activity pattern in complex tasks. In "Computational Neuroscience: Trends in Research" (J.M. Bower, editor), Plenum Press, pp. 13-18 (1998b)
Frisone F, Vitali P, Iannò G, Marongiu M, Morasso P, Pilot A, Rodriguez G, Rosa M, Sardanelli F. Can the synchronization of cortical areas be evidenced by fMRI? J. Neurocomp., 26-27, 1019-1024 (1999)
Georgopoulos AP, Lurito JT, Petrides M, Schwartz AB, Massey JT. Mental rotation of the neuronal population vector. Science, 243, 234-236 (1989)
Gilbert CD, Wiesel TN. Morphology and intracortical projections of functionally identified neurons in cat visual cortex. Nature, 280, 120-125 (1979)
Grajski KA, Merzenich MM. Hebb-type dynamics is sufficient to account for the inverse magnification rule in cortical somatotopy. Neural Comput., 2, 71-84 (1990)
Grossberg S. Contour enhancement, short term memory, and constancies in reverberating neural networks. Studies in Appl. Math., 52, 213-257 (1973)
Jordan MI, Rumelhart DE. Forward models: supervised learning with a distal teacher. Cognitive Sci., 16, 307-354 (1992)
Katz LC, Callaway EM. Development of local circuits in mammalian visual cortex. Ann. Rev. Neurosci., 15, 31-56 (1992)
Knudsen EI, du Lac S, Esterly SD. Computational maps in the brain. Ann. Rev. Neurosci., 10, 41-65 (1987)
Kohonen T. Self-organizing formation of topologically correct feature maps. Biol. Cybern., 43, 59-69 (1982)
Laboissière R. Préliminaires pour une Robotique de la Communication Parlée: Inversion et Controle d'un Modèle Articulatoire du Conduit Vocal. PhD Thesis, Institut National Polytechnique de Grenoble (1992)
Levitan S, Reggia JA. Interhemispheric effects on map organization following simulated cortical lesions. Artif. Intell. Med., 17, 59-85 (1999)
Levitan S, Reggia JA. A computational model of lateralization and asymmetries in cortical maps. Neural Comput., 12, 2037-2062 (2000)
Lindblom B, Lubker J, Gay T. Formant frequencies of some fixed-mandible vowels and a model of speech motor programming by predictive simulation. J. Phonetics, 7, 147-161 (1979)
Lukashin AV, Georgopoulos AP. A neural network for coding trajectories by time series of neuronal population vectors. Neural Comput., 6, 19-28 (1994)
Maeda S. Improved articulatory model. J. Acoust. Soc. America, 81(S1), S146 (1988)
Martinetz T, Schulten K. Topology representing networks. Neural Networks, 7, 507-522 (1994)
Morasso P, Sanguineti V. Self-organizing body-schema for motor planning. J. Motor Behav., 26, 131-148 (1995)
Morasso P, Sanguineti V. How the brain can discover the existence of external egocentric space. Neurocomput., 12, 289-310 (1996)
Morasso P, Sanguineti V, Frisone F, Perico L. Coordinate-free sensorimotor processing: computing with population codes. Neural Networks, 11, 1417-1428 (1998)
Morasso P, Sanguineti V, Frisone F. Computational implications of modeling grasping as a form of (multiple-parallel) reaching. Motor Control, 3, 276-279 (1999)
Morasso P. Is schema theory an appropriate framework for modeling the organization of the brain? Behav. Brain Sci., 23 (2000)
Munoz DP, Pelisson D, Guitton D. Movement of neural activity on the superior colliculus motor map during gaze shifts. Science, 251, 358-360 (1991)
Nicoll A, Blakemore C. Patterns of local connectivity in the neocortex. Neural Comput., 5, 665-680 (1993)
Ohman A. Numerical model of coarticulation. J. Acoust. Soc. America, 41, 310-320 (1967)
Reggia JA, D'Autrechy CL, Sutton GG III, Weinrich M. A competitive distribution theory of neocortical dynamics. Neural Comput., 4, 287-317 (1992)
Salinas E, Abbott LF. Transfer of coded information from sensory to motor networks. J. Neurosci., 15, 6461-6474 (1995)
Sanger TD. Theoretical considerations for the analysis of population coding in motor cortex. Neural Comput., 6, 29-37 (1994)
Sanger TD, Merzenich MM. Computational model of the role of sensory disorganization in focal task-specific dystonia. J. Neurophysiol., 84, 2458-2464 (2000)
Sanguineti V, Morasso P, Frisone F. Cortical maps of sensorimotor spaces. In "Self-organization, Computational Maps, and Motor Control" (P. Morasso and V. Sanguineti, editors), pp. 1-36. North-Holland, Amsterdam (1997a)
Sanguineti V, Laboissière R, Payan Y. A control model of human tongue movements in speech. Biol. Cybern., 77, 11-22 (1997b)
Sanguineti V, Laboissière R, Ostry DJ. A dynamic biomechanical model for neural control of speech production. J. Acoust. Soc. America, 103, 1615-1627 (1998)
Schwark HD, Jones EG. The distribution of intrinsic cortical axons in area 3b of cat primary somatosensory cortex. Exper. Brain Res., 78, 501-513 (1989)
Singer W. Synchronization of neural responses as a putative binding mechanism. In "The handbook of brain theory and neural networks" (M.A. Arbib, editor), pp. 960-964. MIT Press, Cambridge, Mass. (1995)
Sirosh J, Miikkulainen R. Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neural Comput., 9, 577-594 (1997)
Zipser D, Andersen RA. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679-684 (1988)
