Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences

Hongyuan Mei, Mohit Bansal, Matthew R. Walter
Toyota Technological Institute at Chicago
Chicago, IL, USA 60637
{hongyuan,mbansal,mwalter}@ttic.edu

Abstract

We propose a neural sequence-to-sequence model for direction following, a multimodal task that is essential to realizing effective autonomous agents. Our alignment-based encoder-decoder model with long short-term memory recurrent neural networks (LSTM-RNN) translates natural language instructions to action sequences based upon a representation of the observable world state. We introduce a multi-level aligner that empowers our model to focus on sentence “regions” salient to the current world state by using multiple abstractions of the input sentence. In contrast to existing methods, our model uses no specialized linguistic resources (e.g., parsers) or task-specific annotations (e.g., seed lexicons). It is therefore generalizable, yet still achieves the best results reported to date on a benchmark single-sentence dataset and competitive results for the limited-training multi-sentence setting. We analyze our model through a series of ablation studies that elucidate the contributions of the primary components of our model.

1 Introduction

Robots must be able to understand and successfully execute natural language navigational instructions if they are to work seamlessly alongside people. For example, a soldier may command a micro aerial vehicle to “Fly down the hallway into the second room on the right.” However, interpreting such free-form instructions (especially in unknown environments) is challenging due to their ambiguity and complexity, such as uncertainty in their interpretation (e.g., which hallway does the instruction refer to?), long-term dependencies among both the instructions and the actions, differences in the amount of detail given, and the diverse ways in which the language can be composed. Figure 1 presents an example instruction that our method successfully follows.

Previous work in this multimodal (language and actions) domain [6, 5, 12, 13, 3, 2] largely requires specialized resources like semantic parsers, seed lexicons, and re-rankers to interpret natural language instructions. In contrast, our approach learns to map instructions to actions via an end-to-end neural method that assumes no prior linguistic resources, learning the meaning of all the words, spatial relations, syntax, and compositional semantics from just the raw training sequence pairs.

We propose a recurrent neural network with long short-term memory [10] to both encode the navigational instruction sequence bidirectionally and to decode the representation to an action sequence, based on a representation of the current world state. LSTMs are well-suited to this task, as they have been shown to be effective in learning the temporal dependencies that exist over such sequences in similar tasks [15, 11, 25, 23, 22, 26]. Additionally, we learn the correspondences between words in the input navigational instruction and actions in the output sequence using an alignment-based LSTM [4, 28]. Standard alignment methods only consider high-level abstractions of the input (language), which sacrifices information important to identifying these correspondences. Instead, we introduce a multi-level aligner that empowers the model to use both high- and low-level input representations and, in turn, improves the accuracy of the inferred directions.

[Figure 1 diagram: a map of one virtual world with the demonstrated path. Legend: Objects — B Barstool, C Chair, E Easel, H Hatrack, L Lamp, S Sofa; Wall paintings — Tower, Butterfly, Fish; Floor patterns — Blue, Brick, Concrete, Flower, Grass, Gravel, Wood, Yellow. Instruction: “Place your back against the wall of the ‘T’ intersection. Go forward one segment to the intersection with the blue-tiled hall. This interesction [sic] contains a chair. Turn left. Go forward to the end of the hall. Turn left. Go forward one segment to the intersection with the wooden-floored hall. This intersection conatains [sic] an easel. Turn right. Go forward two segments to the end of the hall. Turn left. Go forward one segment to the intersection containing the lamp. Turn right. Go forward one segment to the empty corner.”]

Figure 1: An example of a route instruction-path pair in one of the virtual worlds from MacMahon et al. [17]. Our method successfully infers the correct path for this instruction.

We evaluate our model on the benchmark MacMahon et al. [17] navigation dataset and achieve the best results reported to date on the single-sentence task (which contains only 2000 training pairs), without using any specialized linguistic resources, unlike previous work. On the multi-sentence task of executing a full paragraph, where the number of training pairs is even smaller (just a few hundred pairs), our model performs better than several existing methods and is competitive with the state-of-the-art, all of which use specialized linguistic resources (e.g., semantic parsers), extra annotation (e.g., seed logic-form lexicons), or reranking. We also perform a series of ablation studies in order to analyze the primary components of our model, including the encoder, the multi-level and standard aligners, and bidirectionality.

2 Related Work

A great deal of attention has been paid of late to algorithms that allow robots and autonomous agents to follow free-form navigational route instructions [17, 16, 18, 6, 24, 13, 2, 9]. These methods solve what Harnad [8] refers to as the symbol grounding problem, that of associating linguistic elements with their corresponding manifestation in the external world. Initial research in natural language symbol grounding focused on manually-prescribed mappings between language and a set of predefined environment features and a set of actions [27, 17]. More recent work in statistical language understanding learns to convert free-form instructions into their referent symbols by observing the use of language in a perceptual context [20]. One class of statistical methods operates by mapping free-form utterances to their corresponding object, location, and action referents in the agent’s world model [16, 24]. A second class treats the language understanding problem as one of parsing natural language commands into their formal logic equivalent [18, 6, 12, 3]. Both approaches typically represent language grounding in terms of manually defined linguistic, spatial, and semantic features [16, 18, 24]. They learn the model parameters from natural language corpora, often requiring expensive annotation to pair phrases with their corresponding groundings.

We adopt an alternative, sequence-to-sequence formulation for the multimodal task of interpreting route instructions. We propose an end-to-end neural network that uses no prior linguistic structure, resources, or annotation, which improves generalizability and allows for end-to-end training. Our method is inspired by the recent success of such methods for machine translation [23, 4], image and video caption synthesis [15, 25, 11], and natural language generation [22, 26]. Our model encodes the input free-form route instruction and then decodes the embedding to identify the corresponding output action sequence based upon the local, observable world state (which we treat as a third sequence type that we add as an extra connection to every decoder step). Moreover, our decoder also includes alignment to focus on the portions of the sentence relevant to the current action, a technique that has proven effective in machine translation [4] and machine vision [19, 28]. However, unlike standard alignment techniques, our model learns to align based not only on the high-level input abstraction, but also the low-level representation of the input instruction, which improves performance. Recently, Andreas and Klein [1] use a conditional random field model to learn alignment between instructions and actions; our LSTM-based aligner performs substantially better than this approach.

3 Task Definition

Given training data of the form (x^{(i)}, a^{(i)}, y^{(i)}) for i = 1, 2, . . . , n, where x^{(i)} is a variable-length natural language instruction, a^{(i)} is the corresponding action sequence, and y^{(i)} is the observable environment representation, our model learns to produce the correct action sequence a^{(i)} given a previously unseen (x^{(i)}, y^{(i)}) pair. The challenges arise from the fact that the instructions are free-form and complex, contain numerous spelling and grammatical errors, and are ambiguous in their meaning. Further, the model is only aware of the local environment in the agent’s line-of-sight. The dataset [17] contains three different virtual worlds (e.g., Fig. 1) consisting of interconnected hallways with a pattern (grass, brick, wood, gravel, blue, flower, or yellow octagons) on each hallway floor, a painting (butterfly, fish, or Eiffel Tower) on the walls, and objects (hat rack, lamp, chair, sofa, barstool, and easel) at intersections.

[Figure 2 diagram: the instruction (e.g., “go forward two segments to the end of the hall”) is encoded by a bidirectional LSTM-RNN, the multi-level aligner computes a context vector, and an LSTM-RNN decoder combines it with the world state to produce the action sequence.]

Figure 2: Our encoder-aligner-decoder model with multi-level alignment
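To make the (x^{(i)}, a^{(i)}, y^{(i)}) format concrete, the sketch below shows one hypothetical training triple in the shape a model could consume. The four-action encoding and the symbol names are illustrative assumptions, not the corpus's literal annotation format.

```python
# A hypothetical (instruction, action sequence, world state) training triple.
# The action names and visible-symbol labels below are assumptions made for
# illustration; the SAIL corpus stores its own raw annotation format.
example = {
    # x: free-form instruction, kept raw (spelling errors and all)
    "instruction": "go forward two segments to the end of the hall",
    # a: demonstrated action sequence the model must reproduce
    "actions": ["forward", "forward", "stop"],
    # y_t: one observable world-state snapshot per time step, sketched here
    # as the symbols visible in each direction
    "world_states": [
        {"forward": ["hall_blue", "easel"], "left": ["wall"], "right": ["hall_wood"]},
        {"forward": ["hall_blue"], "left": ["wall"], "right": ["wall"]},
        {"forward": ["wall"], "left": ["hall_grass"], "right": ["hall_wood"]},
    ],
}

# One world-state observation per action step.
assert len(example["actions"]) == len(example["world_states"])
```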

4 The Model

We formulate the problem of mapping a lingual instruction x_{1:N} to an action sequence a_{1:T} as inference over a distribution P(a_{1:T} | y_{1:T}, x_{1:N}), where a_{1:T} = (a_1, a_2, . . . , a_T) is the action sequence, y_t is the world state at time t, and x_{1:N} = (x_1, x_2, . . . , x_N) is the natural language instruction:

a^*_{1:T} = \arg\max_{a_{1:T}} P(a_{1:T} \mid y_{1:T}, x_{1:N}) = \arg\max_{a_{1:T}} \prod_{t=1}^{T} P(a_t \mid a_{1:t-1}, y_t, x_{1:N})    (1)
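As a minimal illustration of the factorization in Eq. (1), the sketch below scores two candidate action sequences as sums of per-step log-probabilities and selects the argmax. The toy per-step distributions stand in for the decoder's actual outputs and are not taken from the paper.

```python
import numpy as np

ACTIONS = ["forward", "left", "right", "stop"]

# Toy per-step conditionals P(a_t | a_{1:t-1}, y_t, x_{1:N}); in the model
# these come from the decoder output layer (Eq. 6a), here they are fixed
# placeholders for illustration only.
step_probs = np.array([
    [0.7, 0.1, 0.1, 0.1],   # t = 1
    [0.6, 0.2, 0.1, 0.1],   # t = 2
    [0.1, 0.1, 0.1, 0.7],   # t = 3
])

def sequence_log_prob(actions):
    """log P(a_{1:T} | y_{1:T}, x_{1:N}) as the sum of per-step log terms (Eq. 1)."""
    return sum(np.log(step_probs[t, ACTIONS.index(a)]) for t, a in enumerate(actions))

candidates = [["forward", "forward", "stop"],
              ["forward", "left", "stop"]]
best = max(candidates, key=sequence_log_prob)
print(best, sequence_log_prob(best))
```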

An effective means of learning this sequence-to-sequence mapping is to use a neural encoder-decoder architecture. We first use a bidirectional recurrent neural network to encode the input sentence, h_j = f(x_j, h_{j-1}, h_{j+1}), where h_j is the encoder hidden state for word j ∈ {1, . . . , N} and f is a nonlinear function. Next, the context vector z_t (computed by the aligner) encodes the language instruction at time t ∈ {1, . . . , T}. Another RNN then decodes the context vector z_t to arrive at the desired likelihood P(a_t | a_{1:t-1}, y_t, x_{1:N}) = g(s_{t-1}, z_t, y_t), where s_{t-1} is the decoder hidden state at time t − 1 and g is a nonlinear function. Inference then follows by maximizing this posterior to determine the desired action sequence.

Our model (Fig. 2) employs LSTMs as the nonlinear functions f and g due to their ability to learn long-term dependencies over the instruction and action sequences, without suffering from exploding or vanishing gradients. Our model also integrates multi-level alignment to focus on parts of the instruction that are more salient to the current action at multiple levels of abstraction. We next describe each component of our network in detail.

LSTM Our model deploys LSTMs (Fig. 3) as the recurrent units:

\begin{pmatrix} i_j \\ f_j \\ o_j \\ g_j \end{pmatrix} = \begin{pmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{pmatrix} T \begin{pmatrix} x_j \\ h_{j-1} \end{pmatrix}    (2a)

c_j = f_j \odot c_{j-1} + i_j \odot g_j    (2b)

h_j = o_j \odot \tanh(c_j)    (2c)

[Figure 3 diagram: an LSTM unit, showing the memory cell c with its input, forget, and output gates.]

Figure 3: LSTM unit.

where T is an affine transformation, σ is the logistic sigmoid that restricts its input to [0, 1], i_j, f_j, and o_j are the input, forget, and output gates of the LSTM, respectively, and c_j is the memory cell activation vector. The memory cell c_j summarizes the LSTM’s previous memory c_{j-1} and the current input, which are modulated by the forget and input gates, respectively. The forget and input gates enable the LSTM to regulate the extent to which it forgets its previous memory and the input, while the output gate regulates the degree to which the memory affects the hidden state.
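The numpy sketch below implements one LSTM step following Eqs. (2a)-(2c): a single affine map T of the concatenated input and previous hidden state is split into the three gates and the candidate activation. The dimensions and random initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_j, h_prev, c_prev, W, b):
    """One LSTM step (Eqs. 2a-2c).

    W, b define the affine transformation T applied to [x_j; h_prev]; its
    output is split into input gate i, forget gate f, output gate o, and
    candidate activation g.
    """
    d = h_prev.shape[0]
    pre = W @ np.concatenate([x_j, h_prev]) + b      # affine part of Eq. (2a)
    i = sigmoid(pre[0:d])                            # input gate
    f = sigmoid(pre[d:2 * d])                        # forget gate
    o = sigmoid(pre[2 * d:3 * d])                    # output gate
    g = np.tanh(pre[3 * d:4 * d])                    # candidate memory
    c = f * c_prev + i * g                           # Eq. (2b)
    h = o * np.tanh(c)                               # Eq. (2c)
    return h, c

# Illustrative sizes: K-dimensional one-hot word, d-dimensional hidden state.
rng = np.random.default_rng(0)
K, d = 10, 8
W = rng.normal(scale=0.1, size=(4 * d, K + d))
b = np.zeros(4 * d)
x_j = np.zeros(K); x_j[3] = 1.0                      # one-hot word vector
h, c = lstm_step(x_j, np.zeros(d), np.zeros(d), W, b)
```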

Encoder Our encoder takes as input the instruction sequence x_{1:N} = (x_1, x_2, . . . , x_N), where x_1 and x_N are the first and last words in the sentence, respectively. We treat each word x_j as a K-dimensional one-hot vector, where K is the vocabulary size. We feed this sequence into an LSTM-RNN that summarizes the temporal relationships between previous words and returns a sequence of hidden annotations h_{1:N} = (h_1, h_2, . . . , h_N), where the annotation h_j summarizes the words up to and including x_j. Our encoder is bidirectional [7, 4], and the hidden state h_j = (\overrightarrow{h}_j^\top; \overleftarrow{h}_j^\top)^\top concatenates the forward \overrightarrow{h}_j and backward \overleftarrow{h}_j hidden annotations (Eqn. (2c)).

Multi-level Aligner The context representation of the instruction is computed as a weighted sum of the word vectors x_j and encoder states h_j. Whereas most previous work aligns based only on the hidden annotations h_j, we found that also including the original input word x_j in the aligner improves performance. This multi-level representation allows the decoder to not just reason over the high-level, context-based representation of the input sentence h_j, but to also consider the original low-level word representation x_j. By adding x_j, the model offsets information that is lost in the high-level abstraction of the instruction. Intuitively, the model is able to better match the salient words in the input sentence (e.g., “easel”) directly to the corresponding landmarks in the current world state y_t used in the decoder. The context vector then takes the form

z_t = \sum_j \alpha_{tj} \begin{pmatrix} x_j \\ h_j \end{pmatrix}    (3)

The weight α_{tj} associated with each pair (x_j, h_j) is

\alpha_{tj} = \exp(\beta_{tj}) \Big/ \sum_{j'} \exp(\beta_{tj'}),    (4)

where the alignment term β_{tj} = f(s_{t-1}, x_j, h_j) weighs the extent to which the word at position j and those around it match the output at time t. The alignment is modelled as a one-layer neural perceptron

\beta_{tj} = v^\top \tanh(W s_{t-1} + U x_j + V h_j),    (5)

where v, W, U, and V are learned parameters.
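A minimal sketch of the multi-level aligner (Eqs. (3)-(5)): alignment scores are computed from the decoder state together with both the raw word vector x_j and the bidirectional annotation h_j, and the context vector concatenates the attention-weighted sums of both levels. The random annotations stand in for the bidirectional LSTM outputs, and all sizes and parameters are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, d_h, d_s = 6, 10, 8, 8   # words, vocab size, encoder/decoder state sizes

# Stand-ins for the encoder: one-hot word vectors x_j and bidirectional
# annotations h_j = [forward; backward] (random here instead of LSTM outputs).
X = np.eye(K)[rng.integers(0, K, size=N)]        # (N, K) one-hot words
H = rng.normal(scale=0.1, size=(N, 2 * d_h))     # (N, 2*d_h) annotations
s_prev = rng.normal(scale=0.1, size=d_s)         # decoder state s_{t-1}

# Learned parameters of the one-layer alignment perceptron (Eq. 5).
W = rng.normal(scale=0.1, size=(d_s, d_s))
U = rng.normal(scale=0.1, size=(d_s, K))
V = rng.normal(scale=0.1, size=(d_s, 2 * d_h))
v = rng.normal(scale=0.1, size=d_s)

# Eq. (5): score each word from the decoder state, the word, and its annotation.
beta = np.array([v @ np.tanh(W @ s_prev + U @ X[j] + V @ H[j]) for j in range(N)])

# Eq. (4): softmax-normalized attention weights.
alpha = np.exp(beta - beta.max())
alpha /= alpha.sum()

# Eq. (3): multi-level context vector, concatenating the weighted word vectors
# and the weighted annotations.
z_t = np.concatenate([alpha @ X, alpha @ H])     # shape (K + 2*d_h,)
print(alpha.round(3), z_t.shape)
```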

Decoder Our architecture uses an LSTM decoder (Fig. 3) that takes as input the current world state y_t, the context of the instruction z_t, and the LSTM’s previous hidden state s_{t-1}. After computing s_t, the output is the conditional probability distribution P_{a,t} = P(a_t | a_{1:t-1}, y_t, x_{1:N}) over the next action, represented as a deep output layer [21]

P_{a,t} = \mathrm{softmax}\left(L_0 (E y_t + L_s s_t + L_z z_t)\right),    (6a)
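A sketch of the deep output layer in Eq. (6a): the world state, decoder hidden state, and context vector are each projected, summed, and passed through a softmax over the action vocabulary. All sizes, the random parameters, and the four-action vocabulary are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_y, d_s, d_z, d_o, n_actions = 12, 8, 18, 16, 4   # illustrative sizes

y_t = rng.random(d_y)          # bag-of-words world state
s_t = rng.normal(size=d_s)     # decoder hidden state
z_t = rng.normal(size=d_z)     # multi-level context vector

E  = rng.normal(scale=0.1, size=(d_o, d_y))        # world-state embedding
Ls = rng.normal(scale=0.1, size=(d_o, d_s))
Lz = rng.normal(scale=0.1, size=(d_o, d_z))
L0 = rng.normal(scale=0.1, size=(n_actions, d_o))

# Eq. (6a): P_{a,t} = softmax(L0 (E y_t + Ls s_t + Lz z_t))
logits = L0 @ (E @ y_t + Ls @ s_t + Lz @ z_t)
P_a_t = np.exp(logits - logits.max())
P_a_t /= P_a_t.sum()
print(P_a_t)   # distribution over the (assumed) action set {forward, left, right, stop}
```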

where E is an embedding matrix and L_0, L_s, and L_z are parameters to be learned.

Training We train the encoder and decoder models so as to predict the action sequence a^*_{1:T} according to Equation 1 for a given instruction x_{1:N} and world state y_{1:T} from the training corpora. We use the negative log-likelihood of the demonstrated action at each time step t as our loss function,

\mathcal{L} = -\log P(a^*_t \mid y_t, x_{1:N}).    (7)
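A minimal sketch of the per-step negative log-likelihood loss in Eq. (7), summed over a training sequence; it only evaluates the loss, whereas the model itself obtains gradients via back-propagation.

```python
import numpy as np

def sequence_nll(step_probs, gold_actions):
    """Sum over time steps of the negative log-likelihood in Eq. (7).

    step_probs: (T, n_actions) predicted distributions P(a_t | ...).
    gold_actions: length-T list of demonstrated action indices a*_t.
    """
    return -sum(np.log(step_probs[t, a] + 1e-12)
                for t, a in enumerate(gold_actions))

# Toy check with three time steps and four actions.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])
print(sequence_nll(probs, [0, 0, 3]))
```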

As the entire model is a differentiable function, the parameters can be learned by back-propagation.

Inference Having trained the model, we generate action sequences by finding the maximum a posteriori actions under the learned model (1). For the single-sentence task, we perform inference using standard beam search to maintain a list of the current k best hypotheses.¹ We iteratively consider the k-best sequences up to time t as candidates to generate sequences of size t + 1 and keep only the resulting best k of them. For the multi-sentence task, we perform the search sentence-by-sentence, and initialize the beam of the next sentence with the list of previous k best hypotheses. Also, as a common denoising method in deep learning [23, 29, 25], we perform inference over an ensemble of randomly initialized models.²

¹ We use a beam width of 10 to be consistent with previous work [2]. However, our greedy search results are very close to the beam search ones.
² At each time step t, we generate actions using the average of the posterior likelihoods of 10 ensemble models, as in previous work.
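Below is a minimal beam-search sketch over the factorization in Eq. (1). It assumes a scoring callback that returns per-step log-probabilities (in the paper this would be the ensemble-averaged decoder posterior); the beam width, stop symbol, and callback interface are illustrative assumptions.

```python
import numpy as np

def beam_search(step_log_probs, n_actions, beam_width=10, stop_action=3, max_len=40):
    """Maintain the beam_width best partial action sequences.

    step_log_probs(prefix, t) -> length-n_actions array of
    log P(a_t | prefix, y_t, x_{1:N}); a user-supplied callback that stands
    in for the (ensemble-averaged) decoder.
    """
    beam = [((), 0.0)]                                  # (prefix, log-prob)
    for t in range(max_len):
        candidates = []
        for prefix, score in beam:
            if prefix and prefix[-1] == stop_action:
                candidates.append((prefix, score))      # finished hypotheses persist
                continue
            lp = step_log_probs(prefix, t)
            for a in range(n_actions):
                candidates.append((prefix + (a,), score + lp[a]))
        beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(p and p[-1] == stop_action for p, _ in beam):
            break
    return beam[0]

# Toy model that strongly prefers "forward, forward, stop" (indices 0, 0, 3).
def toy_scores(prefix, t):
    dist = [0.7, 0.1, 0.1, 0.1] if len(prefix) < 2 else [0.05, 0.05, 0.05, 0.85]
    return np.log(dist)

print(beam_search(toy_scores, n_actions=4))
```

Setting beam_width=1 recovers the greedy search mentioned in the footnote above.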


5 Experimental Setup

Dataset We use the benchmark, publicly-available SAIL route instruction corpus [17], and use the raw data in its original form (e.g., we do not correct any spelling errors).

World State The world state y_t encodes the local, observable world at time t. We make the standard assumption that the agent is able to observe all elements of the environment that are within line-of-sight (i.e., not occluded by walls). We represent the world state as a concatenation of a simple bag-of-words vector for each direction (forward, left, and right); a minimal sketch of this encoding appears at the end of this section. The choice of bag-of-words representation is made to avoid manual domain-specific feature engineering and the combinatorial cost of modeling exact world configurations.

Evaluation Metrics For the single-sentence task, the standard metric deems a trial successful iff the final position and orientation exactly match those of the original demonstration. For multi-sentence, we disregard the final orientation (following previous work). However, this setting is still more challenging than single-sentence due to errors that cascade with sequential sentences.

Training Details We follow the same procedure as Chen and Mooney [6], training with the segmented data and testing on both single- and multi-sentence versions. We train our models using three-fold cross-validation. In each fold, we retain one map as test and partition the two-map training data into training (90%) and validation (10%) sets, the latter for tuning hyperparameters. We report (size-weighted) average test results over these folds. We later refer to this training procedure as “vDev.” Additionally, following some previous methods (p.c.) that train on two maps while using the test map to decide on the stopping iteration, we also report a “vTest” setting. For optimization, we found Adam [14] to be very effective for training on this dataset. We performed early stopping based on the validation task metric. Similar to previous work [28], we found that the validation log-likelihood is not well correlated with the task metric.
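The sketch below illustrates the bag-of-words world-state encoding described above: one binary vector per viewing direction (forward, left, right) over a fixed symbol vocabulary, concatenated into y_t. The symbol list is an illustrative subset, not the complete feature set used in the experiments.

```python
import numpy as np

# Illustrative subset of observable symbols (objects, floor patterns, walls).
SYMBOLS = ["barstool", "chair", "easel", "hatrack", "lamp", "sofa",
           "blue", "brick", "concrete", "flower", "grass", "gravel",
           "wood", "yellow", "wall"]
DIRECTIONS = ["forward", "left", "right"]

def encode_world_state(visible):
    """visible: dict mapping direction -> list of symbols in line-of-sight."""
    blocks = []
    for direction in DIRECTIONS:
        bow = np.zeros(len(SYMBOLS))
        for sym in visible.get(direction, []):
            bow[SYMBOLS.index(sym)] = 1.0       # mark each visible symbol
        blocks.append(bow)
    return np.concatenate(blocks)               # y_t, one block per direction

y_t = encode_world_state({"forward": ["blue", "easel"],
                          "left": ["wall"],
                          "right": ["wood"]})
print(y_t.shape)                                # (3 * len(SYMBOLS),)
```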

6 Results and Analysis

Primary Result Table 1 reports the overall accuracy of our model for both the single- and multi-sentence settings. We surpass state-of-the-art (SoA) results on the single-sentence route instruction task (for both vDev and vTest settings), despite the fact that we use no linguistic knowledge or external resources. Our multi-sentence accuracy, based upon a very small amount of training data (a few hundred paragraph pairs), is competitive with SoA and outperforms several previous methods that use additional, specialized resources, e.g., semantic parsers, logical-form lexicons, and re-rankers. It is also worth noting that our model yields good results with only greedy search (a beam width of one). For vDev, we achieve 68.05 on single-sentence and 23.93 on multi-sentence, while for vTest, we get 70.56 on single-sentence and 27.91 on multi-sentence. Figure 1 illustrates an output example for which our model successfully executes the input natural language instruction.³

Table 1: Overall accuracy (state-of-the-art in bold)

Method                       Single-sent    Multi-sent
Chen and Mooney [6]          54.40          16.18
Chen [5]                     57.28          19.18
Kim and Mooney [12]          57.22          20.17
Kim and Mooney [13]          62.81          26.57
Artzi and Zettlemoyer [3]    65.28          31.93
Artzi et al. [2]             64.36          35.44
Andreas and Klein [1]        59.60          –
Our model (vDev)             69.98          26.07
Our model (vTest)            71.05          30.34

³ Note that with no ensemble, we are still state-of-the-art on single-sentence and better than all comparable approaches on multi-sentence: Artzi et al. (2013, 2014) use extra annotations with a logical-form lexicon and Kim and Mooney [13] use discriminative reranking, techniques that are orthogonal to our approach and should likely improve our results as well.

Multi-level Aligner Ablation Unlike most existing methods that align based only on the hidden annotations h_j, we also include the original input word x_j (Eqn. 3) to better learn which sentence “regions” are salient to the current world state. Table 2 shows that this multi-level representation (‘Full Model’) provides significantly better results than a standard aligner (‘High-level Aligner’). Figure 4 visualizes the alignment of words to actions in the map environment for several sentences from the instruction paragraph depicted in Figure 1.

Table 2: Model components ablations

                    Full Model    High-level Aligner    No Aligner    Unidirectional    No Encoder
Single-sentence     69.98         68.09                 68.05         67.44             61.63
Multi-sentence      26.07         24.79                 25.04         24.50             16.67

[Figure 4 diagram: for four sentences from the paragraph in Figure 1 (“go forward two segments to the end of the hall”, “go forward one segment to the intersection with the wooden-floored hall”, “go forward to the end of the hall”, and “go forward one segment to the intersection containing the lamp”), the map shows the inferred go/stop actions aligned to the words of each sentence; map legend as in Figure 1.]

Figure 4: Visualization of the alignment between words and actions in a map for a multi-sentence instruction.

Aligner Ablation Removing the aligner and instead computing the context vector z_t simply as an unweighted average (Eqn. 3), which we denote as the ‘No Aligner’ model in Table 2, yields reduced performance. Note that the ‘No Aligner’ model still maintains all connections between the instruction and actions, but uses non-learned uniform weights.

Bidirectionality Ablation Encoding the input sentence in only the forward direction (denoted as ‘Unidirectional’ in Table 2) reduces the accuracy of the resulting action sequences compared to a bidirectional encoder (‘Full Model’).

Encoder Ablation A ‘No Encoder’ model (Table 2) that directly feeds word vectors as randomly initialized embeddings into the decoder and relies on alignment to choose salient words performs significantly worse than our model. This is likely due to the RNN’s ability to incorporate global sentence-level information into each word’s representation. This helps resolve ambiguities, such as “turn right before . . . ” versus “turn right after . . . ”.

7 Conclusion

We presented a sequence-to-sequence approach to the multimodal task of mapping natural language navigational instructions to action plans given the local world state, using a bidirectional LSTM-RNN model with a multi-level aligner. Our model achieves a new state-of-the-art on single-sentence execution and competitive results on the severely data-starved multi-sentence domain, despite using no specialized linguistic knowledge or resources. We further performed a number of ablation studies to elucidate the contributions of our primary model components.

References

[1] Andreas, J. and Klein, D. (2015). Alignment-based compositional semantics for instruction following. In EMNLP.
[2] Artzi, Y., Das, D., and Petrov, S. (2014). Learning compact lexicons for CCG semantic parsing. In EMNLP.
[3] Artzi, Y. and Zettlemoyer, L. (2013). Weakly supervised learning of semantic parsers for mapping instructions to actions. TACL, 1:49–62.
[4] Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv:1409.0473.
[5] Chen, D. L. (2012). Fast online lexicon learning for grounded language acquisition. In ACL.
[6] Chen, D. L. and Mooney, R. J. (2011). Learning to interpret natural language navigation instructions from observations. In AAAI.
[7] Graves, A., Mohamed, A.-r., and Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In ICASSP.
[8] Harnad, S. (1990). The symbol grounding problem. Physica D, 42:335–346.
[9] Hemachandra, S., Duvallet, F., Howard, T. M., Roy, N., Stentz, A., and Walter, M. R. (2015). Learning models for following natural language directions in unknown environments. In ICRA.
[10] Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8).
[11] Karpathy, A. and Fei-Fei, L. (2015). Deep visual-semantic alignments for generating image descriptions. In CVPR.
[12] Kim, J. and Mooney, R. J. (2012). Unsupervised PCFG induction for grounded language learning with highly ambiguous supervision. In EMNLP, pages 433–444.
[13] Kim, J. and Mooney, R. J. (2013). Adapting discriminative reranking to grounded language learning. In ACL.
[14] Kingma, D. and Ba, J. (2015). Adam: A method for stochastic optimization. In ICLR.
[15] Kiros, R., Salakhutdinov, R., and Zemel, R. S. (2014). Unifying visual-semantic embeddings with multimodal neural language models. arXiv:1411.2539.
[16] Kollar, T., Tellex, S., Roy, D., and Roy, N. (2010). Toward understanding natural language directions. In HRI.
[17] MacMahon, M., Stankiewicz, B., and Kuipers, B. (2006). Walk the talk: Connecting language, knowledge, and action in route instructions. In AAAI.
[18] Matuszek, C., Fox, D., and Koscher, K. (2010). Following directions using statistical machine translation. In HRI.
[19] Mnih, V., Heess, N., Graves, A., and Kavukcuoglu, K. (2014). Recurrent models of visual attention. In NIPS.
[20] Mooney, R. J. (2008). Learning to connect language and perception. In AAAI.
[21] Pascanu, R., Gulcehre, C., Cho, K., and Bengio, Y. (2014). How to construct deep recurrent neural networks. arXiv:1312.6026.
[22] Rush, A. M., Chopra, S., and Weston, J. (2015). A neural attention model for abstractive sentence summarization. In EMNLP.
[23] Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In NIPS.
[24] Tellex, S., Kollar, T., Dickerson, S., Walter, M. R., Banerjee, A. G., Teller, S., and Roy, N. (2011). Understanding natural language commands for robotic navigation and mobile manipulation. In AAAI.
[25] Vinyals, O., Toshev, A., Bengio, S., and Erhan, D. (2015). Show and tell: A neural image caption generator. In CVPR.
[26] Wen, T.-H., Gašić, M., Mrkšić, N., Su, P.-H., Vandyke, D., and Young, S. (2015). Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP.
[27] Winograd, T. (1970). Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. PhD thesis, MIT.
[28] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In ICML.
[29] Zaremba, W., Sutskever, I., and Vinyals, O. (2014). Recurrent neural network regularization. arXiv:1409.2329.
