A Feature-Rich Constituent Context Model for Grammar Induction

Dave Golland (University of California, Berkeley), John DeNero (Google), Jakob Uszkoreit (Google)

Abstract We present LLCCM, a log-linear variant of the constituent context model (CCM) of grammar induction. LLCCM retains the simplicity of the original CCM but extends robustly to long sentences. On sentences of up to length 40, LLCCM outperforms CCM by 13.9% bracketing F1 and outperforms a right-branching baseline in regimes where CCM does not.

1 Introduction

Unsupervised grammar induction is a fundamental challenge of statistical natural language processing (Lari and Young, 1990; Pereira and Schabes, 1992; Carroll and Charniak, 1992). The constituent context model (CCM) for inducing constituency parses (Klein and Manning, 2002) was the first unsupervised approach to surpass a right-branching baseline. However, the CCM only effectively models short sentences. This paper shows that a simple reparameterization of the model, which ties together the probabilities of related events, allows the CCM to extend robustly to long sentences.

Much recent research has explored dependency grammar induction. For instance, the dependency model with valence (DMV) of Klein and Manning (2004) has been extended to utilize multilingual information (Berg-Kirkpatrick and Klein, 2010; Cohen et al., 2011), lexical information (Headden III et al., 2009), and linguistic universals (Naseem et al., 2010). Nevertheless, simplistic dependency models like the DMV do not contain information present in a constituency parse, such as the attachment order of object and subject to a verb.

Unsupervised constituency parsing is also an active research area. Several studies (Seginer, 2007; Reichart and Rappoport, 2010; Ponvert et al., 2011) have considered the problem of inducing parses over raw lexical items rather than part-of-speech (POS) tags. Additional advances have come from more complex models, such as combining CCM and DMV (Klein and Manning, 2004) and modeling large tree fragments (Bod, 2006).

The CCM scores each parse as a product of probabilities of span and context subsequences. It was originally evaluated only on unpunctuated sentences of up to length 10 (Klein and Manning, 2002), which account for only 15% of the WSJ corpus; our experiments confirm the observation in Klein (2005) that performance degrades dramatically on longer sentences. This problem is unsurprising: CCM scores each constituent type by a single, isolated multinomial parameter.

Our work leverages the idea that sharing information between local probabilities in a structured unsupervised model can lead to substantial accuracy gains, as previously demonstrated for dependency grammar induction (Cohen and Smith, 2009; Berg-Kirkpatrick et al., 2010). Our model, the Log-Linear CCM (LLCCM), shares information between the probabilities of related constituents by expressing them as a log-linear combination of features trained using the gradient-based learning procedure of Berg-Kirkpatrick et al. (2010). In this way, the probability of generating a constituent is informed by related constituents.

Our model improves unsupervised constituency parsing of sentences longer than 10 words. On sentences of up to length 40 (96% of all sentences in the Penn Treebank), LLCCM outperforms CCM by 13.9% (unlabeled) bracketing F1 and, unlike CCM, outperforms a right-branching baseline on sentences longer than 15 words.

2 Model

The CCM is a generative model for the unsupervised induction of binary constituency parses over sequences of part-of-speech (POS) tags (Klein and Manning, 2002). Conditioned on the constituency or distituency of each span in the parse, CCM generates both the complete sequence of terminals it contains and the terminals in the surrounding context.

Formally, the CCM is a probabilistic model that jointly generates a sentence, s, and a bracketing, B, specifying whether each contiguous subsequence is a constituent or not, in which case the span is called a distituent. Each subsequence of POS tags, or SPAN, α, occurs in a CONTEXT, β, which is an ordered pair of preceding and following tags. A bracketing is a boolean matrix B indicating which spans (i, j) are constituents (B_ij = true) and which are distituents (B_ij = false). A bracketing is considered legal if its constituents are nested and form a binary tree T(B). The joint distribution is given by:

P(s, B) = P_T(B) \cdot \prod_{(i,j) \in T(B)} P_S(\alpha(i,j,s) \mid \mathrm{true}) \, P_C(\beta(i,j,s) \mid \mathrm{true}) \cdot \prod_{(i,j) \notin T(B)} P_S(\alpha(i,j,s) \mid \mathrm{false}) \, P_C(\beta(i,j,s) \mid \mathrm{false})

The prior over unobserved bracketings, P_T(B), is fixed to be the uniform distribution over all legal bracketings. The other distributions, P_S(·) and P_C(·), are multinomials whose isolated parameters are estimated to maximize the likelihood of a set of observed sentences {s_n} using EM (Dempster et al., 1977).¹

¹As mentioned in Klein and Manning (2002), the CCM model is deficient because it assigns probability mass to yields and spans that cannot consistently combine to form a valid sentence. Our model does not address this issue, and hence it is similarly deficient.
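To make the generative story concrete, here is a minimal sketch (ours, not part of the original paper) of how log P(s, B) could be computed from given span and context multinomials; the dictionary-based representation and the "<>" boundary marker are assumptions, and the constant uniform tree prior P_T(B) is dropped.

```python
import math

def ccm_log_prob(tags, constituents, p_span, p_context, boundary="<>"):
    """Sketch of log P(s, B) up to the constant uniform tree prior P_T(B).

    tags:         POS tags of the sentence, e.g. ["DT", "JJ", "NN", "VBD", "DT", "NN"].
    constituents: set of (i, j) spans forming the binary tree T(B); every other
                  span is treated as a distituent.
    p_span:       dict mapping (span_tuple, is_constituent) -> probability.
    p_context:    dict mapping ((left_tag, right_tag), is_constituent) -> probability.
    """
    n = len(tags)
    logp = 0.0
    for i in range(n):
        for j in range(i + 1, n + 1):  # every contiguous span is generated
            label = (i, j) in constituents
            span = tuple(tags[i:j])
            context = (tags[i - 1] if i > 0 else boundary,
                       tags[j] if j < n else boundary)
            logp += math.log(p_span[(span, label)])
            logp += math.log(p_context[(context, label)])
    return logp
```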

2.1 The Log-Linear CCM

A fundamental limitation of the CCM is that it contains a single isolated parameter for every span. The number of different possible span types increases exponentially in span length, leading to data sparsity as the sentence length increases.

The Log-Linear CCM (LLCCM) reparameterizes the distributions in the CCM using intuitive features, addressing this limitation while retaining the CCM's predictive power. The set of proposed features includes a BASIC feature for each parameter of the original CCM, enabling the LLCCM to retain the full expressive power of the CCM. In addition, LLCCM contains a set of coarse features that activate across distinct spans.

To introduce features into the CCM, we express each of its local conditional distributions as a multiclass logistic regression model. Each local distribution, P_t(y|x) for t ∈ {SPAN, CONTEXT}, conditions on a label x ∈ {true, false} and generates an event (a span or a context) y. We define each local distribution in terms of a weight vector, w, and a feature vector, f_{xyt}, using a log-linear model:

P_t(y \mid x) = \frac{\exp \langle w, f_{xyt} \rangle}{\sum_{y'} \exp \langle w, f_{xy't} \rangle}    (1)

This technique for parameter transformation was shown to be effective in unsupervised models for part-of-speech induction, dependency grammar induction, word alignment, and word segmentation (Berg-Kirkpatrick et al., 2010). In our case, replacing multinomials with featurized models not only improves model accuracy, but also lets the model apply effectively to a new regime of long sentences.
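As a rough illustration of Equation (1), the sketch below (ours; the string-keyed feature representation and the finite candidate set are assumptions) computes P_t(y|x) by exponentiating and normalizing feature scores. Normalizing over a supplied list of candidate events mirrors the approximation described in Section 3.

```python
import math

def local_prob(y, x, t, candidates, weights, feats):
    """Sketch of Equation (1): P_t(y | x) under a log-linear parameterization.

    y:          the event (a span or a context) whose probability we want.
    x:          True for constituent, False for distituent.
    t:          "SPAN" or "CONTEXT".
    candidates: events y' to normalize over (e.g. spans seen in training).
    weights:    dict mapping feature name -> weight (the vector w).
    feats:      function (x, y, t) -> list of active feature names (f_xyt).
    """
    def score(event):
        return sum(weights.get(f, 0.0) for f in feats(x, event, t))

    scores = [score(event) for event in candidates]
    m = max(scores)  # log-sum-exp for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return math.exp(score(y) - log_z)
```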

2.2 Feature Templates

In the SPAN model, for each span y = [α_1, ..., α_n] and label x, we use the following feature templates:

BASIC:     I[y = · ∧ x = ·]
BOUNDARY:  I[α_1 = · ∧ α_n = · ∧ x = ·]
PREFIX:    I[α_1 = · ∧ x = ·]
SUFFIX:    I[α_n = · ∧ x = ·]

Just as the external CONTEXT is a signal of constituency, so too is the internal "context." For example, there are many distinct noun phrases with different spans that all begin with DT and end with NN, a fact expressed by the BOUNDARY feature (Table 1).

In the CONTEXT model, for each context y = [β_1, β_2] and constituent/distituent decision x, we use the following feature templates:

BASIC:      I[y = · ∧ x = ·]
L-CONTEXT:  I[β_1 = · ∧ x = ·]
R-CONTEXT:  I[β_2 = · ∧ x = ·]

Consider the following example extracted from the WSJ:

[S [NP-SBJ The/DT Venezuelan/JJ currency/NN] [VP plummeted/VBD [NP-TMP this/DT year/NN]]]

The POS tags occupy positions 0 through 6. Both spans (0, 3) and (4, 6) are constituents corresponding to noun phrases, whose features are shown in Table 1:

Feature name       (0, 3)   (4, 6)
BASIC-DT-JJ-NN       1        0
BASIC-DT-NN          0        1
BOUNDARY-DT-NN       1        1
PREFIX-DT            1        1
SUFFIX-NN            1        1
BASIC-⋄-VBD          1        0
BASIC-VBD-⋄          0        1
L-CONTEXT-⋄          1        0
L-CONTEXT-VBD        0        1
R-CONTEXT-VBD        1        0
R-CONTEXT-⋄          0        1

Table 1: Span and context features for constituent spans (0, 3) and (4, 6). The symbol ⋄ indicates a sentence boundary.

Notice that although the BASIC span features are active for at most one span, the remaining features fire for both spans, effectively sharing information between the local probabilities of these events. The coarser C ONTEXT features factor the context pair into its components, which allow the LLCCM to more easily learn, for example, that a constituent is unlikely to immediately follow a determiner.
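The templates above translate directly into code. The sketch below is our illustration (the feature-name formatting and the "<>" boundary marker are assumptions); applied to the two example spans, it produces the active features listed in Table 1.

```python
def span_features(span, x):
    """Active SPAN-model features for a tuple of POS tags and label x."""
    lab = "true" if x else "false"
    return ["BASIC-%s-%s" % ("-".join(span), lab),
            "BOUNDARY-%s-%s-%s" % (span[0], span[-1], lab),
            "PREFIX-%s-%s" % (span[0], lab),
            "SUFFIX-%s-%s" % (span[-1], lab)]

def context_features(context, x):
    """Active CONTEXT-model features for a (left, right) tag pair and label x."""
    lab = "true" if x else "false"
    left, right = context
    return ["BASIC-%s-%s-%s" % (left, right, lab),
            "L-CONTEXT-%s-%s" % (left, lab),
            "R-CONTEXT-%s-%s" % (right, lab)]

# The WSJ example: DT JJ NN VBD DT NN, with "<>" standing in for the sentence boundary.
tags = ["DT", "JJ", "NN", "VBD", "DT", "NN"]
print(span_features(tuple(tags[0:3]), True))    # span (0, 3)
print(context_features(("<>", "VBD"), True))    # context of (0, 3)
print(span_features(tuple(tags[4:6]), True))    # span (4, 6)
print(context_features(("VBD", "<>"), True))    # context of (4, 6)
```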

3 Training

In the EM algorithm for estimating CCM parameters, the E-step computes posteriors over bracketings using the inside-outside algorithm. The M-step chooses parameters that maximize the expected complete log likelihood of the data. The weights, w, of LLCCM are estimated to maximize the data log likelihood of the training sentences {s_n}, summing out all possible bracketings B for each sentence:

L(w) = \sum_{s_n} \log \sum_{B} P_w(s_n, B)

We optimize this objective via L-BFGS (Liu and Nocedal, 1989), which requires us to compute the objective gradient. Berg-Kirkpatrick et al. (2010) showed that the data log likelihood gradient is equivalent to the gradient of the expected complete log likelihood (the objective maximized in the M-step of EM) at the point from which expectations are computed. This gradient can be computed in three steps.

First, we compute the local probabilities of the CCM, P_t(y|x), from the current w using Equation (1). We approximate the normalization over an exponential number of terms by summing only over spans that appeared in the training corpus.

Second, we compute posteriors over bracketings, P(i, j | s_n), just as in the E-step of CCM training,² in order to determine the expected counts:

e_{xy,\mathrm{SPAN}} = \sum_{s_n} \sum_{i,j} I[\alpha(i, j, s_n) = y] \, \delta(x)

e_{xy,\mathrm{CONTEXT}} = \sum_{s_n} \sum_{i,j} I[\beta(i, j, s_n) = y] \, \delta(x)

where δ(true) = P(i, j | s_n) and δ(false) = 1 − δ(true). We summarize these expected count quantities as:

e_{xyt} = \begin{cases} e_{xy,\mathrm{SPAN}} & \text{if } t = \mathrm{SPAN} \\ e_{xy,\mathrm{CONTEXT}} & \text{if } t = \mathrm{CONTEXT} \end{cases}

Finally, we compute the gradient with respect to w, expressed in terms of these expected counts and conditional probabilities:

\nabla L(w) = \sum_{x,y,t} e_{xyt} f_{xyt} - G(w)

G(w) = \sum_{x,t} \Big( \sum_{y} e_{xyt} \Big) \sum_{y'} P_t(y' \mid x) \, f_{xy't}

Following Klein and Manning (2002), we initialize the model weights by optimizing against posterior probabilities fixed to the split-uniform distribution, which generates binary trees by randomly choosing a split point and recursing on each side of the split.³

²We follow the dynamic program presented in Appendix A.1 of Klein (2005).
³In Appendix B.2, Klein (2005) shows that this posterior can be expressed in closed form. As in previous work, we start the initialization optimization with the zero vector and terminate after 10 iterations to regularize against achieving a local maximum.
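The three-step gradient computation reduces to a few sums over expected counts once the E-step posteriors have been accumulated. The following sketch is ours; the data structures (count dictionaries, candidate event lists, and the feats and local_prob callables) are assumptions rather than the authors' implementation.

```python
from collections import defaultdict

def llccm_gradient(expected_counts, candidates, feats, local_prob):
    """Sketch of the gradient: sum_{x,y,t} e_{xyt} f_{xyt}  -  G(w).

    expected_counts: dict (x, y, t) -> e_{xyt}, from the E-step posteriors.
    candidates:      dict t -> list of events y' observed in the training corpus.
    feats:           function (x, y, t) -> list of active feature names.
    local_prob:      function (y, x, t) -> P_t(y | x) under the current weights.
    """
    grad = defaultdict(float)

    # Expected feature counts under the posteriors: sum_{x,y,t} e_{xyt} f_{xyt}.
    for (x, y, t), e in expected_counts.items():
        for f in feats(x, y, t):
            grad[f] += e

    # G(w): for each (x, t), gamma_t(x) = sum_y e_{xyt} times the model's
    # expected feature vector sum_{y'} P_t(y' | x) f_{xy't}.
    gamma = defaultdict(float)
    for (x, y, t), e in expected_counts.items():
        gamma[(x, t)] += e
    for (x, t), total in gamma.items():
        for y_prime in candidates[t]:
            p = local_prob(y_prime, x, t)
            for f in feats(x, y_prime, t):
                grad[f] -= total * p

    return dict(grad)
```

The resulting dictionary of partial derivatives can then be passed, along with L(w), to an off-the-shelf L-BFGS routine.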

3.1 Efficiently Computing the Gradient

The following quantity appears in G(w):

\gamma_t(x) = \sum_{y} e_{xyt}

which expands as follows, depending on t:

\gamma_{\mathrm{SPAN}}(x) = \sum_{y} \sum_{s_n} \sum_{i,j} I[\alpha(i, j, s_n) = y] \, \delta(x)

\gamma_{\mathrm{CONTEXT}}(x) = \sum_{y} \sum_{s_n} \sum_{i,j} I[\beta(i, j, s_n) = y] \, \delta(x)

In each of these expressions, the δ(x) term can be factored outside the sum over y. Each fixed (i, j) and s_n pair has exactly one span and one context, hence the quantities \sum_y I[\alpha(i, j, s_n) = y] and \sum_y I[\beta(i, j, s_n) = y] are both equal to 1, giving:

\gamma_t(x) = \sum_{s_n} \sum_{i,j} \delta(x)

This expression further simplifies to a constant. The sum of the posterior probabilities δ(true) over all positions is equal to the total number of constituents in the tree. Any binary tree over N terminals contains exactly 2N − 1 constituents and \frac{1}{2}(N − 2)(N − 1) distituents:

\gamma_t(x) = \begin{cases} \sum_{s_n} (2|s_n| - 1) & \text{if } x = \mathrm{true} \\ \frac{1}{2} \sum_{s_n} (|s_n| - 2)(|s_n| - 1) & \text{if } x = \mathrm{false} \end{cases}

where |s_n| denotes the length of sentence s_n. Thus, G(w) can be precomputed once for the entire dataset at each minimization step. Moreover, γ_t(x) can be precomputed once before all iterations.
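Because γ_t(x) depends only on sentence lengths, it can be tabulated once before training; a small sketch of that precomputation (ours, with a toy corpus for illustration):

```python
def gamma(sentence_lengths):
    """gamma_t(x) for a corpus; identical for t = SPAN and t = CONTEXT.

    A binary tree over N terminals has exactly 2N - 1 constituents and
    (N - 2)(N - 1) / 2 distituents.
    """
    return {True: sum(2 * n - 1 for n in sentence_lengths),
            False: sum((n - 2) * (n - 1) / 2.0 for n in sentence_lengths)}

# Toy corpus of sentences with 5, 8, and 12 tags.
print(gamma([5, 8, 12]))  # {True: 47, False: 82.0}
```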

3.2 Relationship to Smoothing

The original CCM uses additive smoothing in its M-step to capture the fact that distituents outnumber constituents. For each span or context, CCM adds 10 counts: 2 as a constituent and 8 as a distituent.⁴ We note that these smoothing parameters are tailored to short sentences: in a binary tree, the number of constituents grows linearly with sentence length, whereas the number of distituents grows quadratically. Therefore, the ratio of constituents to distituents is not constant across sentence lengths. In contrast, by virtue of the log-linear model, LLCCM assigns positive probability to all spans and contexts without explicit smoothing.

⁴These counts are specified in Klein (2005); Klein and Manning (2002) added 10 constituent and 50 distituent counts.

[Figure 1 appears here: bracketing F1 as a function of maximum sentence length, comparing the log-linear CCM, the standard CCM, a right-branching baseline, and the binary branching upper bound.]

Figure 1: CCM and LLCCM trained and tested on sentences of a fixed length. LLCCM performs well on longer sentences. The binary branching upper bound corresponds to UBOUND from Klein and Manning (2002).

4 Experiments

We train our models on gold POS sequences from all sections (0-24) of the WSJ (Marcus et al., 1993) with punctuation removed. We report bracketing F1 scores between the binary trees predicted by the models on these sequences and the treebank parses. We train and evaluate both a CCM implementation (Luque, 2011) and our LLCCM on sentences up to a fixed length n, for n ∈ {10, 15, ..., 40}.

Figure 1 shows that LLCCM substantially outperforms the CCM on longer sentences. After length 15, CCM accuracy falls below the right-branching baseline, whereas LLCCM remains significantly better than right branching through length 40.
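For reference, unlabeled bracketing F1 compares the predicted constituent spans against the gold spans; below is a minimal per-sentence sketch (ours; corpus-level evaluation aggregates the match counts instead).

```python
def bracketing_f1(predicted, gold):
    """Unlabeled bracketing F1 between two sets of (i, j) constituent spans."""
    matched = len(predicted & gold)
    if matched == 0:
        return 0.0
    precision = matched / len(predicted)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: a binary prediction scored against a flatter gold tree.
predicted = {(0, 3), (3, 6), (4, 6), (0, 6)}
gold = {(0, 3), (4, 6), (0, 6)}
print(round(bracketing_f1(predicted, gold), 3))  # 0.857
```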

5 Conclusion

Our log-linear variant of the CCM extends robustly to long sentences, enabling constituent grammar induction to be used in settings that typically include long sentences, such as machine translation reordering (Chiang, 2005; DeNero and Uszkoreit, 2011; Dyer et al., 2011).

Acknowledgments We thank Taylor Berg-Kirkpatrick and Dan Klein for helpful discussions regarding the work on which this paper is based. This work was partially supported by the National Science Foundation through a Graduate Research Fellowship to the first author.

References

Taylor Berg-Kirkpatrick and Dan Klein. 2010. Phylogenetic grammar induction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1288–1297, Uppsala, Sweden, July. Association for Computational Linguistics.

Taylor Berg-Kirkpatrick, Alexandre Bouchard-Côté, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 582–590, Los Angeles, California, June. Association for Computational Linguistics.

Rens Bod. 2006. Unsupervised parsing with U-DOP. In Proceedings of the Conference on Computational Natural Language Learning.

Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. In Workshop Notes for Statistically-Based NLP Techniques, AAAI, pages 1–13.

David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 263–270, Ann Arbor, Michigan, June. Association for Computational Linguistics.

Shay B. Cohen and Noah A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 74–82, Boulder, Colorado, June. Association for Computational Linguistics.

Shay B. Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised structure prediction with non-parallel multilingual guidance. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 50–61, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.

Arthur Dempster, Nan Laird, and Donald Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1–38.

John DeNero and Jakob Uszkoreit. 2011. Inducing sentence structure from parallel corpora for reordering. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 193–203, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.

Chris Dyer, Kevin Gimpel, Jonathan H. Clark, and Noah A. Smith. 2011. The CMU-ARK German-English translation system. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 337–343, Edinburgh, Scotland, July. Association for Computational Linguistics.

William P. Headden III, Mark Johnson, and David McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 101–109, Boulder, Colorado, June. Association for Computational Linguistics.

Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 128–135, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.

Dan Klein and Christopher D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, Main Volume, pages 478–485, Barcelona, Spain, July.

Dan Klein. 2005. The Unsupervised Learning of Natural Language Structure. Ph.D. thesis, Stanford University.

Karim Lari and Steve J. Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 4:35–56.

Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming B, 45(3):503–528.

Franco Luque. 2011. Una implementación del modelo DMV+CCM para parsing no supervisado. In 2do Workshop Argentino en Procesamiento de Lenguaje Natural.

Mitchell P. Marcus, Beatrice Santorini, and Mary A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.

Tahira Naseem and Regina Barzilay. 2011. Using semantic cues to learn syntax. In AAAI.

Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1234–1244, Cambridge, MA, October. Association for Computational Linguistics.

Fernando Pereira and Yves Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 128–135, Newark, Delaware, USA, June. Association for Computational Linguistics.

Elias Ponvert, Jason Baldridge, and Katrin Erk. 2011. Simple unsupervised grammar induction from raw text with cascaded finite state models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1077–1086, Portland, Oregon, USA, June. Association for Computational Linguistics.

Roi Reichart and Ari Rappoport. 2010. Improved fully unsupervised parsing with zoomed learning. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 684–693, Cambridge, MA, October. Association for Computational Linguistics.

Yoav Seginer. 2007. Fast unsupervised incremental parsing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 384–391, Prague, Czech Republic, June. Association for Computational Linguistics.
