A Category-integrated Language Model for Question Retrieval in Community Question Answering

Zongcheng Ji¹,², Fei Xu¹,², and Bin Wang¹

¹ Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
² Graduate University of Chinese Academy of Sciences, Beijing, China
{jizongcheng,feixu1966}@gmail.com, [email protected]

Abstract. Community Question Answering (CQA) services have accumulated large archives of question-answer pairs, which are usually organized into a hierarchy of categories. To reuse these invaluable resources, it is essential to develop effective Question Retrieval (QR) models that retrieve similar questions from CQA archives given a queried question. This paper studies the integration of the category information of questions into the unigram Language Model (LM). Specifically, we propose a novel Category-integrated Language Model (CLM), which views category-specific term saliency as the Dirichlet hyper-parameter that weights the parameters of the LM. A point-wise divergence based measure is introduced to compute a term's category-specific saliency. Experiments conducted on a real world dataset from Yahoo! Answers show that the proposed CLM, which integrates the category information into the LM internally at the word level, significantly outperforms the previous work that incorporates the category information into the LM externally at the word level or at the document level.

Keywords: Community Question Answering, Question Retrieval, Category, Category-integrated Language Model.

1 Introduction

Community Question Answering (CQA) has recently become a popular type of web service where users ask and answer questions and access historical question-answer pairs in large scale CQA archives. Examples of such CQA services include Yahoo! Answers (http://answers.yahoo.com/) and Baidu Zhidao (http://zhidao.baidu.com/). To effectively share the knowledge in these large archives, it is essential to develop effective question retrieval models that retrieve historical question-answer pairs that are semantically equivalent or relevant to the queried question.

As a specific application of traditional Information Retrieval, Question Retrieval in CQA archives is distinct from the search of web pages in that historical questions are organized into a hierarchy of categories. That is to say, each question in



the CQA archives has a category label. The questions in the same category or subcategory usually relate to the same topic. For example, the questions in the subcategory "Travel.Asia Pacific.China" mainly relate to travel to or in China. This makes it possible to exploit category information to enhance question retrieval performance.

To exemplify how a categorization of questions can be exploited, consider the queried question "Can you recommend sightseeing opportunities for senior citizens in China?" The user is interested in sightseeing specifically in China, not in other countries. Hence, the historical question "Can you recommend sightseeing opportunities for senior citizens in Korea?" and its answers are not relevant to the queried question, although the two questions are syntactically very similar, which makes it likely that existing question retrieval methods will rank the Korea question highly in the returned list. If we can establish a connection between the queried question and the category "Travel.Asia Pacific.China", the ranking of the questions in that category can be promoted, which should improve question retrieval performance.

Several studies [2, 3] have incorporated the category information into existing retrieval models, and they show that properly exploiting the category information of each question improves retrieval performance. However, both share one shortcoming: they incorporate the category information heuristically, linearly combining the category-related models with the existing retrieval models externally, whether at the word level [2] or at the document level [3].

Language modeling has become a very promising direction for information retrieval because of its good empirical performance. In this paper, we integrate category information into the unigram language modeling approach. Our intuition is that a term may be more important in some categories than in others. For example, "China" is an important or salient word in the category "Travel.Asia Pacific.China", but not in the category "Travel.Asia Pacific.Korea". We model each term's saliency in a specific category as a Dirichlet hyper-parameter that weights the corresponding term emission parameter of the multinomial question language model. Thus, we attain an internal formula that boosts the score contribution of a term when the term is more salient in the category of the historical question. This incorporation of category information at the word level is mathematically grounded. Furthermore, it performs empirically better than the previous heuristic incorporation of category information at the word level [2] or at the document level [3] in experiments on a large scale real world CQA dataset from Yahoo! Answers.

The rest of this paper is organized as follows. Section 2 reviews related work on question retrieval and, in particular, details the two most effective existing methods of exploiting the category information to enhance question retrieval performance. Section 3 details our proposed category-integrated language model. Section 4 gives a method of measuring a term's saliency in a specific category. Section 5 describes the experimental study on a large scale real world CQA dataset from Yahoo! Answers. Finally, Section 6 concludes our work and outlines future work.


2 Related Work

Recently, question retrieval has been widely investigated on CQA data. Most previous work focuses on translation-based methods [4, 7, 10, 12] to alleviate the lexical gap between the queried question and the historical questions. Besides translation-based methods, Wang et al. [9] propose a syntactic tree matching method to find similar questions. Ming et al. [8] explore domain-specific term weights to improve question retrieval within a certain category. Cao et al. [2] and Cao et al. [3] propose different strategies for exploiting the category information to enhance question retrieval performance over the whole collection with all categories. Cai et al. [1] propose to learn latent topics from the historical questions to alleviate the lexical gap, while Ji et al. [5] propose to learn latent topics aligned across the historical question-answer pairs for the same purpose. Moreover, Cai et al. [1] also incorporate the category information of the historical questions into learning the latent topics for further improvement. Because our work mainly investigates a new method of incorporating the category information into the ranking function, in what follows we review in detail the two most effective methods [2, 3] of exploiting the category information to enhance question retrieval performance.

2.1 Leaf Category Smoothing Enhancement

As the first attempt to exploit the category information, Cao et al. [2] propose a two-level smoothing model to enhance the performance of language model based question retrieval. A category language model is computed and then smoothed with the whole question collection; the question language model is subsequently smoothed with the category language model. The experimental results showed that this can improve the performance of the language model significantly. Following [2], given a queried question $q$ and a historical question $D$ with the leaf category $Cat(D)$, the ranking function of the two-level smoothing model can be written as

$p(q \mid D) = \prod_{w \in q} \left[ (1-\lambda)\, p_{ml}(w \mid D) + \lambda \left( (1-\beta)\, p_{ml}(w \mid Cat(D)) + \beta\, p_{ml}(w \mid Coll) \right) \right]$  (1)

where $\lambda$ and $\beta$ are the two different smoothing parameters. If we set $\beta = 1$, the ranking function reduces to the unigram language model with Jelinek-Mercer (JM) smoothing [11], without any category information. $p_{ml}(w \mid D)$, $p_{ml}(w \mid Cat(D))$ and $p_{ml}(w \mid Coll)$ are the maximum likelihood estimates of word $w$ in the historical question $D$, the leaf category $Cat(D)$ and the whole question collection $Coll$, respectively.

From Equation (1), we can see that this approach combines the smoothed category language model with the original question language model linearly, in an external way at the word level. In our work, by contrast, we will integrate the category information into the original language model in an internal way at the word level, which seems more promising.
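For concreteness, here is a minimal Python sketch of the two-level smoothing score of Equation (1). The function and variable names are ours (the paper releases no code), and plain term counts stand in for the maximum likelihood estimates.

```python
from collections import Counter

def two_level_smoothing_score(query, question, cat_counts, coll_counts,
                              lam=0.2, beta=0.2):
    """Equation (1): the question LM is smoothed with the category LM,
    which is in turn smoothed with the collection LM."""
    d_counts, d_len = Counter(question), len(question)
    cat_len = sum(cat_counts.values())    # total term count of Cat(D)
    coll_len = sum(coll_counts.values())  # total term count of Coll
    score = 1.0
    for w in query:
        p_d = d_counts[w] / d_len                   # p_ml(w | D)
        p_cat = cat_counts.get(w, 0) / cat_len      # p_ml(w | Cat(D))
        p_coll = coll_counts.get(w, 0) / coll_len   # p_ml(w | Coll)
        score *= (1 - lam) * p_d + lam * ((1 - beta) * p_cat + beta * p_coll)
    return score
```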

2.2 Category Enhancement

Realizing that the leaf category smoothing enhancement method is tightly coupled with the language model and is not applicable to other question retrieval models, Cao et al. [3] further propose the category enhanced retrieval model, which computes the ranking score of a historical question $D$ as an interpolation of two relevance scores: a global relevance score between the queried question $q$ and the category $Cat(D)$ containing $D$, and a local relevance score between $q$ and $D$. Various retrieval models can be used to compute the two scores, which may have very different ranges, so the two scores are normalized into the same range before the linear combination. Following [3], the final ranking function can be written as

$score(q, D) = (1-\alpha)\, N\left(s_{local}(q, D)\right) + \alpha\, N\left(s_{global}(q, Cat(D))\right)$  (2)

where $s_{local}(q, D)$ and $s_{global}(q, Cat(D))$ are the local and global relevance scores, $\alpha$ is the parameter that controls the linear interpolation, and $N(\cdot)$ is the normalization function for the local and global relevance scores.

From Equation (2), we can see that this approach combines the two relevance scores linearly, in an external way at the document level. The most appealing property of this method is that it can easily be applied to any existing question retrieval model. In our work, however, we focus on the language modeling framework and integrate the category information into the original question language model in an internal way at the word level.

Overall, the leaf category smoothing enhancement and category enhancement methods are heuristic but effective ways to exploit the category information for question retrieval. Our work improves on them in the following respect: we integrate the category information, measured by the category-specific term saliency, into the unigram language modeling approach in a more systematic and internal way at the word level, which is more effective than linear combination in an external way at the word level or at the document level. We proceed to present our method in the following sections.
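For illustration, a small sketch of Equation (2) follows. The paper does not pin down the normalization $N(\cdot)$ here, so min-max normalization over the candidate list is our assumption, as are all names.

```python
def category_enhanced_rank(local_scores, global_scores, alpha=0.1):
    """Equation (2): interpolate min-max-normalized local and global
    relevance scores; the score dicts map candidate question ids to scores."""
    def minmax(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {d: (s - lo) / span for d, s in scores.items()}
    n_loc, n_glob = minmax(local_scores), minmax(global_scores)
    final = {d: (1 - alpha) * n_loc[d] + alpha * n_glob[d] for d in local_scores}
    return sorted(final, key=final.get, reverse=True)  # best candidate first
```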

3 Category-integrated Language Model

3.1 The Unigram Language Model

The unigram language model has been widely used in previous work [3, 4, 10] for question retrieval in CQA archives. The basic idea of the language model is to estimate a model for each historical question, and then rank the questions by the likelihood of the queried question according to the estimated models.

Formally, let $q = w_1 w_2 \ldots w_{|q|}$ be a queried question and $D = w_1 w_2 \ldots w_{|D|}$ be a historical question in a question collection $Coll$. Let $\theta_D$ be the multinomial generation model (the most popular choice) estimated for the historical question $D$, where $\theta_D$ has the same number of parameters (i.e., word emission probabilities) as the number of words in the vocabulary set $V$, i.e., $\{p(w \mid \theta_D)\}_{w \in V}$. Thus, the ranking function of the query likelihood would be

$p(q \mid \theta_D) = \prod_{w \in V} p(w \mid \theta_D)^{c(w,q)}$  (3)

where $c(w,q)$ is the count of word $w$ in the queried question $q$.

With such a model, the retrieval problem is reduced to the problem of estimating the multinomial parameters $\{p(w \mid \theta_D)\}_{w \in V}$ for a given historical question $D$. The simplest way is the maximum likelihood estimation:

$p_{ml}(w \mid \theta_D) = \frac{c(w,D)}{\sum_{w' \in V} c(w',D)} = \frac{c(w,D)}{|D|}$  (4)
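As a small worked example (our own code, following the paper's notation), the unsmoothed estimate of Equation (4) zeroes out the whole query likelihood of Equation (3) whenever a single query word is absent from the historical question, which motivates the smoothing discussed next.

```python
from collections import Counter

def mle_query_likelihood(query, question):
    """Equations (3)-(4): query likelihood under the unsmoothed question LM."""
    d_counts, d_len = Counter(question), len(question)
    score = 1.0
    for w in query:
        score *= d_counts[w] / d_len  # p_ml(w | theta_D); zero if w not in D
    return score

# One unseen query word collapses the score to zero:
print(mle_query_likelihood(["sightseeing", "china"],
                           ["recommend", "sightseeing", "korea"]))  # 0.0
```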

To avoid zero probabilities, various smoothing strategies are usually applied to this estimate, which commonly interpolate the estimated document language model with the collection language model [11].

3.2 Integration with Category Information

Now, we discuss how to integrate the category information of each historical question into the unigram language model. Our intuition about the category information is that a term may be more important in some categories than in others. That is to say, a term of a historical question in one specific category should have a different weight than the same term in other categories; we call this weight the category-specific term saliency. For example, "virus" in the category "Computers & Internet.Security" is more salient than the same term in the category "Computers & Internet.Software".

Following empirical Bayesian analysis, we can express our knowledge or belief about the uncertainty of the parameters by a prior distribution on them; the conjugate of the distribution the parameters come from is usually used to express this prior belief. The natural conjugate of the multinomial distribution is the Dirichlet distribution. Specifically, supposing that $S$ is the category-specific term saliency computing model (which will be introduced in Section 4) and $s(w, Cat(D))$ is the computed saliency of term $w$ in the category $Cat(D)$ containing $D$, we use a Dirichlet prior on $\theta_D$ with hyper-parameters $\alpha = (\alpha_{w_1}, \ldots, \alpha_{w_{|V|}})$, where $\alpha_w = \lambda_{Cat}\, s(w, Cat(D))$, given by

$p(\theta_D \mid \alpha) = \frac{1}{\Delta(\alpha)} \prod_{w \in V} p(w \mid \theta_D)^{\alpha_w - 1}$  (5)

where $\lambda_{Cat}$ is the parameter that controls the weight of the prior, and the normalization constant $\Delta(\alpha)$ does not depend on the parameters $\theta_D$.

Then, with the Dirichlet prior, the posterior distribution of $\theta_D$ is given by

$p(\theta_D \mid D, \alpha) \propto p(D \mid \theta_D)\, p(\theta_D \mid \alpha) \propto \prod_{w \in V} p(w \mid \theta_D)^{c(w,D) + \alpha_w - 1}$  (6)

By the property of natural conjugate distributions, $p(\theta_D \mid D, \alpha)$ in the above equation is also a Dirichlet distribution, denoted as $Dir(\theta_D \mid \alpha')$ with parameters $\alpha'_w = c(w,D) + \alpha_w$ for every $w \in V$. The prior distribution reflects our prior beliefs about the weight of $\theta_D$, while the posterior distribution reflects the updated beliefs about $\theta_D$ after observing $D$. Given the posterior distribution, the estimation of the word emission probability (the posterior mean) can be written as

$p(w \mid \theta_D) = \frac{c(w,D) + \lambda_{Cat}\, s(w, Cat(D))}{|D| + \lambda_{Cat} \sum_{w' \in V} s(w', Cat(D))}$  (7)

In the above estimated question model, we can see that the category information of a historical question is integrated into the estimated unigram language model. In this way, the category-specific term saliency can be seen as transformed into word count information, which is the primary element that the unigram language model is able to model. From another point of view, we can consider that the "bag-of-words" representation of the question $D$ is transformed into a pseudo "bag-of-words" representation $\tilde{D}$ given the question's category-specific term saliency computing model. In $\tilde{D}$, the matching term's frequency is transformed from $c(w,D)$ to $c(w,D) + \lambda_{Cat}\, s(w, Cat(D))$, and the question length is transformed from $|D|$ to $|D| + \lambda_{Cat} \sum_{w \in V} s(w, Cat(D))$. Then, the problem of ranking the historical question $D$ given a queried question $q$ is changed to ranking $\tilde{D}$. On this ground, any "bag-of-words" based language model, e.g. the query likelihood model or the KL divergence model [6], can work on $\tilde{D}$ and thus integrate the category information of the historical question in an internal way.

Next, we further smooth the category-integrated language model with a collection language model $p(w \mid Coll)$ to account for unseen words, as in [11]. Specifically, the collection-based Dirichlet prior $(\mu\, p(w_1 \mid Coll), \ldots, \mu\, p(w_{|V|} \mid Coll))$ is used for smoothing. Thus, we get the smoothed category-integrated estimation:

$p(w \mid \theta_D) = \frac{c(w,D) + \lambda_{Cat}\, s(w, Cat(D)) + \mu\, p(w \mid Coll)}{|D| + \lambda_{Cat} \sum_{w' \in V} s(w', Cat(D)) + \mu}$  (8)

Note that the final word count in the smoothed category-integrated language model for the historical question $D$ consists of the document-level count $c(w,D)$, the category-level pseudo count $\lambda_{Cat}\, s(w, Cat(D))$, and the collection-level pseudo count $\mu\, p(w \mid Coll)$. If we set $\lambda_{Cat} = 0$, the ranking function ignores the category information and reduces to the unigram language model with Dirichlet smoothing [11].
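A minimal sketch of the smoothed category-integrated estimate of Equation (8); `saliency` stands for the s(·, Cat(D)) scores of Section 4 over the vocabulary, `p_coll` for the collection language model, and all names are ours.

```python
from collections import Counter

def clm_word_prob(w, question, saliency, p_coll, lam_cat=7.0, mu=10.0):
    """Equation (8): document count plus category-level and collection-level
    pseudo counts, normalized by the pseudo question length."""
    d_counts, d_len = Counter(question), len(question)
    numer = d_counts[w] + lam_cat * saliency.get(w, 0.0) + mu * p_coll(w)
    denom = d_len + lam_cat * sum(saliency.values()) + mu  # sum over vocabulary
    return numer / denom
```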


Finally, the smoothed category-integrated ranking function in the query likelihood language modeling framework can be written as

$p(q \mid \theta_D) = \prod_{w \in q} p(w \mid \theta_D)^{c(w,q)}$  (9)
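Under the same assumptions, the final ranking function of Equation (9) is then a product over the query terms; a sketch using the `clm_word_prob` helper above, computed in log space (an implementation choice of ours) to avoid floating-point underflow.

```python
import math

def clm_rank_score(query, question, saliency, p_coll, lam_cat=7.0, mu=10.0):
    """Equation (9): log query likelihood under the smoothed CLM estimate;
    rank-equivalent to the plain product of probabilities.
    Assumes p_coll(w) > 0 for every query word, so the log is defined."""
    return sum(math.log(clm_word_prob(w, question, saliency, p_coll,
                                      lam_cat, mu))
               for w in query)
```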

4 Category-specific Term Saliency Measure

A key notion in our proposed category-integrated language model is a term's category-specific saliency $s(w, Cat(D))$ in the category $Cat(D)$ containing question $D$, which represents the term's importance or saliency in the category that the question belongs to. The term distribution in a specific category is biased compared to that in the whole collection. In this section, we introduce a measure of term saliency based on the divergence of the term distribution in a specific category from that in the general whole collection.

4.1 Divergence-based Feature

The divergence of the term distribution in a specific category from that in the whole collection reveals the significance of terms globally in the category. We employ the Jensen-Shannon (JS) divergence, a well adopted distance measure between two probability distributions, to capture the difference of a term's distribution in the two collections. It is defined as the mean of the relative entropy of each distribution to the mean distribution. As we evaluate the divergence at the term level rather than over the whole sample set, we examine the point-wise function for each individual term:

$D_{JS}(w; p_C \| p_G) = \frac{1}{2}\, p_C(w) \log \frac{p_C(w)}{\frac{1}{2}\left(p_C(w) + p_G(w)\right)} + \frac{1}{2}\, p_G(w) \log \frac{p_G(w)}{\frac{1}{2}\left(p_C(w) + p_G(w)\right)}$  (10)

Here, $C$ and $G$ denote the category-specific and general vocabularies, and $p_C$ and $p_G$ denote their corresponding probability distributions obtained by maximum likelihood estimation. $D_{JS}(w; p_C \| p_G)$ is the point-wise JS divergence of term $w$ between the specific category and the general whole collection. The higher $D_{JS}(w; p_C \| p_G)$ is, the more important or salient term $w$ is.
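A direct transcription of the point-wise JS divergence of Equation (10); `p_c` and `p_g` are the maximum likelihood probabilities of the term in the category and in the whole collection, and the zero-probability guards are our addition.

```python
import math

def pointwise_js(p_c, p_g):
    """Equation (10): point-wise Jensen-Shannon divergence of a single term."""
    m = 0.5 * (p_c + p_g)  # the term's probability under the mean distribution
    term_c = 0.5 * p_c * math.log(p_c / m) if p_c > 0 else 0.0
    term_g = 0.5 * p_g * math.log(p_g / m) if p_g > 0 else 0.0
    return term_c + term_g
```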

4.2 Estimating Term Saliency from Divergence-based Feature

Having the point-wise divergence feature at hand, we now define the function that transforms the JS divergence feature into the term saliency score. The transformation function plays an important role in setting the scale of the proportional ratio between the JS divergence feature scores of different terms. The following exponential form is used and tested, in which $x$ is the JS divergence feature score, $\gamma$ is the parameter that controls the scale of the transformed score, and $\epsilon$ is a smoothing parameter which is set to 1e-8 in our experiments:

$f(x) = (x + \epsilon)^{\gamma}$  (11)
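In code, the transformation is one line (assuming the power form reconstructed above); composing it with the divergence feature, as Equation (12) below formalizes, yields the saliency score. The function names are ours.

```python
def transform(x, gamma=2.7, eps=1e-8):
    """Equation (11): scale the JS divergence feature; gamma controls the
    scale of the transformed score, eps is the smoothing constant."""
    return (x + eps) ** gamma

def term_saliency(p_cat, p_coll, gamma=2.7):
    """Equation (12): s(w, Cat) = f(D_JS(w; p_Cat || p_Coll)).
    Uses the pointwise_js helper sketched in Section 4.1."""
    return transform(pointwise_js(p_cat, p_coll), gamma)
```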

Then, the category-specific term saliency computing model can be written as

$s(w, Cat) = f\left(D_{JS}(w; p_{Cat} \| p_{Coll})\right)$  (12)

5 Experiments and Discussion

5.1 Experimental Setup

Question Retrieval Methods. To evaluate the performance of the proposed category-integrated language model, we compare against the following four types of baseline methods. Note that the baselines LMJM and LMDir are reported as references; the main purpose of this work is to investigate a new method of exploiting the category information for enhancing question retrieval performance.

─ LM: Unigram Language Model without the category information; we report the unigram language model with JM smoothing and with Dirichlet smoothing, denoted LMJM and LMDir respectively.
─ LM@OptC: Retrieval using LM (with JM smoothing) only within the queried question's category as specified by the users of Yahoo! Answers.
─ LM@LS: The Leaf Category Smoothing Enhancement method (Section 2.1).
─ LM+LM: The Category Enhancement method in which both the global and the local relevance are computed with LM, again with JM smoothing (Section 2.2).
─ CLM: Our proposed Category-integrated Language Model.

Dataset. We use the open dataset (http://homepages.inf.ed.ac.uk/gcong/qa/) used in the pioneering work [2, 3] on exploiting the category information to enhance question retrieval on a real world CQA dataset from Yahoo! Answers. The dataset contains 252 queried questions with a total of 1,373 relevant historical questions as the ground truth. We preprocess as follows: all questions are lowercased and stopwords are removed using the 418 stopwords of the standard stoplist in the Lemur toolkit, version 4.10 (http://www.lemurproject.org/). After preprocessing, we obtain our final question repository of 3,111,219 questions, spanning 26 categories at the first level and 1,262 categories at the leaf level. Each question belongs to a unique leaf category.

Metrics. We evaluate all ranking methods with Mean Average Precision (MAP) and Precision@n (P@n). MAP rewards methods that return relevant questions early and also rewards correct ranking of the results. P@n reports the fraction of the top-n retrieved questions that are relevant. We also perform a significance test using a paired t-test at a significance level of 0.05.
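For reference, minimal implementations of the two metrics (our own code; `ranked_ids` is the retrieved ranking for one query, `relevant_ids` its ground-truth set):

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: mean precision at each relevant hit; MAP is the
    mean of AP over all 252 queried questions."""
    hits, precisions = 0, []
    for i, qid in enumerate(ranked_ids, start=1):
        if qid in relevant_ids:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def precision_at_n(ranked_ids, relevant_ids, n=10):
    """P@n: fraction of the top-n retrieved questions that are relevant."""
    return sum(1 for qid in ranked_ids[:n] if qid in relevant_ids) / n
```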


5.2 Parameter Setting

Several parameters must be set in our experiments. Following the literature [2, 3], in Equation (1) we set $\beta = 1$, $\lambda = 0.2$ to obtain the unigram language model with JM smoothing, and $\beta = 0.2$, $\lambda = 0.2$ to obtain LM@LS. In LM+LM, we set $\alpha = 0.1$ in Equation (2) [3]. In LMDir, we set $\lambda_{Cat} = 0$ in Equation (8) and tune the Dirichlet smoothing parameter $\mu$ over $\{1, 2, 3, \ldots, 20, 30, 40, 50\}$. From Figure 1, we can see that for $\mu \geq 5$ the MAP performance is high and almost stable; for $3 \leq \mu \leq 11$ the P@10 performance is relatively high, but for $\mu > 11$ P@10 drops quickly. Thus, the best Dirichlet smoothing parameter is around 5~11, which differs from the common setting (around 500~2500) on TREC retrieval tasks [11]. The main reason may be that the historical questions in question retrieval are very short, unlike the documents in traditional TREC retrieval tasks. We fix $\mu = 10$ in all the following experiments.

0.23

0.42

0.225 0.22

0.41

P@10

MAP

0.415

0.405

0.215 0.21

0.4

0.205

0.395

0.2

0.39 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 30 40 50

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 30 40 50

Fig. 1. Sensitivity to the Dirichlet smoothing parameter the dataset for question retrieval.

of the unigram language model on

In CLM, the two most important parameters are $\lambda_{Cat}$ and $\gamma$. $\lambda_{Cat}$ controls the proportional weight of the prior factor, i.e., the category-specific term saliency, while $\gamma$ controls the scale of the transformation function in Equation (11) in Section 4. In our experiments, we set these parameters by maximizing the empirical performance on the collection through exhaustive search over the parameter space shown in Table 1.

Table 1. Parameters and their corresponding search space in CLM

Parameter   Search Space
λ_Cat       1, 2, 3, ..., 20, 30, 40, 50
γ           2.1, 2.2, 2.3, ..., 3.0
μ           1, 2, 3, ..., 20, 30, 40, 50
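The exhaustive search itself is straightforward; a sketch under the assumption of a hypothetical `evaluate_map` callback that runs retrieval with the given parameters and returns MAP on the collection:

```python
import itertools

lam_cat_grid = list(range(1, 21)) + [30, 40, 50]              # 1, ..., 20, 30, 40, 50
gamma_grid = [round(2.0 + 0.1 * i, 1) for i in range(1, 11)]  # 2.1, ..., 3.0

def exhaustive_search(evaluate_map):
    """Pick the (lambda_Cat, gamma) pair maximizing MAP over the Table 1 grid."""
    return max(itertools.product(lam_cat_grid, gamma_grid),
               key=lambda p: evaluate_map(lam_cat=p[0], gamma=p[1]))
```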

Figure 2 reports the parameter sensitivity of CLM on the test collection. From Figure 2, we can see that the performance is relatively stable for $1 \leq \lambda_{Cat} \leq 10$, but drops quickly for $\lambda_{Cat} > 10$. The best performance is achieved when $\lambda_{Cat}$ is set to about 7. Moreover, for $\gamma \geq 2.3$ the best value of $\lambda_{Cat}$ is relatively insensitive to the value of $\gamma$ in the category-specific term saliency measure. However, the exponential weight $\gamma$, which controls the scale of the transformation function, does have some influence on the ranking performance: a relatively large value of about 2.7 is preferred. Thus, the best-performing parameter setting of CLM reported in the next subsection is $\lambda_{Cat} = 7$, $\gamma = 2.7$.

[Figure 2 omitted: two plots of MAP and P@10 versus λ_Cat ∈ {1, ..., 20, 30, 40, 50}, with one curve per γ ∈ {2.1, 2.3, 2.5, 2.7, 2.9}.]

Fig. 2. Sensitivity to the parameters λ_Cat and γ for CLM

5.3 Performance Comparison

Table 2 presents the comparison of the different methods for question retrieval. Rows 1 and 2 are the unigram language model without the category information, with JM smoothing and Dirichlet smoothing respectively. Row 3 is the method that retrieves only within the category specified for the queried question. Rows 4 and 5 are the Leaf Category Smoothing Enhancement and Category Enhancement methods proposed in the pioneering work [2, 3]. Row 6 is our proposed method. There are some clear trends in the results of Table 2:

1. The unigram language model with Dirichlet smoothing significantly outperforms that with JM smoothing in our experimental setting (Row 2 vs. Row 1).

2. Retrieving only in the category specified for the queried question by the users of Yahoo! Answers does not improve on the methods that ignore the category information (Row 3 vs. Rows 1, 2). This shows that the queried question's category does not help retrieval when used directly, consistent with the earlier work [2, 3]. A likely reason is that relevant questions are also contained in categories other than that of the queried question.

3. The methods proposed in [2, 3], which incorporate the category information into the retrieval model externally at the word level (LM@LS) or at the document level (LM+LM), significantly outperform the unigram language models that ignore the category information (Rows 4, 5 vs. Rows 1, 2). These results are also consistent with the earlier work [2, 3]. However, in our experiments LM+LM does not outperform LM@LS, unlike the results in [3]. We see two possible reasons: first, the preprocessing of the dataset may differ; second, LM+LM combines the two relevance scores externally at the document level, which is less natural than the word-level combination of LM@LS.


4. Our proposed CLM significantly outperforms LM@LS and LM+LM (Row 6 vs. Rows 4, 5). We conduct a significance test (a paired t-test at a significance level of 0.05) on the improvements of our approach over LM@LS and LM+LM; the improvements are statistically significant on all evaluation metrics. This demonstrates that our proposed CLM, which integrates the category information into the unigram language model internally at the word level, is more effective than the methods that incorporate the category information externally at the word level or at the document level.
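The significance test can be reproduced with a standard paired t-test over per-query scores; a sketch using SciPy (the per-query score lists are hypothetical inputs):

```python
from scipy import stats

def is_significant(per_query_scores_clm, per_query_scores_baseline, alpha=0.05):
    """Paired t-test over per-query metric scores (e.g., average precision)."""
    t_stat, p_value = stats.ttest_rel(per_query_scores_clm,
                                      per_query_scores_baseline)
    return p_value < alpha, p_value
```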

Table 2. Performance comparison of different methods for question retrieval

#   Methods    MAP      P@5      P@10
1   LMJM       0.4103   0.2921   0.2159
2   LMDir      0.4205   0.2984   0.2234
3   LM@OptC    0.3496   0.2762   0.2056
4   LM@LS      0.4638   0.3270   0.2425
5   LM+LM      0.4424   0.3159   0.2341
6   CLM        0.4684   0.3294   0.2452

6 Conclusions

In this paper, we propose a novel category-integrated language model (CLM) that integrates the category information into the unigram language model internally at the word level to improve question retrieval in CQA archives; it views category-specific term saliency as Dirichlet hyper-parameters that weight the parameters of the multinomial language model. This integration method has a solid mathematical foundation, and experiments conducted on a large scale real world CQA dataset from Yahoo! Answers demonstrate that our proposed CLM significantly outperforms the previous work that incorporates the category information into the unigram language model externally at the word level or at the document level.

Several directions of future work remain. First, it is of interest to apply and evaluate other question retrieval methods, e.g., translation-based methods, in the proposed framework. Second, we believe it is interesting to include answers in our proposed framework. Finally, the hierarchical category structure may be exploited to further improve the performance of the CLM model.

Acknowledgements. This work is supported by the National Science Foundation of China under Grant No. 61070111 and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA06030200.


References

1. Cai, L., Zhou, G., Liu, K., Zhao, J.: Learning the latent topics for question retrieval in community QA. In: IJCNLP, pp. 273–281 (2011)
2. Cao, X., Cong, G., Cui, B., Jensen, C.S., Zhang, C.: The use of categorization information in language models for question retrieval. In: CIKM, pp. 265–274 (2009)
3. Cao, X., Cong, G., Cui, B., Jensen, C.S.: A generalized framework of exploring category information for question retrieval in community question answer archives. In: WWW, pp. 201–210 (2010)
4. Jeon, J., Croft, W.B., Lee, J.H.: Finding similar questions in large question and answer archives. In: CIKM, pp. 84–90 (2005)
5. Ji, Z., Xu, F., Wang, B., He, B.: Question-answer topic model for question retrieval in community question answering. In: CIKM (2012)
6. Lafferty, J., Zhai, C.: Document language models, query models, and risk minimization for information retrieval. In: SIGIR, pp. 111–119 (2001)
7. Lee, J.-T., Kim, S.-B., Song, Y.-I., Rim, H.-C.: Bridging lexical gaps between queries and questions on large online Q&A collections with compact translation models. In: EMNLP, pp. 410–418 (2008)
8. Ming, Z.-Y., Chua, T.-S., Cong, G.: Exploring domain-specific term weight in archived question search. In: CIKM, pp. 1605–1608 (2010)
9. Wang, K., Ming, Z., Chua, T.-S.: A syntactic tree matching approach to finding similar questions in community-based QA services. In: SIGIR, pp. 187–194 (2009)
10. Xue, X., Jeon, J., Croft, W.B.: Retrieval models for question and answer archives. In: SIGIR, pp. 475–482 (2008)
11. Zhai, C., Lafferty, J.: A study of smoothing methods for language models applied to ad hoc information retrieval. In: SIGIR, pp. 334–342 (2001)
12. Zhou, G., Cai, L., Zhao, J., Liu, K.: Phrase-based translation model for question retrieval in community question answer archives. In: ACL-HLT, pp. 653–662 (2011)
