Towards long-term visual learning of object categories in human-robot interaction

Luís Seabra Lopes¹, Aneesh Chauhan¹ and João Silva¹

¹ Actividade Transversal em Robótica Inteligente, IEETA/DETI, Universidade de Aveiro, 3810-193 Aveiro, Portugal
[email protected], [email protected], [email protected]

Abstract. Motivated by the need to support language-based communication between robots and their human users, this paper describes an object representation, categorization and recognition approach that enables a robot to ground the names of physical, visually observable objects. Instead of using feature vectors, the adopted representation is based on lists of relevant color ranges in an HSV image of the object. A simple agent incorporating these capabilities is used to test the approach. Experimental evaluation, carried out with the open-ended nature of word learning in mind, produced encouraging results.

Keywords: Human-robot interaction, external symbol grounding, word learning, one-class learning, cognitive, lists of color ranges

1 Introduction

Human-robot interaction is an increasingly active research field [5]. Robots are expected to adapt to the non-expert user. This adaptation includes the capacity to take a high-level description of the assigned task and carry out the necessary reasoning steps to determine exactly what must be done. Adapting to the user also implies using the communication modalities of the human user. Spoken language is probably the most powerful communication modality. It can reduce the problem of assigning a task to the robot to a simple sentence, and it can also play a major role in teaching the robot new facts and behaviors. There is, therefore, a trend to develop robots with spoken language capabilities [5; 11; 12; 14].

Language processing, like reasoning capabilities, involves the manipulation of symbols. By symbol is meant a pattern that represents some entity in the world by association, resemblance or convention [11]. Association and resemblance arise from perceptual, sensorimotor and functional aspects, while convention is socially or culturally established. In classical artificial intelligence, symbolic representations were amodal in the sense that they had no obvious correspondence or resemblance to their referents [2]. As aspects related to perception and sensorimotor control were largely overlooked, establishing the connection between symbols and their referents remained an open issue. The problem of making the semantic interpretation of a formal symbol system intrinsic to that system was called “the symbol grounding

problem” [7]. The resurgence of connectionism in the 1980s led various authors to propose hybrid symbolic/connectionist approaches. In particular, Harnad [7] proposed a hybrid approach to the “symbol grounding problem,” which consists of grounding bottom-up symbolic representations in iconic representations (sensory projections of objects) and categorical representations (learned or innate connectionist functions capable of extracting invariant features from sensory projections).

The increasing concern with perception and sensorimotor control, both in the AI and robotics communities, was paralleled in cognitive science [2]. A distributed view on language origins, evolution and acquisition is emerging in linguistics. This trend emphasizes that language is a cultural product, perpetually open-ended and incomplete, ambiguous to some extent and, therefore, not a code [9]. The study of language origins and evolution has been performed using multi-robot models, with the Talking Heads experiments as a notable example [13]. The authors of the present paper have reported a related robotic approach to language acquisition. Given that language acquisition and evolution, both in human and artificial agents, involve not only internal, but also cultural, social and affective processes, the underlying mechanism has been called “external symbol grounding” [4].

Language acquisition starts with acquiring basic vocabulary for familiar objects and concepts. In the earliest moments of child language development, most of the vocabulary consists of common nouns that name concrete objects in the child’s environment, such as food, toys and clothes. Gillette et al. [6] show that the more imageable or concrete the referent of a word is, the easier it is to learn. Concrete nouns are therefore easier to learn than most verbs, but “observable” verbs can be easier to learn than abstract nouns.

Cognitive models and robotic prototypes have been developed for the acquisition of words or labels for naming certain categories of objects. In general, the success of a robotic language acquisition system depends on a number of factors: sensors; active sensing; physical interaction with objects; consideration of the affordances of objects; interaction with the human user; object and category representations; category learning; and category membership evaluation. Most of these issues still need to be suitably addressed by the robotics research community.

Roy and Pentland [10] present a system that learns to segment words out of continuous speech from a caregiver while associating these words with co-occurring visual categories. The implementation assumes that caregivers tend to repeat words referring to salient objects in the environment. Therefore, the system searches for recurring words in similar visual contexts. Word meanings for seven object classes were learned (e.g., a few toy animals, a ball). Steels and Kaplan [14] use the notion of “language game” to develop a social learning framework through which an AIBO robot can learn its first words with human mediation. The mediator, as a teacher, points to objects and provides their names. Names were learned for three objects: “Poo-Chi,” “Red Ball” and “Smiley.” The authors emphasize that social interaction must be used to help the learner focus on what needs to be learned. Yu [17] studies, through a computational model, the interaction between lexical acquisition and object categorization.
In a pre-linguistic phase, shape, color and texture information from vision is used to ground word meanings. In a later phase, linguistic labels are used as an additional teaching signal that enhances object categorization. A total of 12 object categories (pictures of animals in a book for small children) were learned in

experiments. The authors of the present paper have previously developed a language acquisition system that integrates the user as language instructor [11]. The user can provide the names of objects as well as corrective feedback. An evaluation methodology, devised with the open-ended nature of word learning in mind, was used. In independent experiments, the system was able to learn the names of 6 to 12 categories of regular office objects.

Current approaches to the problem, although quite different from each other, all seem to be limited in the number of categories that can be learned (usually not more than 12). This limitation also seems to affect incremental/lifelong learning systems not specifically developed for word learning or symbol grounding. Several authors have pointed out the need to scale up the number of acquired categories in language acquisition and symbol grounding systems [3; 14].

This paper reports on recent developments towards the use of structured object and category representations in visual learning of object categories. The approach involves the use of color information for analyzing and segmenting the target objects. Although the final aim is to develop structured component-based representations, this paper reports on using lists of color ranges for representing objects and categories. This representation is explored in the full loop from category learning to instance recognition. Thorough experiments are also reported.

2 System architecture

The whole system comprises two main components (Fig. 1), namely the artificial agent (“the Student”) and its World (including the human Instructor). The agent architecture itself consists of a perception system, an internal lifelong category learning and recognition module, and a limited action system. The current action system abilities are limited to reporting classification results back to the Instructor. Since the current agent perceives and acts on the physical world, it will also be referred to as a robot.

Fig. 1. Agent architecture

The world includes the user, a visually observable area and real-world objects whose names the instructor may wish to teach. The user, who is typically not visible to the agent, acts as instructor. Using a simple interface, the instructor can select (by mouse-clicking) any object in the agent’s visible scene, thereby establishing shared attention. Then, the instructor can perform the following actions:
— Teach the object’s class name for learning;
— Ask the class name of the object, which the robot will determine based on previously learned knowledge;
— If the class returned in the previous case is wrong, send a correction.

The student agent currently is a computer with an attached camera¹. The computer runs the visual perception and learning/classification software as well as the communication interface for the instructor. The tasks of the perception system include capturing images from the camera, receiving user instructions and extracting object features from images (Fig. 1). When the user points the mouse to an object in the scene image, an edge-based counterpart of the whole image is generated. The implementation of the Canny algorithm from the publicly available OpenCV² library of vision routines is used for edge detection. From this edge image, the boundary of the object is extracted, taking into account the position pointed to by the user. This is performed using a region-growing algorithm. Given the boundary of the object, an image of the object is extracted from the full scene image (see example in Fig. 2).

Fig. 2. Example of extracted object (toy train)

After converting the object image to the HSV color format, it is ready for processing by the learning system. The primary reason for using the HSV format is that most of the color information is in the H component (indeed, hue specifies the dominant wavelength of the color over most of its range of values), thus facilitating image analysis based on a single dimension.

The communication between the student agent and the human instructor is supported by the perception and action systems (for instructor input and agent feedback, respectively). At present, the communication capabilities of the robot are limited to reading the teaching options (teach, ask, correct) from a menu-based interface and displaying classification results. In the future, simple spoken language communication will be supported.

¹ An IEEE1394-compliant Unibrain Fire-i digital camera is being used.
² Other OpenCV functions have also been used in the implementation. See http://www.intel.com/technology/computing/opencv/index.htm.
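To make the pipeline concrete, the following Python fragment is our own illustrative sketch, not the authors’ code: it assumes OpenCV’s Python bindings, approximates the paper’s region-growing step with cv2.floodFill bounded by Canny edges (the Canny thresholds are assumed values), and ends by building the relative-frequency hue histogram that Section 3 operates on. Note that OpenCV stores hue as 0..179 for 8-bit images, so values are doubled to match the 0..359 range used in the paper.

import cv2
import numpy as np

def hue_histogram_of_clicked_object(scene_bgr, click_xy):
    # Edge-based counterpart of the whole scene image (Canny).
    gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # thresholds are assumed values

    # Region growing from the clicked pixel, with edges acting as barriers;
    # cv2.floodFill stands in for the paper's region-growing routine.
    mask = np.zeros((edges.shape[0] + 2, edges.shape[1] + 2), np.uint8)
    mask[1:-1, 1:-1] = (edges > 0).astype(np.uint8)
    cv2.floodFill(gray.copy(), mask, click_xy, 255, loDiff=255, upDiff=255)
    object_mask = (mask[1:-1, 1:-1] == 1) & (edges == 0)

    # HSV conversion; OpenCV hue is 0..179, doubled to the paper's 0..359.
    hsv = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.int32) * 2

    # Relative-frequency hue histogram H_i, i = 0..359, in percent (Sec. 3).
    counts = np.bincount(hue[object_mask], minlength=360)
    return 100.0 * counts / max(int(object_mask.sum()), 1)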

3 Learning and recognizing object categories

Language acquisition is highly dependent on the representations and methods used for category learning and recognition. Learning a human language requires the participation of the human user as teacher or mediator [11; 14]. A learning system in a robot should support long-term learning and adaptation. The fact that symbol grounding involves finding the invariant perceptual properties of the objects or categories to which symbols refer [2; 7] suggests that learning of symbol meanings should be (predominantly) based on positive examples. Additionally, it is not easy to provide counter-examples in open-ended domains like word learning. Learning from positive examples defines the one-class learning paradigm [8; 15], which we have adopted in previous work [11; 16]. In particular, we have used support vector data descriptions (SVDD) for representing object categories [11; 16]. For describing the training instances, a vector of shape-related features was used [11]. The work described below continues to follow the one-class learning paradigm, but new representations and algorithms are used. As mentioned earlier, current work focuses on the use of color for describing visually perceived objects, learning object categories and recognizing new instances.

Objects should be described to the learning algorithm in terms of a small set of informative features. A small number of features shortens the running time of the learning algorithm, while the information content of the features determines the learning performance. At the other end, object categories must be described in terms of small, expressive representations that can be used efficiently to evaluate membership, i.e. to recognize new instances. Most work reported in the literature is based on representing objects as feature vectors, i.e. sequences of features with fixed composition and constant size. However, it has been pointed out that feature vectors are not always the best representation for learning [1]. For many different applications, artificial intelligence has classically preferred lists (structures of variable size and composition) over vectors. We believe there is unexplored potential in such an approach to classical pattern recognition tasks.

In this section, an approach to object category learning based on lists of color ranges is presented. More specifically, an object O is represented as a list of the most relevant color ranges in the given object:

O = [R1, …, Rn]

As mentioned before, the starting point for extracting the representation of an object is an image of the object in the HSV format. Only the hue component (H = 0..359) is used. Each color range is represented as a tuple of the following form:

R = (a, b, m, A)

where a and b are respectively the start and end values (hue) of the color range, m is the most frequent color value (corresponding to the maximum of the color histogram in that range) and A is the area of the object in that color range, i.e. the percentage of pixels of the object image with a color in the range.
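Read as a data structure, the representation is deliberately small. A minimal Python sketch (ours; the field comments restate the definitions above):

from dataclasses import dataclass

@dataclass
class ColorRange:
    a: int     # start hue of the range (0..359)
    b: int     # end hue of the range
    m: int     # most frequent hue in the range (histogram peak)
    A: float   # percentage of object pixels with hue in [a, b]

# An object -- and, as described below, also a category -- is simply
# a list of its relevant color ranges:
ObjectRepr = list[ColorRange]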

Fig. 3. Histogram of hue color component in the image of Fig. 2

find_color_ranges(hist) =
begin
    return find_color_ranges_rec(hist, 0, 359);
end

find_color_ranges_rec(hist, a0, b0) =
begin
    m ← max(hist, a0, b0);
    a ← find_color_ranges_side(hist, m, a0, -1);
    b ← find_color_ranges_side(hist, m, b0, 1);
    A ← sum(hist, a, b);
    L ← find_color_ranges_rec(hist, a0, a-1);
    R ← find_color_ranges_rec(hist, b+1, b0);
    if discard_range(A) then return L ∪ R
    else return { (a,b,m,A) } ∪ L ∪ R;
end

Fig. 4. Algorithm for determining relevant color ranges in an object image
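A direct Python reading of Fig. 4 might look as follows. This is our sketch, not the original implementation: the descent tolerance, the low-frequency cutoff and the minimum area are assumed parameters, since the paper does not report their values. hist is a 360-entry sequence of relative frequencies (in percent, as produced by the sketch in Section 2), and ranges are returned as (a, b, m, A) tuples.

def find_color_ranges(hist, min_area=2.0, low_freq=0.2, tolerance=0.3):
    """Recursively extract the relevant color ranges of a hue histogram."""

    def side(m, limit, incr):
        # Follow the descending trend away from peak m, tolerating small
        # ascents; stop at the limit or when frequencies become negligible.
        i = m
        while i != limit and low_freq <= hist[i + incr] <= hist[i] + tolerance:
            i += incr
        return i

    def rec(a0, b0):
        if a0 > b0:
            return []
        m = max(range(a0, b0 + 1), key=lambda i: hist[i])
        if hist[m] < low_freq:          # nothing relevant left in [a0, b0]
            return []
        a = side(m, a0, -1)
        b = side(m, b0, +1)
        A = sum(hist[a:b + 1])          # area of the object in this range
        ranges = rec(a0, a - 1) + rec(b + 1, b0)
        if A >= min_area:               # the discard_range(A) test of Fig. 4
            ranges.append((a, b, m, A))
        return ranges

    return sorted(rec(0, 359))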

The first step in extracting the list of the most relevant color ranges of an object pointed to by the user is to build a histogram of the hue component of the object image. The histogram is a vector H, where H_i, i = 0..359, is the relative frequency of hue value i in the object image. In a color histogram curve, the different colors of the object show up as peaks and bell-shaped forms. An example is given in Fig. 3. These bell-shaped forms represent the most relevant color ranges of the object image. They are determined according to the algorithm of Fig. 4. The main procedure find_color_ranges(hist) simply calls a recursive procedure find_color_ranges_rec(hist, a0, b0) with initial values corresponding to the full range of hue values, i.e. a0=0 and b0=359. In turn, the recursive procedure calls an auxiliary

procedure find_color_ranges_side(hist, m, limit, incr) to search for the left and right boundaries of the range, starting at the most frequent value, m, and moving in the specified direction with increment incr. This procedure continues while frequency values keep following a descending trend (short sequences of ascending histogram values are tolerated). The procedure stops when the descending trend stops, when frequency values drop below a threshold, or when the specified limit is reached. After determining the boundaries of the current color range, find_color_ranges_rec(hist, a0, b0) calls itself recursively to determine additional color ranges towards the left and towards the right. The current color range is discarded if its area, A, is less than a predefined threshold.

Running this algorithm on the histogram of Fig. 3 produces the following color ranges:

(a=0, b=15, m=0, A=5.4)
(a=23, b=58, m=46, A=26.6)
(a=59, b=68, m=60, A=121.3)
(a=172, b=194, m=180, A=32.7)
(a=195, b=213, m=197, A=5.5)

These color ranges respectively correspond to red, orange, yellow, light blue and dark blue.

When the user points to an object and provides its name, the extracted object representation can be used to initialize or refine the representation of the corresponding category. In this work, categories are represented in the same way as individual objects, that is, through a list of color ranges. The underlying mechanism for category formation is a procedure that merges the object representation into the current category representation. In the current version, the merging works as follows. Suppose that C is the category representation, w is the number of instances used to derive that representation and I is a new instance. For all color ranges r ∈ I and s ∈ C such that r and s are compatible³, a new range t is computed by the following weighted average:

t ← (r + w⋅s) / (w+1)

Color range t will be included in the new version of the category. All color ranges in I that do not find a counterpart in C, and vice-versa, will be directly included in the new category description.

Finally, when the user points to an object and asks for its name, membership to the known categories is evaluated. Membership of object I in category C is based on evaluating how well the respective representations match. A matching score is initialized to Score = 1.0. Color ranges are scanned from left to right. Each time a color range r ∈ I and a color range s ∈ C are found compatible, the following update step is executed:

Score ← f(r_a, s_a, θ_ab) ⋅ f(r_b, s_b, θ_ab) ⋅ f(r_A, s_A, θ_A) ⋅ Score

where θ_ab and θ_A are pre-defined thresholds and the function f is defined as:

³ Currently, two ranges are considered compatible if the distance between their start values is less than a threshold.

f(x, y, θ) = e^(−|x−y| / θ)

The use of the negative exponential has the effect that the larger the difference in each of the compared values, the lower will be the final matching score. All color ranges r ∈ I that do not find a counterpart in C and vice-versa will result in penalizing the matching score in the following way:

Score ← e^(−(r_b − r_a) / len) ⋅ Score

where len is the total length of all color ranges in instance I.
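Putting the two update rules together, a compact sketch of category merging and membership scoring could read as follows. This is our own illustration: ranges are bare (a, b, m, A) tuples, the weighted average is applied component by component (our reading of the merging rule), and the compatibility threshold as well as θ_ab and θ_A are assumed values, since the paper does not report them.

import math

def compatible(r, s, theta_start=20):
    # Footnote 3: ranges are compatible when their start values are close.
    return abs(r[0] - s[0]) < theta_start

def merge(category, w, instance):
    """Fold instance I into category C built from w previous instances."""
    new_c, used = [], []
    for s in category:
        r = next((r for r in instance if compatible(r, s)), None)
        if r is None:
            new_c.append(s)                # unmatched C range kept as is
        else:
            used.append(r)
            # t <- (r + w*s) / (w + 1), applied component by component
            new_c.append(tuple((ri + w * si) / (w + 1) for ri, si in zip(r, s)))
    new_c += [r for r in instance if r not in used]   # unmatched I ranges
    return sorted(new_c), w + 1

def f(x, y, theta):
    return math.exp(-abs(x - y) / theta)   # negative-exponential match

def membership_score(instance, category, theta_ab=30.0, theta_A=10.0):
    """How well instance I matches category C (Score starts at 1.0)."""
    length = sum(b - a for a, b, m, A in instance) or 1   # total length in I
    score, used = 1.0, []
    for r in instance:
        s = next((s for s in category if compatible(r, s)), None)
        if s is None:
            score *= math.exp(-(r[1] - r[0]) / length)    # unmatched penalty
        else:
            used.append(s)
            score *= f(r[0], s[0], theta_ab) * f(r[1], s[1], theta_ab) \
                     * f(r[3], s[3], theta_A)
    for s in category:
        if s not in used:
            score *= math.exp(-(s[1] - s[0]) / length)
    return score

Classification of an unknown object can then pick, among the known categories, the one with the highest matching score.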

4 Experimental methodology

The word learning literature has some common features. One of them is the limitation on the number of learned words: the known approaches have been demonstrated to learn up to 12 words. The other common feature is that the number of words is pre-defined, which is contrary to the open-ended nature of the word learning domain. Given that the number of categories is pre-defined, the evaluation methodology usually consists of computing certain measures of the learning process [10; 14; 17]. Some authors plot such measures against training time. As the number of words/categories is pre-defined, the plots usually show a gradual increase of these measures and convergence to a “final” value that the authors consider acceptable.

However, robots and software agents are limited in their perceptual abilities and, therefore, cannot learn arbitrarily large numbers of categories, particularly when perception does not enable the detection of small between-category differences. As the number of categories grows, learning performance will evolve, with phases of performance degradation followed by recovery, but will eventually reach a breakpoint.

A well-defined teaching protocol can facilitate the comparison of different approaches as well as the assessment of future improvements. With that in mind, the teaching protocol of Fig. 5 was previously proposed [11]. This protocol is applicable to any open-ended class learning domain. For every new class the instructor introduces, the average precision of the whole system is calculated by performing classification on all classes for which data descriptions have already been learned. Average precision is calculated over the last 3×n classification results (n being the number of classes that have already been introduced). The precision of a single classification is either 1 (correct class) or 0 (wrong class). When the number of classification results since the last time a new class was introduced, k, is greater than or equal to n, but less than 3×n, the average of all such results is used. The criterion indicating that the system is ready to accept a new object class is based on the precision threshold. However, the evaluation/correction phase continues until a local maximum is reached.

introduce Class0;
n = 1;
repeat {
    introduce Classn;
    k = 0;
    repeat {
        Evaluate and correct classifiers;
        k ← k + 1;
    } until ( (average precision > precision threshold and k ≥ n)
              or (user sees no improvement in precision) );
    n ← n + 1;
} until (user sees no improvement in precision).

Fig. 5. Teaching protocol proposed in [11], used for performance evaluation
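One reading of the precision bookkeeping in this protocol, as a small Python sketch (ours; the handling of the case n ≤ k < 3×n follows our interpretation of the description above):

def average_precision(history, n, k):
    """history: chronological 1/0 classification results so far;
    n: number of classes introduced; k: results since the last new class."""
    if k < n:
        return None                      # too few results to judge yet
    window = history[-min(k, 3 * n):]    # all of k, until 3*n are available
    return sum(window) / len(window)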

5 Experimental results

Several experiments were carried out according to the experimental protocol just described. As in previous work [11], the precision threshold was set to 0.667. No assumptions were made concerning the maximum number of categories, and new categories were not introduced in a pre-established sequence. A set of instances of 13 categories was initially gathered (office objects and small colorful toys), and more would have been added had the learning system proved able to learn more categories. These categories were introduced to the learning system in random sequences. Following the teaching protocol, each experiment stopped when the breakpoint was reached.

In total, 6 experiments were conducted. As shown in Table 1, the number of learned categories varies between 7 and 10 across experiments (the breakpoint was reached between the 8th and the 11th category). Table 1 also shows the number of question/correction iterations executed until the breakpoint, as well as the classification precision and learning efficiency values at the breakpoint (learning efficiency is defined as the ratio between the measured classification precision and the classification precision of a random classifier). All experiments followed the evolution pattern previously observed in [11]. In general, the introduction of a new category leads to a deterioration in classification precision followed by gradual recovery. This process continues until the system starts to confuse categories in a way from which it is no longer able to recover.
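Given this definition, learning efficiency reduces to a one-liner (a random classifier over n known classes has expected precision 1/n; the function below is our restatement, not the authors’ code):

def learning_efficiency(precision, n):
    # precision as a fraction in [0, 1]; 1/n is the random-classifier baseline
    return precision * n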

As illustration, Figs. 6 and 7 show the evolution of classification precision and learning efficiency in one of the experiments (experiment 5 in Table 1).

Table 1. Summary of all experiments

Experiment   # categories at breakpoint   # iterations   Class. precision at breakpoint (%)   Learn. efficiency at breakpoint
1            10                           170            60                                   6
2            8                            137            60                                   4.5
3            10                           180            50                                   5
4            10                           189            55                                   5
5            11                           256            50                                   5.5
6            11                           314            40                                   5

Fig. 6. Evolution of classification precision against number of question/correction interactions in a single experiment (breakpoint at the 11th object category)

Fig. 7. Evolution of learning efficiency against number of question/correction interactions in a single experiment (breakpoint at the 11th object category)

In this experiment, the agent could learn 10 categories, reaching the breakpoint after the introduction of the 11th category. For the first four categories, introduced in

the first 10 iterations, maximum precision (100%) was achieved and no correction was required. For each of the next six categories, performance evolved in the standard way: initial instability followed by recovery. After the introduction of the eleventh category, classification precision dropped to values around 50% and showed no recovery trend, leading the instructor to conclude that the breakpoint had been reached. Fig. 7 displays the evolution of learning efficiency. As can be seen, efficiency stabilizes around 5.5, which means that precision stays well above that of a random classifier. Table 2 shows the categories and the number of training instances used in the 5th experiment. Compared with the results of [11], a comparable performance was obtained, with the advantage of using far fewer training instances (about one third).

Table 2. Number of training instances used in each category during the experiment of Figs. 6-7

Object class    #instances        Object class    #instances
Pen             10                Cup             1
Toy three       1                 Toy one         5
Stapler         18                Toy mobile      6
Toy horse       4                 Toy A           10
Toy seal        15                Screw driver    21
Toy starfish    12

6 Conclusion

This paper follows previous work of the authors on using one-class learning for grounding a robot’s vocabulary. The target vocabulary currently includes names of office objects as well as toys visually observable by the robot. In this approach, the robot’s user is explicitly included in the language acquisition process. Through mechanisms of shared attention and corrective feedback, the human user, acting as instructor or mediator, can help the agent ground the words used to refer to object categories. This work is motivated by the need to support language-based communication between robots and their users.

While most work reported in the literature is based on representing objects as feature vectors, this paper explores the possibility of using lists of structured features for that purpose. The adopted representation is based on lists of relevant color ranges in an HSV image of the object. A simple agent incorporating these capabilities was used to test the approach. As in previous work, this work was carried out with particular concern for the fact that word learning is an open-ended domain. In six independent experiments, it was possible to teach between 7 and 10 names of object categories. These results are comparable to others reported in the literature.

In the near future, we intend to explore two directions: 1) integrate the color-based work reported above with previously published shape-based work; 2) develop more

sophisticated object and category representations that allow category learning and recognition with fewer training instances and, therefore, fewer training iterations.

Acknowledgments. The Portuguese Research Foundation (FCT) supported this work under contract POSI/SRI/48794/2002 (project “LANGG: Language Grounding for Human-Robot Communication”), which is partially funded by FEDER.

References

1. Aha, D. W. & Wettschereck, D. (1997). Case-based learning: Beyond classification of feature vectors. In M. van Someren & G. Widmer (Eds.), Proceedings of the Ninth European Conference on Machine Learning (ECML 1997) (pp. 329-336), LNCS 1224. London: Springer-Verlag.
2. Barsalou, L. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577-609.
3. Cangelosi, A. & Harnad, S. (2000). The adaptive advantage of symbolic theft over sensorimotor toil: Grounding language in perceptual categories. Evolution of Communication, 4(1), 117-142.
4. Cowley, S. J. (2007). Distributed language: Biomechanics, functions and the origins of talk. In C. Lyon, C. Nehaniv & A. Cangelosi (Eds.), Emergence of Communication and Language (pp. 105-127). London: Springer.
5. Fong, T., Nourbakhsh, I. & Dautenhahn, K. (2003). A survey of socially interactive robots: Concepts, design, and applications. Robotics and Autonomous Systems, 42, 143-166.
6. Gillette, J., Gleitman, H., Gleitman, L. & Lederer, A. (1999). Human simulations of vocabulary learning. Cognition, 73, 135-176.
7. Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.
8. Japkowicz, N. (1999). Are we better off without counter-examples? In Proceedings of the First International ICSC Congress on Computational Intelligence Methods and Applications (CIMA-99) (pp. 242-248).
9. Love, N. (2004). Cognition and the language myth. Language Sciences, 26, 525-544.
10. Roy, D. & Pentland, A. (2002). Learning words from sights and sounds: A computational model. Cognitive Science, 26, 113-146.
11. Seabra Lopes, L. & Chauhan, A. (2007). How many words can my robot learn? An approach and experiments with one-class learning. Interaction Studies, 8(1), 53-81.
12. Seabra Lopes, L., Teixeira, A. J. S., Quinderé, M. & Rodrigues, M. (2005). From robust spoken language understanding to knowledge acquisition and management. In Proceedings of Interspeech 2005 (pp. 3469-3472). Lisbon, Portugal.
13. Steels, L. (2001). Language games for autonomous robots. IEEE Intelligent Systems (special issue on Semisentient Robots, L. Seabra Lopes & J. H. Connell, Eds.), 16(5), 16-22.
14. Steels, L. & Kaplan, F. (2002). AIBO’s first words: The social learning of language and meaning. Evolution of Communication, 4(1), 3-32.
15. Tax, D. M. J. (2001). One-class classification: Concept learning in the absence of counter-examples. Unpublished doctoral dissertation, Technische Universiteit Delft, The Netherlands.
16. Wang, Q., Seabra Lopes, L. & Tax, D. M. J. (2004). Visual object recognition through one-class learning. In A. Campilho & M. Kamel (Eds.), Image Analysis and Recognition: International Conference, ICIAR 2004, Part I (pp. 463-469), LNCS 3211. Springer.
17. Yu, C. (2005). The emergence of links between lexical acquisition and object categorization: A computational study. Connection Science, 17(3-4), 381-397.
