International Conference on Machine Intelligence, Tozeur – Tunisia, November 5-7, 2005

Learning Hierarchical Fuzzy Rule-based Systems for a Mobile Robot Controller

Antony Waldock
Advanced Technology Centre, BAE SYSTEMS, Filton, Bristol, England
Email: [email protected]

Brian Carse and Chris Melhuish
Intelligent Autonomous System Lab, University of the West of England, Bristol, England
Email: brian.carse,[email protected]

Abstract— A reinforcement learning technique for mobile robot control must be capable of coping with a high dimensional continuous state space. Fuzzy Q-Learning provides a means of coping with a continuous state space but suffers from problems of scalability. This paper proposes a new Hierarchical Fuzzy Q-Learning (HFQL) algorithm that combines a Hierarchical Fuzzy Rule Based System (HFRBS) and Fuzzy Q-Learning (FQL). The algorithm uses the variance in the approximated value function to determine the inaccurate rules to specialise. This initial work uses the mountain car problem to compare the performance of Tabular Q-Learning with Fuzzy Q-Learning for both uniform and variable state space representations.

I. INTRODUCTION

Development of an autonomous mobile robot for a real-world environment is extremely challenging. The construction of a suitable control policy is a complex and time-consuming process. Classical control theory [1] provides techniques for generating a control policy from mathematical models of the environment, platform and control laws. Unfortunately, constructing models that accurately represent the dynamics of a complex environment (both the platform and the external environment) can be a long and difficult process. Reinforcement learning [2] is seen as a solution to this reliance on building accurate models of the platform and environment. Learning a control policy for a mobile robot, while on the platform in the environment, bypasses the need to specify models mathematically. Fuzzy Q-Learning provides a reinforcement learning technique to construct a Fuzzy Rule Based System (FRBS). FRBSs have been demonstrated to cope well with uncertain

and imprecise environments when used for mobile robot control [3]. In recent years, Hierarchical Fuzzy Rule Based Systems (HFRBSs) [4] have been demonstrated to improve scalability using variable resolution discretization. This contribution outlines initial investigations into the construction of a HFRBS for mobile robot control using Fuzzy Q-Learning. The paper gives a brief overview of Fuzzy Q-Learning and variable resolution discretization techniques in sections II and III, followed by the algorithm details in section IV. The mountain car problem is outlined in section V, with the results and discussion presented in section VI.

II. FUZZY Q-LEARNING

Reinforcement learning [5] is concerned with learning a policy π that maps states S to actions A so as to maximize a numerical reward signal, r. Much work has been conducted on applying reinforcement learning techniques to learning a control policy for mobile robot control [6][7]. Q-Learning [8] is a popular reinforcement learning technique where the learner incrementally builds a Q-function that attempts to estimate the discounted future reward for taking actions from given states (a state-action pair). On every time step t, the value Q(s_t, a_t) of a state s_t and action a_t is updated using the reward signal r_{t+1} received and the discounted value V(s_{t+1}) of the next state s_{t+1} (Equation 1):

Q(s_t, a_t) ← Q(s_t, a_t) + α(r_{t+1} + γV(s_{t+1}) − Q(s_t, a_t))   (1)


where V(s_{t+1}) = max_{a′ ∈ A} Q(s_{t+1}, a′), α is the learning rate and γ is the discount rate. On each update, the current estimate of Q(s_t, a_t) is shifted towards the observed value r_{t+1} + γV(s_{t+1}) by the learning rate α. The best policy can be determined by selecting the action with the highest Q-value. Fuzzy Q-Learning [9] combines Q-Learning and Fuzzy Rule Based Systems (FRBSs) to provide a means of coping with continuous inputs and outputs. FRBSs are popular for mobile robot control due to their ability to cope with uncertain environments while retaining a degree of human interpretability [10][11]. A FRBS partitions the continuous state space into a series of IF..THEN rules. Each rule consists of a set of input linguistic symbols S_i and an output linguistic symbol A_i. The layout of a FRBS with two inputs is illustrated in Figure 1.
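To make the update in Equation 1 concrete, the following minimal Python sketch (illustrative only; the array layout and the parameter values are assumptions made here, not taken from the paper) applies one tabular Q-Learning step:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-Learning step (Equation 1): move Q(s, a) towards
    the observed value r + gamma * V(s'), where V(s') = max_a' Q(s', a')."""
    v_next = np.max(Q[s_next])
    Q[s, a] += alpha * (r + gamma * v_next - Q[s, a])
    return Q
```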


Figure 1: Fuzzy Q-Learning

A linguistic symbol associates a fuzzy set with a natural language meaning, e.g. small, medium, very large. A fuzzy set links an input variable x_j with a membership function to represent its applicability with regard to the current input vector x = (x_1, .., x_n) and hence determine the rule's overall influence in the decision process. For Fuzzy Q-Learning, each rule R_i has an associated Q-value q_i for each action (Equation 2):

R_i: if x_1 is S_{i,1} and ... x_n is S_{i,n} then y = A_i with q_i   (2)

where S_{i,j} is a linguistic symbol on the input variable x_j and n is the number of input dimensions. The degree of activation or 'truth value' of a rule is defined as µ_i(x) = ∏_{j=0}^{n} S_{j,i}(x_j). The Match Set is defined as the set of active rules (µ_i(x) > 0) for a given input vector. Centre of sums is used to infer a crisp action a(x) given a set of N rules using:

a(x) = Σ_{i=0}^{N} µ_i(x) × centre(a_i) / Σ_{i=0}^{N} µ_i(x)   (3)

The Q-value can be represented in a tabular form q[i, j], where i is the rule index and j is the action chosen from a set of J actions. Let i′ represent the selected action for rule i. The Q-value for an inferred action for input vector x is:

Q(x, a) = Σ_{i=0}^{N} µ_i(x) × q[i, i′] / Σ_{i=0}^{N} µ_i(x)   (4)

and the value is:

V(x) = Σ_{i=0}^{N} µ_i(x) × max_{j ∈ J} q[i, j] / Σ_{i=0}^{N} µ_i(x)   (5)

When the inferred action a is applied to the state x, the environment transitions to state y with reward r. The Q-value is updated using Equation 1, but the learning rate α is combined with the rule's degree of activation:

q[i, i′] = q[i, i′] + α (µ_i(x) / Σ_{i=0}^{N} µ_i(x)) ∆Q   (6)

where ∆Q is (r_{t+1} + γV(y) − Q(x, a)). The Fuzzy Q-Learning algorithm is outlined in Algorithm 1.

Algorithm 1 Fuzzy Q-Learning algorithm
1: repeat
2:   Observe the input vector x
3:   Select actions using an Exploration/Exploitation Policy (EEP)
4:   Compute the global consequence a(x) and the Q-value Q(x, a)
5:   Apply the action a(x)
6:   Receive the reinforcement r
7:   Observe the input vector y
8:   Update the Q-value using (6)
9: until End
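To make the inference and update steps concrete, the sketch below is a hedged Python illustration of Equations 3 to 6 and Algorithm 1; the triangular membership functions, the data layout and the small numerical guard on the denominator are assumptions made here rather than the authors' implementation.

```python
import numpy as np

class FuzzyQLearner:
    """Minimal Fuzzy Q-Learning sketch with N rules and J discrete actions."""

    def __init__(self, centres, widths, action_centres, alpha=0.1, gamma=0.99):
        self.centres = np.asarray(centres)                  # (N, n) rule centres
        self.widths = np.asarray(widths)                    # (N, n) triangular half-widths
        self.action_centres = np.asarray(action_centres)    # (J,) crisp action centres
        self.q = np.zeros((len(self.centres), len(self.action_centres)))  # q[i, j]
        self.alpha, self.gamma = alpha, gamma

    def activations(self, x):
        # mu_i(x): product of triangular membership grades over the input dimensions
        grades = np.clip(1.0 - np.abs(np.asarray(x) - self.centres) / self.widths, 0.0, 1.0)
        return np.prod(grades, axis=1)

    def act_and_q(self, x, selected):
        """Centre-of-sums action (Eq. 3) and Q-value of the inferred action (Eq. 4).
        'selected' holds the action index i' chosen for each rule by the EEP."""
        mu = self.activations(x)
        w = mu / (mu.sum() + 1e-12)                          # guard against an empty match set
        a = np.dot(w, self.action_centres[selected])
        q_xa = np.dot(w, self.q[np.arange(len(mu)), selected])
        return a, q_xa, mu

    def value(self, x):
        mu = self.activations(x)
        return np.dot(mu / (mu.sum() + 1e-12), self.q.max(axis=1))   # Eq. 5

    def update(self, x, mu, selected, q_xa, r, y):
        dq = r + self.gamma * self.value(y) - q_xa                   # Delta Q
        self.q[np.arange(len(mu)), selected] += self.alpha * (mu / (mu.sum() + 1e-12)) * dq  # Eq. 6
```

A single step of Algorithm 1 would then be: observe x, let the EEP choose the per-rule actions, call act_and_q, apply the action, observe r and y, and call update.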


The representation selected for the linguistic symbols directly affects the accuracy of the approximated function [12]. If a greater degree of accuracy is demanded, then the structure (i.e. size, shape and position) of the membership functions which represent the linguistic symbols must be altered. Increasing the granularity of the linguistic symbols can facilitate improvements in accuracy, but interpretability is reduced. For mobile robot control, the level of granularity is also determined by the resources available (i.e. physical memory size). Hence, a trade-off exists between accuracy and resources/interpretability. A method of balancing this trade-off is termed variable resolution discretization.

III. VARIABLE RESOLUTION DISCRETIZATION

The majority of function approximation techniques uniformly partition the state space. Variable resolution discretization varies the size of the partitions in an attempt to minimise the approximation error. Techniques move from a 'general to specific' representation by successively refining areas of the state space using a splitting criterion or expansion policy. A large number of variable resolution discretization techniques exist and have been applied to a multitude of research areas, such as classifier systems [13], discrete reinforcement learning [14][15] and decision trees in the form of ID3 and C4.5 [16][17]. In this contribution, variable resolution discretization is used to approximate the value function and hence learn the control policy. A Hierarchical Fuzzy Rule Based System (HFRBS) is used to perform variable resolution discretization. A HFRBS initially divides the state space into a fixed number of linguistic symbols. Fuzzy Q-Learning is used to learn the value function as in a standard FRBS. The HFRBS then employs an expansion policy to determine inaccurate areas of the state space and the corresponding rules. When an inaccurate area is identified, the rule representing that portion of the state space is specialised into a set of more specific rules. This process of specialisation continues until a desired level of accuracy is achieved. Figure 2 shows an example of a partitioned decision space (i) and its corresponding hierarchical representation (ii).
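As an illustration only, one possible specialisation step is sketched below in Python; the paper does not fix the splitting scheme at this point, so a binary split of each dimension's fuzzy set (giving 2^n child rules, in the spirit of the quad-tree style partitions of Figure 2) is assumed here.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FuzzyRule:
    """A rule covering an axis-aligned region: one (centre, half-width) triangular set per dimension."""
    centres: List[float]
    widths: List[float]

def specialise(rule: FuzzyRule) -> List[FuzzyRule]:
    """Replace a rule with 2^n more specific child rules by halving each dimension's fuzzy set."""
    children = [FuzzyRule([], [])]
    for c, w in zip(rule.centres, rule.widths):
        next_children = []
        for child in children:
            for offset in (-w / 2.0, +w / 2.0):   # two narrower sets covering the parent interval
                next_children.append(FuzzyRule(child.centres + [c + offset],
                                               child.widths + [w / 2.0]))
        children = next_children
    return children
```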


Figure 2: Hierarchical Fuzzy Rule Base

Current expansion policies require either access to the entire training set [18], [19], which is not possible for a mobile robot controller, or are only applicable to supervised learning [20].

IV. HFRBS FOR MOBILE ROBOT CONTROL

To perform variable resolution discretization using a HFRBS, an expansion policy to determine inaccurate areas of the state space must be defined. As a HFRBS can be viewed as a FRBS (see Figure 2), Fuzzy Q-Learning can be used to learn the optimal policy for a given task by approximating the value function. Reducing the approximation error of the value function can therefore improve the control policy. Within discrete reinforcement learning, a variety of expansion policies have been explored, including the variance in the value function, policy disagreement and a state's influence on the overall approximation. Munos and Moore [14] demonstrate that the variance of the value function can be used as an expansion policy within discrete reinforcement learning. Within this contribution, a new expansion policy is introduced that uses the Raw Score Method to calculate the variance and hence estimate the approximation error. The raw score method is a computationally convenient alternative for calculating the standard deviation from a set of observations. The raw score method only requires three variables (Σ_{t=0}^{n} x_t², Σ_{t=0}^{n} x_t and n), where x_t is the observation at time t and n is the number of observations. The variance of an action j on rule i can be calculated using q_t[i, j] at each time step t as an observation:

SD_{i,j} = sqrt( (Σ_{t=0}^{n} q_t[i, j]² − (Σ_{t=0}^{n} q_t[i, j])² / n) / n )   (7)

Equation 7 calculates the standard deviation of the Q-value for each rule's action.
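The bookkeeping behind Equation 7 can be kept very small. The following Python sketch (the class and method names are choices made here, not from the paper) stores only the three running quantities required:

```python
import math

class RawScoreSD:
    """Running standard deviation via the raw score method (Equation 7).

    Only three quantities are stored: the sum of squares, the sum and the count."""

    def __init__(self):
        self.sum_sq = 0.0   # sum of x_t^2
        self.sum = 0.0      # sum of x_t
        self.n = 0          # number of observations

    def observe(self, x):
        self.sum_sq += x * x
        self.sum += x
        self.n += 1

    def sd(self):
        if self.n == 0:
            return 0.0
        # SD = sqrt((sum_sq - sum^2 / n) / n); clamp small negatives from rounding
        var = max((self.sum_sq - self.sum * self.sum / self.n) / self.n, 0.0)
        return math.sqrt(var)
```

One such accumulator would be kept per rule-action pair q[i, j] and fed with the current Q-value once the relevant rules are deemed to have converged (see below).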


The variance of a rule is determined, in the same way as the value, by using the variance of the currently selected action i′. Due to the dynamic programming nature of Q-Learning, q[i, j] requires a period of time, p, to converge. If the raw score method were calculated from all observations of q[i, j], the variance would be artificially high. An estimate of the time to converge can be derived from the learning rate and discount rate. The estimated convergence time can be calculated by solving the geometric series:

1/(1 − γ) − ε ≥ Σ_{t=0}^{p} (α − tα^t)   (8)

where ε determines the accuracy required. For Fuzzy Q-Learning, the learning rate α varies depending on the rule activation. The convergence c[i, j] of an action j on rule i can be estimated using a dynamic programming approach [15]:

c[i, j] ← c[i, j] + α µ(x) (Q_max − c[i, j])   (9)

where Q_max = 1/(1 − γ) and µ(x) is µ_i(x) / Σ_{i=0}^{N} µ_i(x). q[i, j] is deemed to have converged when c[i, j] is greater than ε. The variance of a rule can only be updated when all the rules in the current match set and the previous match set have converged. The overall HFRBS algorithm is outlined in Algorithm 2. In the current implementation, specialisation is only performed at the beginning of every episode, but it is envisaged that specialisation could be done after every update.

Algorithm 2 Hierarchical Fuzzy Q-Learning
1: Specialise inaccurate rules
2: repeat
3:   Observe the input vector x (compute the match set M1)
4:   Select actions using an Exploration/Exploitation Policy (EEP)
5:   Compute the global consequence a(x) and the Q-value Q(x, a)
6:   Apply the action a(x)
7:   Receive the reinforcement r
8:   Observe the input vector y (compute the match set M2)
9:   Update the Q-value of M1 using (6)
10:  Update the convergence of M1 using (9)
11:  if M1 and M2 have converged then
12:    Update the variance with M1 using (7)
13:  end if
14: until (goal reached) OR (time out)
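One possible reading of this convergence test and of lines 10 to 13 of Algorithm 2 is sketched below in Python, reusing the RawScoreSD accumulator shown earlier; the data layout, the thresholding of the standard deviation and the helper names are assumptions made for clarity, not the authors' implementation.

```python
import numpy as np

class ConvergenceTracker:
    """Tracks c[i, j] (Equation 9) and gates the variance update (Algorithm 2, lines 10-13)."""

    def __init__(self, n_rules, n_actions, alpha, gamma, eps):
        self.c = np.zeros((n_rules, n_actions))   # convergence estimates c[i, j]
        self.alpha = alpha
        self.q_max = 1.0 / (1.0 - gamma)          # Q_max = 1 / (1 - gamma)
        self.eps = eps
        # one raw-score accumulator (Equation 7) per rule-action pair
        self.var = [[RawScoreSD() for _ in range(n_actions)] for _ in range(n_rules)]

    def update_convergence(self, i, j, mu_norm):
        # c[i, j] <- c[i, j] + alpha * mu(x) * (Q_max - c[i, j])   (Equation 9)
        self.c[i, j] += self.alpha * mu_norm * (self.q_max - self.c[i, j])

    def converged(self, match_set, selected):
        # a match set has converged when every rule's selected action has c[i, i'] > eps
        return all(self.c[i, selected[i]] > self.eps for i in match_set)

    def maybe_update_variance(self, m1, m2, selected, q):
        # only record observations once both the current and previous match sets have converged
        if self.converged(m1, selected) and self.converged(m2, selected):
            for i in m1:
                self.var[i][selected[i]].observe(q[i, selected[i]])

def rules_to_specialise(tracker, selected, threshold):
    """Expansion policy sketch: flag rules whose selected action's standard deviation is high."""
    n_rules = tracker.c.shape[0]
    return [i for i in range(n_rules) if tracker.var[i][selected[i]].sd() > threshold]
```

At the start of each episode, rules_to_specialise would supply the rules for line 1 of Algorithm 2.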

Figure 3: The Mountain Car Problem

V. MOUNTAIN CAR EXPERIMENT

The experiments were conducted on a classic reinforcement learning problem, the mountain car, as defined by Sutton and Singh [21]. The mountain car problem provides a continuous state space but, with only two dimensions, visualisation of the variable resolution discretization is still possible. The aim of the task is to park a car with zero velocity at the top of a steep mountain road. The car is underpowered and must therefore gain sufficient inertia from the opposite slope to reach the goal. The layout of the problem is depicted in Figure 3. The position and velocity are updated according to a simplified physics model, as defined by:

x_{t+1} = x_t + dx_{t+1}
dx_{t+1} = dx_t + 0.001 a_t − 0.0025 cos(3 x_t)   (10)

The position is bounded between [−1.2 ≤ x_t ≤ 0.5] and the velocity between [−0.07 ≤ dx_t ≤ 0.07]. The actions range between [−1 ≤ a_t ≤ 1] and control the acceleration of the car. At the beginning of each episode, the car is positioned randomly between −1.2 and 0.5 on the slope with a random velocity between −0.07 and 0.07. An episode lasts for 10,000 time steps or until the car reaches the goal at the top of the slope (x_t ≥ 0.5). If the car hits the left-hand side (wall), the velocity is reset to 0. The reward function is defined as:


  −1 0 Rt =  1−

|dxt | 0.07

if xt ≤ -1.2 if -1.2 > xt < 0.5 if xt ≥ 0.5
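For reference, a compact Python sketch of the simulation described above (the dynamics of Equation 10, the bounds, the wall behaviour and the reward) could look as follows; the order in which the velocity is clipped and the helper names are assumptions made here, not taken from the paper.

```python
import math
import random

def mountain_car_step(x, dx, a):
    """One step of the simplified mountain car model (Equation 10), with the
    bounds, left-hand wall and reward described in the text."""
    dx = dx + 0.001 * a - 0.0025 * math.cos(3.0 * x)
    dx = max(-0.07, min(0.07, dx))             # velocity bounded to [-0.07, 0.07]
    x = x + dx
    if x <= -1.2:                              # car hits the left-hand wall
        x, dx = -1.2, 0.0                      # velocity is reset to zero
        return x, dx, -1.0, False
    if x >= 0.5:                               # goal reached at the top of the slope
        return x, dx, 1.0 - abs(dx) / 0.07, True
    return x, dx, 0.0, False

def random_start():
    """Random start position and velocity, as used at the beginning of each episode."""
    return random.uniform(-1.2, 0.5), random.uniform(-0.07, 0.07)
```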

Initially all rules have a Q-value of zero. In order to produce a fair comparison between Tabular and Fuzzy Q-Learning, both algorithms were run with the discrete actions (−1, 0, +1). The performance is measured from 100 test points evenly distributed over the state space. The performance measures used are the average number of steps taken to reach the goal and the percentage of test points that result in a positive reward signal. The results were averaged over 20 runs for 10 randomly generated trials (the trials were 500 episodes long and the same for each run).
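A sketch of this evaluation protocol is given below (Python, reusing the mountain_car_step function sketched above); the 10 x 10 grid of start states, the greedy policy argument and the handling of episodes that never reach the goal are assumptions made for illustration.

```python
import numpy as np

def evaluate(policy, step_fn, n_per_axis=10, max_steps=10_000):
    """Run the greedy policy from a 10 x 10 grid of start states and report the
    average steps to the goal and the percentage ending with a positive reward."""
    steps_to_goal, completed = [], 0
    for x0 in np.linspace(-1.2, 0.5, n_per_axis):
        for dx0 in np.linspace(-0.07, 0.07, n_per_axis):
            x, dx = float(x0), float(dx0)
            for t in range(max_steps):
                x, dx, reward, done = step_fn(x, dx, policy(x, dx))
                if done:
                    steps_to_goal.append(t + 1)
                    if reward > 0:
                        completed += 1          # reached the goal with a positive reward
                    break
    pct_completed = 100.0 * completed / (n_per_axis * n_per_axis)
    avg_steps = float(np.mean(steps_to_goal)) if steps_to_goal else float("nan")
    return avg_steps, pct_completed
```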


VI. RESULTS AND DISCUSSION

The mountain car problem was used to compare the performance of Tabular Q-Learning and Fuzzy Q-Learning for both a uniform (QL, FQL) and a hierarchical (HQL, HFQL) state space. The results are displayed in Table I.

Table I: Mountain Car Results

Algorithm   Rules   Steps Taken   % Completed
QL          25      91.19         55.0
QL          100     79.47         76.7
FQL         25      86.29         60.6
FQL         100     76.34         81.6
HQL         25      80.85         64.4
HQL         100     77.37         70.7
HFQL        25      80.38         67.4
HFQL        93.7    71.65         73.6

Table I shows the performance of each algorithm for 25 and 100 rules. As might be expected, each algorithm shows an improvement in performance given a greater number of rules.

A. Tabular Q-Learning vs Fuzzy Q-Learning

For a uniform state space, FQL provides performance improvements over Tabular QL. The average number of steps taken is reduced by 4.8% (5.6% and 4.1% for 25 and 100 rules respectively) and the number of successfully completed runs rises by 8.25% (10.1% and 6.4%). FQL results in better performance with a smaller number of rules. The increased performance can be attributed to an improved approximation of the value function due to smooth interpolation between the values of the rules. The approximated value function for both QL and FQL with 25 rules is illustrated in Figures 4a and 4b.

Figure 4: The value function for 25 rules. (a) Tabular Q-Learning, (b) Fuzzy Q-Learning; axes: Position, Velocity, Value.

A drawback of FQL compared with QL is the time taken to learn (Figure 5). For 25 rules, QL learns an estimate of the value function within 100 episodes, whereas FQL improves gradually to reach the same level over 150 episodes. The difference in learning rate is a result of the interaction between the overlapping rules. In summary, FQL provides an improved approximation of the value function but takes longer to learn.

B. Uniform vs Variable Representation

The results in Table I demonstrate that variable resolution discretization using the variance of the approximated value function provides improvements for both Tabular and Fuzzy Q-Learning.


Figure 5: Learning curves for QL and FQL (25 and 100 rules)

Variable resolution discretization provides significant improvements when a small set of rules is used. For 25 rules, the percentage of runs completed rose from 55% to 64.4% for QL and from 60.6% to 67.4% for FQL. HFQL provides the best performance given only 25 rules, with 67.4% of runs completed and an average time taken of 80.38 steps. For 100 rules, the time taken is reduced to 71.65 steps but the percentage of runs completed does not exceed FQL. A HFQL may focus on minimising the time taken for a set of runs while ignoring others. Figures 6a and 6b show the state space partitions (x axis = position, y axis = velocity) for 25 and 100 rules.

Figure 6: The state space partitions for HFQL. (a) 25 rules, (b) 100 rules.

HFQL concentrates on specialising rules that are close to the wall (x_t ≤ −1.2) and near the goal (x_t ≥ 0.5) because these have the highest variation in the approximated value function. Focusing solely on these areas improves performance initially but could limit the overall performance.

VII. CONCLUSION AND FUTURE WORK

This contribution has outlined our initial experiments using a Hierarchical Fuzzy Q-Learning (HFQL) algorithm on the mountain car reinforcement learning problem. The new algorithm combines a Hierarchical Fuzzy Rule Based System (HFRBS) and Fuzzy Q-Learning (FQL). The work has presented the performance benefits of Fuzzy Q-Learning over Tabular Q-Learning for uniform and variable state space representations. An expansion policy using the variance of the approximated value function has been shown to provide significant performance benefits for a small number of rules. These initial investigations into combining Fuzzy Q-Learning with variable resolution discretization techniques have shown performance benefits. The experiments have also demonstrated that an expansion policy based solely on the variance could limit the overall performance. Future work will include investigations into different criteria for expansion policies and will focus on comparing the algorithms within a simulated mobile robot navigation task.

ACKNOWLEDGEMENT

This work was funded by the Advanced Technology Centre, BAE SYSTEMS, UK.

REFERENCES

[1] K. Ogata, Modern Control Engineering, 3rd ed. Prentice Hall, 1997.
[2] T. M. Mitchell, Machine Learning. McGraw-Hill, 1997, ISBN 0-07-042807-7.
[3] H. Hagras, V. Callaghan, and M. Colley, "Outdoor mobile robot learning and adaptation," IEEE Robotics and Automation Magazine, vol. 8, no. 3, pp. 53–69, 2001.
[4] O. Cordón, F. Herrera, and I. Zwir, "Fuzzy modeling by hierarchically built fuzzy rule bases," International Journal of Approximate Reasoning, vol. 27, pp. 61–93, 2001.
[5] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. Cambridge, MA: The MIT Press, 1998.
[6] W. D. Smart and L. P. Kaelbling, "Reinforcement learning for robot control," in Proceedings of SPIE, The International Society for Optical Engineering, vol. 4573, 2002, pp. 92–103.
[7] C. Gaskett, "Q-learning for robot control," Ph.D. dissertation, Research School of Information Sciences and Engineering, ANU, 2002.
[8] C. Watkins, "Learning from delayed rewards," Ph.D. dissertation, University of Cambridge, England, 1989.


[9] P. Y. Glorennec and L. Jouffe, "Fuzzy Q-learning," in Proceedings of FUZZ-IEEE 1997, Sixth International Conference on Fuzzy Systems, Barcelona, Spain, 1997, pp. 659–662.
[10] R. Guanloa, P. Musilek, F. Ahmed, and A. Kaboli, "Fuzzy situation based navigation of autonomous mobile robot using reinforcement learning," in Proceedings of the North American Fuzzy Information Processing Society (NAFIPS), 2004.
[11] E. Tunstel, T. Lippincott, and M. Jamshidi, "Introduction to fuzzy logic with application to robotics," in First National Students Conference of the National Alliance of NASA University Research Centres, NC A and T State University, Greensboro, NC, 1996.
[12] A. Bastian, "How to handle the flexibility of linguistic variables with applications," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 2, no. 4, pp. 463–484, 1994.
[13] C. Melhuish and T. Fogarty, "Applying restricted mating policy to determine state space niche using immediate and delayed reinforcement," in Evolutionary Computing, AISB Workshop, Leeds, UK, 1994, pp. 224–237.
[14] R. Munos and A. Moore, "Variable resolution discretization in optimal control," Machine Learning, vol. 49, pp. 291–323, 2002.
[15] H. Vollbrecht, "Hierarchical reinforcement learning in continuous state spaces," Ph.D. dissertation, Universität Ulm, 2003.
[16] J. Quinlan, "Induction of decision trees," Machine Learning, vol. 1, pp. 81–106, 1986.
[17] J. R. Quinlan, "Discovering rules by induction from large numbers of examples: a case study," in Expert Systems in the Micro-electronic Age, D. Michie, Ed. Edinburgh University Press, 1979.
[18] O. Cordón, F. Herrera, and I. Zwir, "A hierarchical knowledge-based environment for linguistic modelling: Models and iterative methodology," Department of Computer Science and Artificial Intelligence, Universidad de Granada, Granada, Spain, Tech. Rep., 2000.
[19] R. Holve, "Rule generation for hierarchical fuzzy systems," in North American Fuzzy Information Processing Society - NAFIPS, 1997, pp. 444–449.
[20] A. Waldock, B. Carse, and C. Melhuish, "An online hierarchical fuzzy rule-based system for mobile robot controllers," in Proceedings of EUSFLAT 2003: An International Conference in Fuzzy Logic and Technology, Zittau, Germany, September 2003, pp. 534–539.
[21] R. Sutton and S. Singh, "Reinforcement learning with replacing eligibility traces," Machine Learning, vol. 22, pp. 123–158, 1996.
