Configuration Space Based Efficient View Planning and Exploration with Occupancy Grids

Lila Torabi, Moslem Kazemi, and Kamal Gupta

Lila Torabi, Moslem Kazemi, and Kamal Gupta are with the Robotic Algorithms & Motion Planning (RAMP) Lab, School of Engineering Science, Simon Fraser University, Burnaby, BC V5A 1S6, Canada. Email: {ltorabi|moslemk}@sfu.ca, [email protected]

Abstract— The concept of C-space entropy for sensor-based exploration and view planning for general robot-sensor systems was introduced in [21], [22], [24], [25]. The robot plans the next sensing action (also called the next best view) to maximize the expected C-space entropy reduction, known as Maximal expected Entropy Reduction (MER). MER gives priority to those areas that increase the maneuverable space around the robot, taking into account its physical size and shape, thereby facilitating reachability for further views. However, previous work assumed a Poisson point process model for the obstacle distribution in the physical space, a simplifying assumption. In this paper we derive an expression for the MER criterion assuming an occupancy grid map, a representation commonly used for the workspace in much of the mobile robot community. This model is easily obtained from typical range sensors such as laser range finders and stereo vision, and it furthermore allows occlusion constraints and their effect to be incorporated in the MER formulation, making it more realistic. Simulations show that even for holonomic mobile robots with relatively simple geometric shapes (such as a rectangle), the MER criterion improves exploration efficiency (the number of views needed to explore the C-space) over physical space based criteria.

Index Terms— sensor-based path planning, occupancy grid model, C-space entropy, view planning

I. INTRODUCTION

In the robotic exploration problem, a robot equipped with a sensor seeks to acquire a map of its environment, which is initially unknown to it. The robot-sensor system explores its environment incrementally and iteratively: it plans a path in the currently known free space to the next best view (NBV) configuration, executes the planned path to move to that configuration, and then takes a sensing action [25]. We are interested in general robot-sensor systems, where a distance or range sensor is mounted on a robot mechanism with non-trivial geometry and kinematics, for example a mobile manipulator. In [21], [22], [24], [25] we introduced and developed the concept of configuration-space (C-space) entropy as a measure of the knowledge of the planning space. The C-space of a robot system is the space of all possible configurations of the system; it is the space in which the motion planning problem for a dimensioned robot is posed and solved [7], [17]. The robot plans the next sensing action (also called the next best view) to maximize the expected C-space entropy reduction, known as Maximal expected Entropy Reduction (MER). Intuitively, this measure gives priority to those areas that increase the maneuverable space around the robot, taking into account its physical size and shape, thereby facilitating reachability of further views. Furthermore, the exploration efficiency of the MER criterion (in terms of the number of views taken to explore the C-space, and by implication the reachable part of physical space) over pure physical space based criteria such as Maximize Physical space Volume (MPV) was demonstrated for fixed-base eye-in-hand systems. However, our previous work assumed a Poisson point process model for the obstacle distribution in the physical space (P-space). In this idealized model, obstacles are treated as points [25], so realistic issues such as occlusion constraints are not reflected in the MER computation.

In this paper, we derive an expression for the MER criterion with the occupancy grid (or OCC-grid) representation, where an occupancy probability is assigned to each grid cell in the physical space. It is a more realistic and commonly used model for P-space representation in much of the mobile robot community [10]. An OCC-grid is rather easily obtained from typical range sensors such as laser range finders and stereo vision, incorporating sensory noise and uncertainty [10], and it furthermore allows occlusion constraints and their effect to be incorporated in the MER formulation. We show that even for holonomic mobile robots with relatively simple geometric shapes (such as a rectangle), the MER criterion yields improved exploration efficiency (number of views) over physical space based criteria such as Closest-Frontier or Maximal unknown Physical space Volume (MPV) Frontier [23]. The overall planner we use is SBIC-PRM (sensor-based incremental construction of probabilistic roadmap) [24], briefly explained in Section III.

We consider this paper an initial step toward exploration with an autonomous mobile manipulator. Research on sensor-based exploration for mobile robots [10] and on manipulator arms [15], [19], [25] has developed independently, partly because the underlying issues are different (e.g., manipulators have no build-up of localization error but high-dimensional C-spaces), although there are commonalities (such as the use of entropy for exploration). A key reason for extending our MER formulation to the occupancy grid representation is to bring the two sub-areas together and examine the resulting issues. For instance, we feel that existing SLAM-based exploration, which assumes a point robot [8], can be nicely integrated with MER-based exploration and hence applied to mobile manipulators [14]. Our MER framework with occupancy grids applies to any general robot (with its associated geometry and kinematics) equipped with a range sensor.

Our near-term goal is to apply this framework to the Simon Fraser University (SFU) mobile manipulator (currently being assembled), a PowerBot [3] as the mobile base with a PowerCube arm mounted on it. A Hokuyo laser range scanner [5] could be mounted on the base, on the arm, or on both. However, our simulation results are presented for a simulated mobile robot (not a mobile manipulator) equipped with a simulated range scanner, mainly because our simulator, built on top of MobileSim [2], does not yet have the capability to simulate the whole mobile manipulator.

There is a substantial literature on various aspects of the exploration problem in mobile robotics [10]. A main focus has been simultaneous localization and mapping (SLAM) [10], [18] to reduce uncertainty in robot localization and mapping. More recently, entropy reduction has also been incorporated in SLAM to guide the robot to the next view [8], [10], [18]. Within the SLAM context, the entropy can be decomposed into a part due to localization and a part due to map uncertainty [10], [20]. In these works the robot is considered a point or disc-shaped mobile base: they do not take into account the size and shape of the robot, and the environment is not cluttered. Yamauchi et al. [23] used a frontier-based approach, which plans the next view on the frontier between the known and unknown physical space. In [12] a sensor-based RRT (called SRT) is presented as the motion planner, with a frontier-based criterion for the view planner [16]. SRT is applicable only to disc-shaped robots and makes no distinction between sensing and planning space. Furthermore, it assumes that the region occupied by the robot is a subset of the sensor field of view (FOV). This assumption is often not valid, since the angular span of the FOV of a typical range sensor is less than a full 360 degrees. Banos and Latombe [6] also use the MPV criterion for determining the next best view. These physical space criteria for view planning do not consider the robot's shape and size; there is no consideration of whether the newly explored area will help the robot maneuver itself for further view planning. The exploration efficiency is therefore highly dependent on the robot/manipulator shape. Among other works in the sensor-based view planning context, we mention Choset et al. [9] for sensor-based path planning for a planar rod robot, and Kruse et al. [19], [21] for previous work on eye-in-hand systems.

II. OCCUPANCY GRID MODEL AND MER CRITERION

In the OCC-grid model, the physical space P is represented as a discretized collection of grid cells c_ij. A probability p_ij, called the void probability, is assigned to each c_ij and corresponds to the probability that the cell is free. Formally, p_ij = p(C_ij = 1), where the binary random variable (r.v.) C_ij denotes the occupancy status (1/0 denotes free/obstacle) of grid cell c_ij. Assuming that the C_ij's are independent, the probability that a robot configuration q is free is given by

    p(q) := p(Q = 1) = ∏_{c_ij ∈ A(q)} p_ij,    (1)

where the binary r.v. Q denotes the collision status of the corresponding robot configuration q, and A(q) ⊂ P denotes the region occupied by the robot in physical space at configuration q (Q = 0 denotes that the robot is in collision, i.e., A(q) intersects a known obstacle). Correspondingly, the entropy of a configuration q is given by

    H(Q) = −[p(q) log p(q) + (1 − p(q)) log(1 − p(q))].    (2)

Assuming we have N configurations in C-space, the C-space entropy can be approximated as the summation of the marginal entropy terms of all configurations [25] (mutual entropy terms are ignored, mainly for computational efficiency). These N configurations could arise either from discretizing the C-space (for a low-dimensional C-space) or from a randomly chosen set of configurations (for a high-dimensional C-space):

    H(C) = H(Q_1, Q_2, ..., Q_N) ≅ Σ_{i=1}^{N} H(Q_i).    (3)
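To make these quantities concrete, the following is a minimal C++ sketch (ours, not the authors' implementation; the grid layout and the list of cells covered by A(q) are illustrative assumptions) of p(q) from (1), H(Q) from (2), and the approximation (3):

// Void probability and entropy of a configuration under the cell
// independence assumption of Eqs. (1)-(3). Entropies are in bits (log2).
#include <cmath>
#include <vector>

struct Cell { int i, j; };                       // grid-cell indices

using Grid = std::vector<std::vector<double>>;   // pFree[i][j] = p_ij

// Eq. (1): p(q) is the product of p_ij over all cells covered by A(q).
double voidProbability(const Grid& pFree, const std::vector<Cell>& Aq) {
    double p = 1.0;
    for (const Cell& c : Aq) p *= pFree[c.i][c.j];
    return p;
}

// Eq. (2): binary entropy of the collision-status r.v. Q.
double configEntropy(double pq) {
    if (pq <= 0.0 || pq >= 1.0) return 0.0;      // status known with certainty
    return -(pq * std::log2(pq) + (1.0 - pq) * std::log2(1.0 - pq));
}

// Eq. (3): C-space entropy approximated by summing marginal entropies
// over the N sampled configurations (mutual terms ignored).
double cSpaceEntropy(const Grid& pFree,
                     const std::vector<std::vector<Cell>>& samples) {
    double H = 0.0;
    for (const auto& Aq : samples)
        H += configEntropy(voidProbability(pFree, Aq));
    return H;
}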

C-space entropy represents a measure of information about the collision status of robot configurations in the C-space. Each potential sensing action provides some knowledge about the obstacle/free regions in P-space and consequently corresponds to entropy reduction in C-space. We attach a coordinate frame to the sensor's origin. Let s denote the vector of parameters that determine the sensor's frame, i.e., the sensor's configuration, and let V(s) denote the sensor field of view at sensor configuration s; by a slight abuse of notation, V(s) also denotes the set of r.v.'s (i.e., the C_ij's) corresponding to the status of the cells c_ij ∈ V(s). The expected entropy reduction for a configuration q by a sensing action s, which is also the mutual information between Q and V(s), is formally defined as

    E[Δ_s H(Q)] := E{H(Q) − H(Q|V(s))} = I(Q; V(s)).    (4)

The above expectation is carried out over all possible sensing results for sensing action s. Since, for any two random variables, conditioning always reduces expected entropy [11], a sensing action always reduces expected C-space entropy, or equivalently, E[Δ_s H(Q)] = I(Q; V(s)) ≥ 0. Based on (3), the C-space entropy reduction can be determined by summing the marginal entropy reductions of the N robot configurations:

    E[Δ_s H(C)] ≅ Σ_{i=1}^{N} E[Δ_s H(Q_i)].    (5)

The next best view (to be chosen) is the one that results in maximal expected C-space entropy reduction, where the expectation is computed over all possible sensing results for each sensing action s. The MER (Maximal expected Entropy Reduction) criterion is then formally defined as

    s_max = arg max_s E[Δ_s H(C)].    (6)

We now present the computation of the expected entropy reduction in C-space. Surprisingly, it turns out in (7) that the MER computation, even with occlusion incorporated, lends itself to a succinct expression. We first consider a beam sensor, and then a generic sensor with a conical FOV, which is essentially a set of beam sensors emitted from a common origin.

A. Beam Sensor

The beam sensor senses along a beam, V(s), starting from the sensor origin up to L, the range of the sensor, and returns the distance of the first hit point along the beam. The intersection of the beam with the unknown physical space is denoted by V_u(s). In the simplified sensor model, all grid cells along the beam in front of the hit point are sensed free, i.e., p_ij|V(s) = 1; the cell corresponding to the hit point is sensed as an obstacle, i.e., p_ij|V(s) = 0; and the p_ij values for the cells behind the hit point remain unchanged [10].

The expectation in the MER expression is over the set of all possible sensing outcomes for a given sensing action, so the probability of each outcome and the corresponding entropy reduction need to be determined. It turns out that the set of all possible sensing outcomes for V(s) can be grouped into a finite number of events, determined by simple geometric computations. The geometry of a sensing action is illustrated in Fig. 1.

Fig. 1. Schematic of a beam sensor and the associated geometric information for MER computation.

V_u(s) is divided into a set of intervals. The ith interval of V_u(s) ∩ A(q) is denoted IR_i, with |IR_i| the number of grid cells in this interval; IO_i denotes the ith interval of V_u(s) that does not intersect A(q), with |IO_i| the number of grid cells in that interval. The number of intervals depends on the geometry of the robot; for a general mobile manipulator with m convex links, the number of IR intervals is m. All possible sensing outcomes are now grouped into the following m + 2 events:

Event_c: The hit point lies inside A(q), i.e., in one of the IR_i's. The robot configuration q would be in collision were this event to happen, so H(Q|Event_c) = 0.
Event_i, i = 0, ..., m−1: The hit point lies inside IO_{i+1}.
Event_m: There is no hit point in V_u(s).

For simplicity we use p_ij = p for all c_ij. Note that this is not a restriction; the expression is simply more elaborate to write otherwise. The marginal expected entropy reduction for configuration q with H(Q) = H_0 turns out to be

    E[Δ_s H(Q)] = H_0 − Σ_{i=0}^{m} P_i H_i
                = H_0 p^{|IO_1|} − Σ_{i=1}^{m} H_i p^{Σ_{j=1}^{i}(|IO_j|+|IR_j|)} (1 − p^{|IO_{i+1}|}),    (7)

where P_i = P(Event_i), and H_i = H(q|Event_i) is easily computed by inserting p(q|Event_i) into (2), with

    p(q|Event_i) = 0                                if Event_c,
    p(q|Event_i) = p(q) / p^{Σ_{j=1}^{i} |IR_j|}    otherwise.

Note that Σ_{j=1}^{i} |IR_j| is the number of cells in A(q) ∩ V(s) that would be sensed free were Event_i to happen. For a given q, A(q) is easily determined using forward kinematics, and the |IO_i| and |IR_i| are easily determined by simple geometric computations and ray casting. The marginal entropy reduction for a configuration q is therefore linear in the number of cells in the beam, and hence is efficiently computed. Note that since occlusion does affect MER under the OCC-grid model, the criterion prefers sensing positions "closer" to the set of sensed configurations.
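Rather than evaluating the closed form in (7) directly, the expectation can also be computed by enumerating the m + 2 events above. The following C++ sketch reflects our reading of the event model (it is not the authors' code); the interval sizes |IO_1|, ..., |IO_{m+1}| and |IR_1|, ..., |IR_m| are assumed to come from ray casting through A(q):

// Expected entropy reduction E[dH(Q)] of one configuration q for one beam,
// computed event-by-event. p is the uniform void probability of unknown
// cells and pq = p(q) before sensing. Entropies in bits.
#include <cmath>
#include <vector>

double binaryEntropy(double pq) {
    if (pq <= 0.0 || pq >= 1.0) return 0.0;
    return -(pq * std::log2(pq) + (1.0 - pq) * std::log2(1.0 - pq));
}

// IO.size() == m + 1 and IR.size() == m, in front-to-back beam order.
double beamExpectedReduction(const std::vector<int>& IO,
                             const std::vector<int>& IR,
                             double p, double pq) {
    const int m = static_cast<int>(IR.size());
    const double H0 = binaryEntropy(pq);
    double prefixFree = 1.0;   // prob. that every cell passed so far is free
    double freedInAq = 1.0;    // p^(number of IR cells sensed free so far)
    double posterior = 0.0;    // running sum of P(event) * H(Q|event)

    for (int i = 0; i <= m; ++i) {
        // Event_i: the first hit lies inside IO_{i+1}; the IR cells already
        // passed are sensed free, so p(q|Event_i) = pq / freedInAq. For
        // i = m, a hit beyond the last robot interval leaves the same
        // posterior as Event_m.
        posterior += prefixFree * (1.0 - std::pow(p, IO[i]))
                     * binaryEntropy(pq / freedInAq);
        prefixFree *= std::pow(p, IO[i]);
        if (i < m) {
            // A hit inside IR_{i+1} is Event_c with H(Q|Event_c) = 0,
            // so it contributes nothing to the posterior sum.
            prefixFree *= std::pow(p, IR[i]);
            freedInAq *= std::pow(p, IR[i]);
        }
    }
    // No hit anywhere in V_u(s) (the rest of Event_m).
    posterior += prefixFree * binaryEntropy(pq / freedInAq);
    return H0 - posterior;     // Eq. (7); non-negative by construction
}

Enumerating the events keeps the bookkeeping explicit and makes the non-negativity of (7) evident; Event_c needs no explicit term since H(Q|Event_c) = 0.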

B. The Generic Laser Range Finder

Now we consider a generic laser range finder, whose conical FOV is defined by a set of laser beams emitted from one common sensor origin, as shown in Fig. 2. For the expectation computation, the fields of view of any two beams, V_i(s) and V_j(s), are considered independent. This assumption holds for a continuous representation of P-space, but discretizing the P-space to some resolution may introduce dependency between beams, since they might hit the same grid cells. It is easily proved that the expected entropy reduction for the generic sensor is the sum of the expected entropy reductions of the individual beams. Hence for b beams V_1(s), V_2(s), ..., V_b(s), where V(s) = V_1(s) ∪ V_2(s) ∪ ... ∪ V_b(s), the general form of the marginal entropy reduction for a configuration q is given by

    E[Δ_s H(Q)] = Σ_{i=1}^{b} E{H(q) − H(q|V_i(s))}.    (8)

Fig. 2. Schematic of the generic range finder as a set of beams.

Each term on the right-hand side is computed using (7); the proof is omitted due to lack of space. The expected entropy reduction for the entire C-space, using (3), is then given by summing over the N robot configurations:

    E[Δ_s H(C)] ≅ Σ_{k=1}^{N} [ H_0^k p^{|IO_1^k|} − Σ_{i=1}^{m} H_i^k p^{Σ_{j=1}^{i}(|IO_j^k|+|IR_j^k|)} (1 − p^{|IO_{i+1}^k|}) ],    (9)

where the superscript k denotes the corresponding quantity for configuration q_k (summed over all beams as in (8)).
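Continuing the sketches above, (8) and (9) reduce to nested sums over beams and sampled configurations, under the stated beam-independence assumption (the BeamGeometry container is our illustrative data layout, not from the paper):

// Per-configuration reduction, Eq. (8): sum the per-beam reductions.
#include <cstddef>
#include <vector>

struct BeamGeometry {          // |IO_j| and |IR_j| for one beam and one q,
    std::vector<int> IO, IR;   // obtained by ray casting through A(q)
};

double configExpectedReduction(const std::vector<BeamGeometry>& beams,
                               double p, double pq) {
    double dH = 0.0;
    for (const auto& b : beams)
        dH += beamExpectedReduction(b.IO, b.IR, p, pq);
    return dH;
}

// C-space reduction, Eq. (9): sum over the N sampled configurations.
double cSpaceExpectedReduction(
        const std::vector<std::vector<BeamGeometry>>& perConfigBeams,
        const std::vector<double>& pq, double p) {
    double total = 0.0;
    for (std::size_t k = 0; k < perConfigBeams.size(); ++k)
        total += configExpectedReduction(perConfigBeams[k], p, pq[k]);
    return total;
}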

Having computed the expected entropy reduction for a given sensing action, we can use the MER criterion under the OCC-grid model to decide the next scan. The algorithm is as follows:

Algorithm 1 OCC-grid-MER view planner
for every s do
    determine V(s)
    E[Δ_s H(C)] = 0
    for every q not in collision with known obstacles do
        if A(q) ∩ V_u(s) ≠ ∅ then
            compute E[Δ_s H(q)]
            E[Δ_s H(C)] = E[Δ_s H(C)] + E[Δ_s H(q)]
        end if
    end for
end for
s_max = arg max_s {E[Δ_s H(C)]}
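To make Algorithm 1 concrete, here is a minimal sketch of its loop structure, reusing configExpectedReduction and BeamGeometry from the previous sketch; the Candidate layout, with an empty beam list encoding A(q) ∩ V_u(s) = ∅, is an assumption made for illustration:

// Pick s_max = arg max_s E[dH(C)] over the candidate sensing actions.
#include <cstddef>
#include <vector>

struct Candidate {
    // beamsPerSample[q]: beam geometry of sample q under this action;
    // left empty when A(q) does not intersect V_u(s).
    std::vector<std::vector<BeamGeometry>> beamsPerSample;
};

std::size_t merNextBestView(const std::vector<Candidate>& actions,
                            const std::vector<double>& pq,  // p(q) per sample
                            double p) {
    std::size_t best = 0;
    double bestDH = -1.0;
    for (std::size_t s = 0; s < actions.size(); ++s) {
        double dH = 0.0;                       // E[dH(C)] for this action
        for (std::size_t q = 0; q < actions[s].beamsPerSample.size(); ++q)
            if (!actions[s].beamsPerSample[q].empty())
                dH += configExpectedReduction(actions[s].beamsPerSample[q],
                                              p, pq[q]);
        if (dH > bestDH) { bestDH = dH; best = s; }
    }
    return best;                               // index of s_max, Eq. (6)
}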

C. Other View Planning Criteria

As mentioned in the introduction, mobile roboticists have generally assumed a point robot and focused on physical space exploration. The following criteria have been used for choosing the next best view in the past: (i) maximize the volume of unknown physical space in the sensor FOV [15]; (ii) choose a point on the frontier, the boundary between free space and unknown space [23], either the point closest to the current robot configuration or the one that maximizes the unknown volume in the sensor FOV; and, more recently in the context of SLAM, (iii) choose a robot action that maximizes the entropy reduction in physical space [8].

To compare the C-space based MER criterion against a physical space based criterion for a dimensioned robot, some enhancement of the physical space based criteria is needed, since they assume a point robot. For example, in the original frontier-based method it is implicitly assumed that any point of the frontier is reachable, which is not the case when the robot has a size and shape associated with it.

A rather straightforward enhancement is as follows. We compute the radius of the inscribed circle centered at the centroid of the mobile robot. The known obstacles and unknown regions in the physical space are dilated by this radius, and the frontier region of this dilated space is computed. For each point on the frontier, the robot is oriented so that the bisector of the sensor's field of view is perpendicular to the tangent of the unknown region. The view configurations are sorted either by (i) the distance to the current robot configuration (Closest-Frontier) or by (ii) the volume of the unknown region inside the sensor FOV (MPV-Frontier), and put into a list L_s; the planner chooses the first one from the list. We call this method Frontier-view-planner-for-circle; a sketch of the dilation and frontier-extraction steps is given below. We could instead consider the circumscribed circle of the mobile robot shape, but this would be very conservative and narrow regions would become ineligible.
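The dilation and frontier-extraction steps can be sketched as follows (an illustration, not the authors' code; the grid codes 0 = free, 1 = obstacle, 2 = unknown are our assumptions):

#include <utility>
#include <vector>

using GridMap = std::vector<std::vector<int>>;  // 0 free, 1 obstacle, 2 unknown

// Dilate obstacle and unknown regions by the inscribed radius r (in cells):
// a cell left "free" afterwards can accommodate the robot's inscribed disc.
GridMap dilateNonFree(const GridMap& g, int r) {
    const int rows = static_cast<int>(g.size());
    const int cols = static_cast<int>(g[0].size());
    GridMap out = g;
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            if (g[i][j] == 0) continue;         // only non-free cells grow
            for (int di = -r; di <= r; ++di)
                for (int dj = -r; dj <= r; ++dj) {
                    const int ni = i + di, nj = j + dj;
                    if (di * di + dj * dj <= r * r && ni >= 0 && ni < rows &&
                        nj >= 0 && nj < cols && out[ni][nj] == 0)
                        out[ni][nj] = g[i][j];  // blocked for the disc robot
                }
        }
    return out;
}

// Frontier cells: free cells of the dilated map 4-adjacent to unknown cells.
std::vector<std::pair<int, int>> frontierCells(const GridMap& g) {
    const int rows = static_cast<int>(g.size());
    const int cols = static_cast<int>(g[0].size());
    const int di[4] = {1, -1, 0, 0}, dj[4] = {0, 0, 1, -1};
    std::vector<std::pair<int, int>> frontier;
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            if (g[i][j] != 0) continue;
            for (int k = 0; k < 4; ++k) {
                const int ni = i + di[k], nj = j + dj[k];
                if (ni >= 0 && ni < rows && nj >= 0 && nj < cols &&
                    g[ni][nj] == 2) {
                    frontier.emplace_back(i, j);
                    break;
                }
            }
        }
    return frontier;
}

Sorting the resulting frontier poses by distance to the current configuration, or by the unknown volume inside the sensor FOV, then yields the Closest-Frontier and MPV-Frontier lists L_s, respectively.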

III. SIMULATION RESULTS

We now compare the MER criterion with the extended MPV-Frontier and Closest-Frontier criteria for a rectangular mobile base. The task for the robot is to explore its environment, starting from its initial configuration. The overall sensor-based planner used is SBIC-PRM (sensor-based incremental construction of probabilistic roadmap), reported in [25]. It consists of an incrementalized model-based PRM [24] that operates in the currently known environment, and a view planner that decides a reachable configuration within the currently known environment from which to take the next view. Briefly, SBIC-PRM works as follows. The planner places a set of random samples in C-space and checks their collision status within the known physical space. The free ones are added as nodes to the roadmap, and the status-unknown samples are added to a list L. A view planner is called to plan a sensing configuration that is reachable (within the current roadmap) from the current robot configuration. The robot moves to this configuration and takes the next view. After the scan, the physical space known to the robot is updated. The collision status of all the samples in list L is also updated: the in-collision ones are removed from the list, while the free ones become nodes in the roadmap and are checked for connectivity with their neighboring nodes; in this way, the roadmap is incrementally expanded. The MER-based view planner is incorporated as follows: the set of sensor configurations (the variable s in Algorithm 1) is obtained via forward kinematics over all reachable nodes in the roadmap.

Fig. 3. The two environments and the mobile base with the sensor at the first scan. (a) Environment 1, sensor FOV with 4 m range and 100° angle. (b) Environment 2, sensor FOV with 8 m range and 180° angle.

Fig. 4. Simulation snapshots (left: P-space, right: C-space) for the three criteria in environment 1, using a laser sensor with 4 m range and 100° angle. (a) Closest-Frontier, iteration 5; (b) Closest-Frontier, iteration 15; (c) MPV-Frontier, iteration 5; (d) MPV-Frontier, iteration 15; (e) MER, iteration 5; (f) MER, iteration 15.

Fig. 5. Simulation snapshots (left: P-space, right: C-space) for the three criteria in environment 2, using a laser sensor with 8 m range and 180° angle. (a) Closest-Frontier, iteration 5; (b) Closest-Frontier, iteration 10; (c) MPV-Frontier, iteration 5; (d) MPV-Frontier, iteration 10; (e) MER, iteration 5; (f) MER, iteration 10.

The "extended" frontier-based view planner is incorporated as follows. Choose the first configuration from the list L_s that is collision free and reachable; call it q_s. If no such configuration exists, take the first configuration in L_s, find the nearest node to it in the roadmap, and extend this node towards it as far as possible; call this "extended" configuration q_s. If there is no successful extension towards the frontier, randomly select a reachable configuration from the roadmap and call it q_s.

In our simulations, the P-space, a 10 m × 10 m room, is represented by 200 × 200 grid cells. The simulation program, written in C++, uses Aria [4] as a library to communicate with the simulated robot in MobileSim, which is built on Player/Stage [1], [13]. We conducted a series of simulations for robots of different sizes in five different environments, and considered three different parameter sets for the range sensor: 1) FOV with 4 m range and 100° angle; 2) FOV with 8 m range and 180° angle; 3) FOV with 2 m range and 180° angle. Due to the random nature of the underlying planner (SBIC-PRM), we carried out 10 runs for each experiment and computed the averages of the C-space and P-space exploration rates. In Fig. 3, a 2.0 m × 0.6 m mobile base is shown in two environments: the first represents an unstructured space cluttered with obstacles, and the second contains some narrow passages. Some of the scans for the three criteria (Closest-Frontier, MPV-Frontier, and MER) in these two environments are shown in Fig. 4


and Fig. 5. Due to lack of space, we only show snapshots of the simulation results for the first environment with the first sensor parameter set, and for the second environment with the second sensor parameter set. The left sub-image in each snapshot shows the physical space and the right sub-image shows the C-space. Since the C-space is three-dimensional (x, y, θ), we show the projection of the sampled configurations onto the x−y sub-space. In P-space, the gray, white, and dark regions (green, white, and black in the colored version) denote unknown, known free, and obstacle regions, respectively. In C-space, gray is unknown, roadmap edges are shown in white, and sampled configurations in dark gray (in the colored version the unknown configurations are blue and the roadmap nodes red). The expansion of the roadmap indicates the extent of exploration.

As one can see in Fig. 4 and Fig. 5, MER not only performs better in exploring the C-space, but also explores the P-space more efficiently. This is because the MER criterion, by better exploring the C-space, lets the robot search a larger set of configurations for further view planning and hence obtain better views. This comparative advantage becomes more pronounced for sensors with a smaller FOV (Fig. 4). As an aside, one can see in Fig. 4 that MPV-Frontier performs better than Closest-Frontier in exploring the P-space, but its explored C-space is scattered into small disconnected regions, which is inefficient for navigation. Thus, in environments cluttered with obstacles, MPV-Frontier tends to be the least efficient.

To quantify the above results, we show the percentage of known C-space and P-space area, and the cumulative distance traveled by the robot, versus the number of iterations (i.e., scans taken) for the three criteria for the first set of simulations (environment 2) in Fig. 6. The percentage of known C-space is estimated by the number of known configurations from a set of random samples. Clearly, the MER criterion performs better, and the difference is more significant when the FOV is not wide compared to the dimensions of the environment. For example, the plots show that on average MER takes only 18 scans to explore 80% of the C-space and 65% of the P-space, whereas MPV-Frontier does so in 24 scans and Closest-Frontier in about 35 scans.

Fig. 6. (a) P-space exploration and (b) C-space exploration vs. number of iterations, for MER, MPV-Frontier, and Closest-Frontier in environment 2, using a laser sensor with 4 m range and 100° angle.

IV. CONCLUSION AND FUTURE WORK

We extended our C-space based MER formulation to occupancy grid maps, which are rather naturally derived from sensed range data. This further allows the MER criterion to reflect realistic issues such as occlusion constraints. A succinct expression for MER is derived and is efficiently computed. We showed that even for a simple rectangular robot, the MER criterion results in significantly more efficient C-space and P-space exploration than physical space based criteria such as frontier-based methods. This view planning method is easily applied to a general robot (mobile/manipulator) sensor system. We believe that our OCC-grid based MER criterion could be merged with SLAM approaches [8], [10] into a unified exploration framework for a mobile manipulator. We are currently pursuing this in [14].

ACKNOWLEDGMENT

We would like to thank Pengpeng Wang and Zhenwang Yao for stimulating discussions.

REFERENCES

[1] http://playerstage.sourceforge.net/index.php?src=stage.
[2] http://robots.mobilerobots.com/mobilesim/.
[3] http://www.activrobots.com/robots/power.html.
[4] http://www.activrobots.com/software/aria.html.
[5] http://www.hokuyo-aut.jp/products/urg/urg.htm.
[6] H. Banos and J. C. Latombe, "Robot navigation for automatic construction using safe regions," Proc. Int. Symp. on Experimental Robotics, 2000, pp. 395–404.
[7] H. Choset, K. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. Kavraki, and S. Thrun, Principles of Robot Motion, The MIT Press, 2005.
[8] C. Stachniss, G. Grisetti, and W. Burgard, "Information gain-based exploration using Rao-Blackwellized particle filters," Proc. of Robotics: Science and Systems (RSS), Cambridge, MA, USA, 2005, pp. 65–72.
[9] H. Choset, B. Mirtich, and J. Burdick, "Sensor based planning for a planar rod robot: incremental construction of the planar rod-HGVG," Proc. IEEE Int. Conf. on Robotics and Automation, 1997, pp. 3427–3434.
[10] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, The MIT Press, 2005.
[11] T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley and Sons, New York, 1991.
[12] G. Oriolo, M. Vendittelli, L. Freda, and G. Troso, "The SRT method: randomized strategies for exploration," Proc. IEEE Int. Conf. on Robotics and Automation, 2004, pp. 4688–4694.
[13] B. Gerkey, R. T. Vaughan, and A. Howard, "The Player/Stage project: tools for multi-robot and distributed sensor systems," Proc. 11th Int. Conf. on Advanced Robotics, 2003, pp. 317–323.
[14] Y. Huang, Environment Exploration with a Generic Robot System, Ph.D. thesis, School of Engineering Science, Simon Fraser University, in preparation.
[15] E. Kruse, R. Gutsche, and F. M. Wahl, "Efficient, iterative, sensor based 3-D map building using rating functions in configuration space," Proc. IEEE Int. Conf. on Robotics and Automation, 1996, pp. 1067–1072.
[16] L. Freda and G. Oriolo, "Frontier-based probabilistic strategies for sensor-based exploration," Proc. IEEE Int. Conf. on Robotics and Automation, 2005, pp. 3881–3887.
[17] J. C. Latombe, Robot Motion Planning, Kluwer Academic Publishers, 1991.
[18] A. A. Makarenko, S. B. Williams, F. Bourgault, and H. F. Durrant-Whyte, "An experiment in integrated exploration," Proc. IEEE Int. Conf. on Intelligent Robots and Systems, 2002, pp. 534–539.
[19] P. Renton, M. Greenspan, H. Elmaraghy, and H. Zghal, "Plan-N-Scan: a robotic system for collision free autonomous exploration and workspace mapping," Journal of Intelligent and Robotic Systems, 1999, pp. 207–234.
[20] V. Sujan and S. Dubowsky, "Efficient information-based visual robotic mapping in unstructured environments," Int. J. Robotics Research, 2005, pp. 275–293.
[21] P. Wang and K. Gupta, "Computing C-space entropy for view planning based on generic sensor model," Proc. IEEE Int. Conf. on Robotics and Automation, 2003, pp. 2406–2411.
[22] P. Wang and K. Gupta, "View planning for exploration via maximal C-space entropy reduction for robot mounted range sensors," Advanced Robotics, 2006, pp. 771–792.
[23] B. Yamauchi, "A frontier-based approach for autonomous exploration," Proc. IEEE Int. Symp. on Computational Intelligence in Robotics and Automation, 1997, pp. 146–151.
[24] Y. Yu, An Information Theoretical Incremental Approach to Sensor-Based Motion Planning for Eye-in-Hand Systems, Ph.D. thesis, School of Engineering Science, Simon Fraser University, 2000.
[25] Y. Yu and K. Gupta, "C-space entropy: a measure for view planning and exploration for general robot-sensor systems in unknown environments," International Journal of Robotics Research, 2004, pp. 1197–1223.
