16831 - Statistical Techniques in Robotics
Final Project Report

Navigational Policy Generation for the Intelligent Mobility Platform Using Wireless and Stereo Vision for Localization

Nick Armstrong-Crews, Sandra Mau, Kevin Yoon

December 14, 2006

1 Introduction

The Intelligent Mobility Platform (IMP) is an assistive mobility device for the elderly (see Figure 1). It is a robotically-enhanced rolling walker equipped with a laser rangefinder, motors, encoders, and a PDA-style input and display device that can be used to request and read navigational directions to desired locations. It is also capable of automatically parking itself when it is not needed, as well as positioning itself to assist a person out of a chair. Laser hardware is generally expensive and heavy. Cost, power consumption, ease of use, and safety are just some of the reasons to replace the laser with a cheaper, passive, and more lightweight sensor, provided it can be done with little or no sacrifice in performance and reliability. Moreover, the wheel encoders are not only heavy, but their accuracy is subject to the nature of the driving surface and corrupted by forces applied by the user. Additionally, a mechanism for fast global localization is desirable, since such a device would not be powered continuously and thus would be unable to continuously keep track of its position. We propose to enhance the IMP with two sensing modalities: stereo vision and wireless reception. Stereo vision is cheaper than most laser rangefinders. It also consumes less power and can deliver 3D range information rather than just a planar array of ranges. Furthermore, it is capable of capturing information that

Figure 1: The IMP (Intelligent Mobility Platform)

is more relevant to human navigation, such as the presence of other people or vehicles and where they are heading. Visual odometry can also be used to replace the function of the encoders and drive a motion model of the estimated pose. Wireless cards of the 802.11 standard are as cheap and ubiquitous as the wireless access points (WAPs) they communicate with. More and more buildings are equipped with networks of these WAPs, and recent research has taken advantage of them as landmarks for localization [1]. Given a map of known WAP locations, a wireless card can be used for instantaneous localization without the need for initial motion. Combined with 3D range data obtained from stereo and an a priori known occupancy map, the accuracy of the pose estimate can be further improved. The IMP, being an assistive mobility device, should naturally be capable of navigating its user from point A to point B. Its ability to localize, however, is location-sensitive. Certainty in pose estimates depends on how many WAPs are detectable and the strength of the signals that are detected. The accuracy of visual odometry also varies depending on the local presence of viewable features. In [3], Roy et al. introduce the idea of belief compression to efficiently solve POMDPs. We intend to apply this concept here to generate navigational policies that balance high pose certainty with low-cost paths. In collaboration with Geoff Gordon, we have developed and tested these features in simulation to verify our methods before implementation and testing

on the IMP.

The contributions of this work include:

1. Combining wireless signal strength and local range data for improved localization accuracy and certainty.

2. Demonstrating the efficacy of belief compression for the navigation of an assistive mobility platform.

2 Related Work

Two major aspects of this project are sensor fusion for localization and smart path planning to reduce the probability of getting lost. Both aspects contribute to a better localization estimate, both now and in the future. The sensory data we will incorporate for localization include visual feature-tracking using a camera and signal strength maps of wireless access points. This section describes related work in Wi-Fi localization and visual feature-tracking for localization. We also review methods of navigation that reduce the likelihood of higher localization error in the future, including Roy's original Coastal Navigation (or Augmented MDP) method and his subsequent POMDP variant based on exponential family principal components analysis (E-PCA) [3].

Wi-Fi Localization

Localization using Wi-Fi can generally be grouped into methods that use range estimation and those that do not [6]. The former tend to use triangulation based on estimated range [5]; the latter are more varied, the most popular being a signal strength look-up map [5] [1]. Triangulation requires less training and can be performed quickly on-line; however, the results tend to be less accurate. Signal strength look-up maps are more accurate but require more memory. Any Wi-Fi localization method is made difficult by fluctuating signal strengths and sparse access point coverage; most methods require the detection of three or more access points [5]. The best methods known to us are accurate only to within about 2 m under favorable conditions [5] [1]. This project recognizes these shortcomings of Wi-Fi localization and proposes to augment it with vision.

Stereo Visual Odometry and Range Data Extraction


Vision has been used in robotic motion control for many decades; a very thorough survey of the topic was written by DeSouza and Kak [4]. The implementation of visual odometry involves tracking features at each time step and using that information to estimate motion. The benefits of using stereo vision for visual odometry include robust hazard detection and accurate odometry [7]. Stereo range data can be extracted from disparities between corresponding features in a stereo image pair and will be used for improved localization. At this initial stage, however, visual odometry will not be used to drive the motion model of the localization filter due to the difficulty of simulating vision data.

Path Planning based on Position Uncertainty

One common method for motion planning under position uncertainty is to use POMDPs. One drawback, however, is that POMDPs are intractable for large numbers of states. Nick Roy introduced a path planning method known as coastal navigation (or Augmented MDP) in 1999 [2] with the objective of planning paths that reduce the average positional uncertainty. His original Augmented MDP trajectory generation method finds a trade-off between taking the shortest path and minimizing entropy (a measure of sensor uncertainty). A more recent advance scales POMDPs to large problems by exploiting "belief space sparsity" to reduce the problem to lower dimensions [3]. The difference between the Augmented MDP (or coastal navigation) and E-PCA is that E-PCA uses belief features instead of entropy; both, in essence, try to convert the POMDP into a tractable MDP. We will implement both the standard Augmented MDP and the E-PCA compressed MDP.

3 Particle Filter Localization

Our approach for localization uses the wireless signal strength for gross-scale localization (in particular, solving the “kidnapped robot problem”), while using stereo range data for local state estimation and visual feature-tracking for fine-scale odometry.


3.1 Localization with Wireless Access Points (WAP)

The wireless sensor model uses a pre-generated 2D map of predicted signal strengths from each WAP. Of course, the received signal strength (RSS) from a single WAP could be the same at a variety of locations, hence the decision to use a particle filter with its multi-modal filtering capabilities. All of these 2D RSS maps are combined to form a 3D RSS lookup table, which is used by the wireless sensor model when calculating the likelihood of a wireless reading z^w, a vector of signal strength readings whose length is the number of WAPs in the lookup table. The RSS lookup table is constructed by taking wireless readings at various points in the scene. The signal strength values are measured over approximately 30 seconds and time-averaged to compensate for fluctuating signals and delayed WAP detections. The result is interpolated for every WAP encountered in the map to construct each 2D slice of the lookup table. A simulated lookup table is shown in Figures 2(b) and 2(c), which were generated from arbitrarily chosen WAP locations that do not necessarily correspond to the locations of actual WAPs. The map of Wean Hall in Figure 2(a) was constructed with CARMEN¹ via offline SLAM using data collected from the laser rangefinder on the IMP. The likelihood value p(z_t^w | x_t) is produced by comparing the expected RSS reading given pose x_t, denoted z^{w*}, to the measured z^w. Let z^w(i) denote the signal strength detected from the ith WAP. The pseudocode for this calculation is shown in Algorithm 1, where normpdf(µ, x, σ) is a function that returns the probability density at x given mean µ and standard deviation σ. Note also that z^w may have NaN elements corresponding to WAPs that were not detected. The algorithm handles this by replacing such instances with a small probability value.
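As a rough sketch of the table-construction step, the following builds one 2D slice of the lookup table from time-averaged survey readings. The report does not specify which interpolation scheme was used; the inverse-distance weighting here is an illustrative stand-in, and the function name and dimensions are hypothetical.

```python
import numpy as np

def build_rss_slice(survey_xy, survey_rss, grid_w, grid_h, power=2.0):
    """Build one 2D RSS lookup-table slice from time-averaged survey readings.
    survey_xy: (N, 2) survey locations; survey_rss: (N,) averaged strengths.
    Inverse-distance weighting stands in for whatever interpolation was used."""
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    # Distance from every grid cell to every survey point.
    d = np.linalg.norm(grid[:, None, :] - survey_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    rss = (w * survey_rss).sum(axis=1) / w.sum(axis=1)
    return rss.reshape(grid_h, grid_w)
```

A cell that coincides with a survey point recovers that point's averaged reading; cells in between blend their neighbors, which mimics the smooth slices shown in Figures 2(b) and 2(c).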

3.2 Localization with Stereo Vision

For fine-scale localization, a stereo camera is used to produce range data that is matched against a pre-built 2D occupancy map (Figure 2(a)). The stereo camera produces a disparity image such as the one shown in Figure 3. Pixel intensities in the disparity image correspond to distances, so a 3D point cloud can be generated. These points are then projected onto the ground plane for comparison against the 2D occupancy map. The result is a vector of 2D points, z^s. In this work, z^s is created directly from the Wean Hall

¹ http://carmen.sourceforge.net/



Figure 2: (a) 5th Floor Wean Hall (including walkway outside leading to Porter Hall). (b) and (c) represent two slices of the RSS lookup table corresponding to simulated WAPs in the Wean Hall scene. Red indicates areas of high RSS while blue indicates areas of low RSS. Signal strength decreases with the inverse square of the distance from the transmitting WAP.

Algorithm 1 Calculate p(z_t^w | x_t)
  p = [ ]
  Calculate z_t^{w*} from x_t and the RSS lookup table
  σ = 0.5  {experimentally determined}
  for i = 1 to NumberOfWAPs do
    if z^w(i) == NaN then
      p(i) = 0.001
    else
      p(i) = normpdf(z^{w*}(i), z^w(i), σ)
    end if
  end for
  return ∏_i p(i)
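A direct transcription of Algorithm 1 into Python (numpy) might look like this; miss_prob corresponds to the 0.001 constant for undetected WAPs, and the function name is ours:

```python
import numpy as np

def wireless_likelihood(z_w, z_w_star, sigma=0.5, miss_prob=1e-3):
    """Likelihood p(z_t^w | x_t) of a wireless reading, following Algorithm 1.
    z_w: measured signal strengths (NaN where a WAP was not detected).
    z_w_star: expected strengths at the pose, from the RSS lookup table."""
    z_w = np.asarray(z_w, dtype=float)
    z_w_star = np.asarray(z_w_star, dtype=float)
    # Gaussian density of each measured strength around its expected value.
    p = np.exp(-0.5 * ((z_w - z_w_star) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # Undetected WAPs contribute a small constant probability instead.
    p[np.isnan(z_w)] = miss_prob
    return float(np.prod(p))
```

The vectorized form evaluates all WAPs at once, which matters when this is called once per particle per timestep.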


Figure 3: Original image (left), disparity image (center), and ground plane projection (right)

occupancy map via ray-tracing, due to the difficulties of simulating vision data. Noise is added to better simulate real-world data. The z^s measurement is passed into the stereo sensor model, where the points are converted to a vector of ranges and binned at some specified angular resolution. Given that 3D point data is available, scan matching could certainly have been used, but binning the points into a fixed-length range vector limits the size of the measurement vector (which might otherwise include hundreds of points) and allows for speedier likelihood calculations. A beam mixture model such as the one described in [8] is used to determine the likelihood value p(z_t^s | x_t) for each particle.
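The binning step might be sketched as follows; bin count, field of view, and maximum range are illustrative parameters, not values from the report:

```python
import numpy as np

def bin_points_to_ranges(points_xy, n_bins=36, fov=(-np.pi, np.pi), max_range=10.0):
    """Bin ground-projected stereo points into a fixed-length range vector.
    points_xy: (N, 2) array of points in the robot frame.
    Returns the minimum range observed in each angular bin (max_range if empty)."""
    angles = np.arctan2(points_xy[:, 1], points_xy[:, 0])
    ranges = np.hypot(points_xy[:, 0], points_xy[:, 1])
    edges = np.linspace(fov[0], fov[1], n_bins + 1)
    out = np.full(n_bins, max_range)
    idx = np.clip(np.digitize(angles, edges) - 1, 0, n_bins - 1)
    for i, r in zip(idx, ranges):
        out[i] = min(out[i], r)  # keep the nearest obstacle per bin
    return out
```

Keeping the minimum range per bin treats the nearest point in each direction as the obstacle, which is what a laser-style beam model expects.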

3.3 Combined Filtering

The weights given by the wireless sensor model and the stereo sensor model are easily combined to produce a single weight incorporating the likelihood of both sensor readings:

w_t = p(z_t^w | x_t) p(z_t^s | x_t)
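Per particle, the combination is a single multiply; a sketch with normalization and a systematic resampling step follows. The report does not describe its resampler, so the systematic scheme here is one common choice, not necessarily what was used.

```python
import numpy as np

def combined_weights(p_wireless, p_stereo):
    # w_t = p(z_t^w | x_t) * p(z_t^s | x_t), normalized over the particle set
    w = np.asarray(p_wireless, dtype=float) * np.asarray(p_stereo, dtype=float)
    return w / w.sum()

def systematic_resample(particles, weights, rng=None):
    """Draw a new particle set with frequency proportional to the weights."""
    rng = rng or np.random.default_rng(0)
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one uniform draw, n strata
    return particles[np.searchsorted(np.cumsum(weights), positions)]
```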

3.4 Particle Filter Performance

Figure 4 compares estimation accuracy of the robot pose as it is manually driven around the Wean Hall scenario with different combinations of sensors enabled. It is fairly clear that the combined use of wireless and stereo data leads to localization accuracy surpassing that of either sensor alone. Figure 5 compares localization error between coastal paths and straight-line paths (driven manually). The truncated lines indicate that the goal was reached earlier and the simulation terminated; hence, we clearly see the trade-off between short paths and localization error. Note that the coastal

Figure 4: Localization error

path achieves lower error with stereo alone, but with both sensors we see an interesting effect that differs from the results obtained by [2]: the error associated with a coastal path is not significantly different from the error of a straight-line path. This is because there are still wireless "features" in the center of the map, even though there are no occupied-space features. This has important implications in planning, as described below.

4 Navigational Policy Planning Methods

4.1 Overview

For generating navigational policies, we tried two methods: Augmented MDP using mean and entropy features (coastal navigation) and a variant using Exponential Family PCA (E-PCA) for belief features [3]. Both methods, as mentioned in the related work, reduce a high-dimensional POMDP into a lower-dimensional problem, making it tractable. We collect sample beliefs, reduce their dimensionality, build a low-dimensional belief MDP, then solve the resulting MDP to generate a navigational policy. This work is based on previous work done by Roy and Gordon [3]. We will be testing on large problems (1800 states) to see how well these methods scale. Figure 6 outlines the two approaches, which are explained in further detail in the following sections.

Figure 5: Straight-line versus coastal path comparison. [Plot: xy-pose error (meters) versus time for straight and coastal paths, each with stereo-only and Wifi+Stereo sensing.]


Figure 6: Learn reduced models: (a) E-PCA POMDP (b) Augmented MDP


4.2 E-PCA

1. Generate an occupancy map (using the VASCO software for SLAM) and a wireless signal strength map (using the Kismet software). "Ground truth" position is provided by the Carmen software via the SICK laser.

2. Using the particle filter, drive the robot to achieve a set of reachable beliefs: B*.

3. Find a (locally) optimal compression for the beliefs via E-PCA: B → b.

4. Sample discrete motion and sensor models to learn a transition model in the reduced space: b* → b*'.

5. Solve the learned MDP via value iteration: π(b*) → a.

6. Execute the policy by calculating features and 1-NN: {q} → b → b*, π(b*) → a.
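The compression in step 3 can be sketched as follows. Roy and Gordon fit E-PCA with a Newton-style solver; this simplified version uses plain gradient descent on the same Poisson-style E-PCA loss, sum(exp(UV) - B ∘ UV), and all dimensions, learning rates, and names are illustrative:

```python
import numpy as np

def epca(B, k=4, iters=3000, lr=1e-2, seed=0):
    """Gradient-descent sketch of E-PCA: find U (n x k) and V (k x m) so that
    exp(U @ V) approximates the belief matrix B (one belief per column).
    The gradient of sum(exp(UV) - B * UV) w.r.t. the product (U @ V) is exp(UV) - B."""
    rng = np.random.default_rng(seed)
    n, m = B.shape
    U = 0.01 * rng.standard_normal((n, k))
    V = 0.01 * rng.standard_normal((k, m))
    for _ in range(iters):
        R = np.exp(U @ V) - B        # residual = gradient w.r.t. the product
        U = U - lr * (R @ V.T)
        V = V - lr * (U.T @ R)
    return U, V

def epca_loss(B, U, V):
    M = U @ V
    return float(np.sum(np.exp(M) - B * M))
```

With a Newton solver and convergence checks this becomes the real algorithm; the gradient version is only meant to show the shape of the computation and why the exponential link keeps reconstructed beliefs nonnegative.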

4.3 Augmented MDP

1. Generate an occupancy map (using the VASCO software for SLAM) and a wireless signal strength map (using the Kismet software). "Ground truth" position is provided by the Carmen software via the SICK laser.

2. Using the particle filter, derive reduced-space belief features based on mean and variance: {q} → b*.

3. Sample continuous motion and sensor models to learn a transition model in the reduced space: b* → b*'.

4. Solve the learned MDP via value iteration: π(b*) → a.

5. Execute the policy by projecting the belief into the reduced space: {q} → B → b → b*, π(b*) → a.
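Steps 3-4 in both pipelines reduce to solving a small MDP; a generic value-iteration solver over the reduced belief states might look like this. The toy three-state chain in the test stands in for the real 1800-state problem; its rewards and transitions are invented.

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, tol=1e-8):
    """Solve an MDP. T[a] is the (S x S) transition matrix for action a,
    R the per-state reward collected on arrival. Returns (V, policy)."""
    n_states = len(R)
    V = np.zeros(n_states)
    while True:
        # Q[a, s] = expected reward-to-go of taking action a in state s
        Q = np.array([T[a] @ (R + gamma * V) for a in range(len(T))])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

The returned policy is exactly the π(b*) → a lookup of step 4: greedy in the learned low-dimensional model.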

4.4 Belief Point Collection

Sample beliefs are required by the MDP planners to serve as a database of beliefs that can actually be encountered. The particles of the localization filter are used directly to approximate these beliefs. Figure 7(a) outlines belief calculation for the E-PCA approach. A belief point is created by centering a Gaussian distribution at each particle and calculating, at each state, the sum of the probability densities of all particles.


The values are normalized over all states, resulting in an 1800-element belief vector. For the A-MDP approach, shown in Figure 7(b), the low-dimensional belief, consisting of the sufficient statistics mean_x, mean_y, mean_θ, var_x, var_y, and var_θ, can be calculated directly from the particles. Belief point collection entails manually driving the robot in the scene in a manner typical of how it is expected to be driven during normal use, and also such that good coverage of the state space is obtained. (Note that, for this domain, this means not simply driving to every xy-position but also visiting different orientations at those positions.) Belief points are calculated and logged at a fixed time interval.
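The E-PCA belief-point construction just described might be sketched as follows; the state centers and kernel width are illustrative, and the sketch ignores theta wrap-around for brevity:

```python
import numpy as np

def belief_from_particles(particles, state_centers, sigma=1.0):
    """High-dimensional belief for E-PCA: place a Gaussian at each particle,
    sum the densities at every discrete state center, and normalize.
    particles: (P, d) array; state_centers: (S, d) array of state centers."""
    d2 = ((state_centers[:, None, :] - particles[None, :, :]) ** 2).sum(-1)
    b = np.exp(-0.5 * d2 / sigma**2).sum(axis=1)  # kernel density per state
    return b / b.sum()
```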


Figure 7: Collect reachable beliefs: (a) E-PCA POMDP (b) Augmented MDP

5 Results

Due to hardware problems with the IMP during development (described further in Unexpected Challenges), we ran a simulated problem instead. Tests were done using a square map of 300x300 pixels. This was discretized into 15x15 square bins in (x,y), as well as 8 possible orientations from 0 to 2π, giving us a 15x15x8 = 1800 state problem. Belief points were collected by sending the simulated robot to randomly assigned waypoints on the map, as described in the Belief Point Collection section.
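The discretization maps a continuous pose to one of the 1800 states; a sketch (the row-major index ordering is our choice):

```python
import numpy as np

def pose_to_state(x, y, theta, map_size=300, n_xy=15, n_theta=8):
    """Discretize (x, y, theta) into one of n_xy * n_xy * n_theta states.
    x, y in [0, map_size) pixels; theta in radians (wrapped into [0, 2*pi))."""
    ix = int(x * n_xy / map_size)            # 20-pixel square bins
    iy = int(y * n_xy / map_size)
    it = int(theta % (2 * np.pi) / (2 * np.pi) * n_theta)
    return (ix * n_xy + iy) * n_theta + it
```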


5.1 E-PCA Compression

E-PCA is a compression algorithm proposed by Gordon and Roy for use in POMDP methods [3]. This algorithm efficiently compressed the 1800-state problem down to 4 basis functions. The speed of this algorithm allowed us to run many belief points (around 10,000) through it, which took less than 15 minutes. Reconstruction using E-PCA had an average KL-divergence error of 0.1956, slightly higher than the results given in Roy [3]. A possible reason is that E-PCA did not do as well at reconstructing the orientation angle (theta) as it did the position (x,y); the problems Roy looked at in the paper appear to consider only position, not orientation, for simplicity. We found that E-PCA was very good at reconstructing the (x,y) pose (which again agrees with Roy and Gordon's findings); theta, however, tended not to be as accurate. Figure 8 shows two examples of a belief before and after compression and reconstruction through E-PCA. The top images show the robot pose as a state of (x,y,theta) along the x axis. The peaks correspond to high probability at a certain (x,y), and the periodicity in the peaks is due to high probability at that (x,y) for different thetas (up to 8 possible orientations). The two images below are the original belief and the reconstructed belief plotted on an (x,y) 2D map, neglecting theta. Figure 8(a) on the left shows a belief that E-PCA reconstructed very well for (x,y,theta), whereas in Figure 8(b) the reconstruction was not as good for theta, but the (x,y) position was very accurate, as can be seen in the latter two plots. Possible reasons for this are discussed further in the Comparing Methods subsection.

5.2 E-PCA Compressed Belief MDP

We found that building the MDP instance from the high-dimensional transition, observation, and reward functions and the E-PCA basis took the bulk of the time in this entire process. A variety of methods were used to improve computation time:

• We took advantage of the fact that the sensors (each WAP and the stereo camera) are independent of each other when calculating our observation probabilities, so the likelihood can be rewritten as p(z_t | x_t) = p(z_t^{wap1} | x_t) p(z_t^{wap2} | x_t) ... p(z_t^{stereo} | x_t), such that our matrices were much smaller in size and much faster to manipulate.

• Finding the E-PCA basis parameters was very quick, so we ran a large number of belief points (~10,000) through that step to try to get as accurate a


Figure 8: The top-left picture shows the original belief (blue *) and the reconstructed E-PCA belief (red o) over all states. The peaks correspond to certainty of being at (x,y) for a particular theta. Below that, the same beliefs are plotted as (x,y) poses on the map. (a) shows a case where E-PCA was spot-on for (x,y,theta), and (b) shows one where E-PCA predicted theta less accurately, but still predicted (x,y) well.


Figure 9: E-PCA-generated policy executions. The run on the left shows a top-right corner starting point; the run on the right shows a top-left corner starting point.

basis as possible. The step of building the reduced MDP took much longer, so we ran a few selective points (~900) through it to reduce the computation time. It makes sense that the basis for projection and reconstruction requires more data points, because if the transformation is not accurate, then the MDP instance would be useless, whereas a sparse MDP instance might not provide the optimal policies but should still give a general idea of motion.

• We matricized/vectorized everything in Matlab. Although it seems trivial, it was important, and we spent a lot of time doing this due to the large datasets we were running.

With regard to performance, the navigational policies are not as good as those of the A-MDP. Figure 9 shows two runs with starting points at opposing top corners. As you may notice, the path seems somewhat coastal but meanders at times, almost as if the robot explores outward until it starts getting lost, then turns back towards a wall to find its way. Also, not every starting point on the map leads to the goal (in a reasonable number of steps). Possible reasons may be the small number of points we used to build an instance (~900), as well as the small number of samples taken when building it. The more samples, the longer it takes, though the result would be more accurate; for debugging and testing purposes, we wanted to keep the computation time as low as possible.


5.3 Augmented MDP using Mean and Variance

The Augmented MDP features of mean and variance made solving the problem much more tractable (and less confusing). It rests on the same idea of using belief compression to reduce the dimensionality of the problem as the E-PCA method, but with much more accurate results in our simulations. Figure 12 shows that when the mean and variance belief points were reconstructed as a high-dimensional belief, the KL divergence was very low, lower than that of E-PCA. Interestingly, we also discovered that the robot's behavior is qualitatively different with the addition of the wireless localization modality (on the square map, using the Augmented MDP). The contrast can be seen in Figures 10(a) and 10(b). Essentially, the robot is more confident with wireless (since it receives some sensory information even in the unoccupied center of the map), resulting in straight-line behavior. This result satisfies the intuitive expectation that if a localization system is really good, then we need not concern ourselves with uncertainty when planning. As such, perhaps a less complex planner (such as A*) would be more appropriate here, nearly as effective at a much lower computational cost.
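Computing the six A-MDP features from a particle set might look like the sketch below. The report does not say how the angular mean is handled; using circular statistics for theta is our choice.

```python
import numpy as np

def amdp_features(particles):
    """particles: (P, 3) array of (x, y, theta).
    Returns [mean_x, mean_y, mean_theta, var_x, var_y, var_theta]."""
    x, y, th = particles.T
    # Circular statistics for the wrap-around angle.
    c, s = np.cos(th).mean(), np.sin(th).mean()
    mean_th = np.arctan2(s, c)
    var_th = 1.0 - np.hypot(c, s)   # circular variance in [0, 1]
    return np.array([x.mean(), y.mean(), mean_th,
                     x.var(), y.var(), var_th])
```

A naive arithmetic mean of angles near 0 and 2π would land near π; the circular mean avoids exactly the wrap-around artifacts discussed in the Phenomena section.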


Figure 10: Example Augmented MDP executions (a) without wireless, and (b) with wireless. The robot begins in upper left corner and the goal is in lower left corner (3σ contours in blue). In (a), observe that the robot follows a “coastal” path to avoid getting lost. Also note that the robot (intelligently) turns toward the corner in the beginning and toward the wall in the upper center as it begins to get lost. However, in (b), the robot goes essentially straight for the goal, knowing that, thanks to wireless, it won’t get too lost in the middle.


5.4 Phenomena

The robot prefers to turn left. Perhaps this is due to wrap-around of theta: while we account for said wrap-around in our nearest-neighbor calculations, we used MATLAB's built-in kmeans function, which does not have such an option. The robot also prefers move-and-turn motions rather than moving straight, resulting in zig-zag paths. Steps are normalized to eliminate any distance benefit to diagonal motion, but perhaps there are localization benefits to turning while moving (e.g., viewing different landmarks at each timestep).
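The wrap-around handling we use in the nearest-neighbor step (and which a stock kmeans lacks) amounts to a wrapped angular difference; a sketch, with an illustrative position/angle weighting:

```python
import numpy as np

def ang_diff(a, b):
    """Smallest signed difference between two angles, handling wrap-around."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def pose_distance(p, q, w_theta=1.0):
    """Distance between poses (x, y, theta) with a wrapped angular term."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return np.sqrt(dx * dx + dy * dy + w_theta * ang_diff(p[2], q[2]) ** 2)
```

Without the wrap, 0.1 rad and 2π - 0.1 rad look maximally far apart, which is one way a clustering step could develop a systematic turning bias.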

5.5 Comparing Methods

To fairly compare the two different methods of obtaining low-dimensional beliefs, we transformed them both back to a high-dimensional belief state. Figure 11 illustrates this conversion. We collected a set of high-dimensional (1800-state) beliefs through the particle filter by random walk, and used this set as a ground-truth comparison for both the E-PCA and Augmented MDP methods. E-PCA projected and reconstructed those beliefs, similar to the experiments described in the E-PCA Compression section, while the Augmented MDP's low-dimensional belief (6 features) had to be converted back to the 1800-state belief for comparison. Based on our comparison, the mean and variance features used by the A-MDP were the better estimator: the KL divergence between ground truth and E-PCA reconstructed beliefs was 0.5019, whereas the KL divergence between ground truth and the mean and variance features of the A-MDP was only 0.0995. The KL divergence between the two methods directly was 0.6126. The mean and variance method was very good at predicting the theta of the robot, whereas E-PCA was often less accurate for theta, as mentioned in the previous section. One such result is shown in Figure 12, where the E-PCA reconstruction for (x,y) is very good but theta is off in comparison to the mean and variance compression method. There are a few likely reasons for this. Theta is more difficult for E-PCA to regress accurately since its scale, in comparison to the (x,y) pose, is very small; thus small deviations affect theta greatly. Another reason is that everything is run through a simulation that approximates using Gaussian distributions: the results tended to be unimodal, so a mean and variance can quite accurately describe these beliefs. E-PCA would likely be a better choice for a real-world experiment with possibly multimodal


belief features in the environment. Also, mean and variance use a basis of 6, whereas E-PCA uses a basis of 4. This direct comparison is likely not fair, but intuitively, more basis functions should be able to encode information more accurately.
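The KL-divergence numbers above compare discrete beliefs over the same 1800 states; the computation, with a small epsilon floor (our choice) to handle zero entries, is:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete beliefs over the same states."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()   # renormalize after flooring
    return float(np.sum(p * np.log(p / q)))
```

Note that KL divergence is asymmetric, which is why the ground-truth belief is consistently used as the first argument in the comparisons above.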

5.6 Unexpected Challenges

5.6.1 Hardware

The IMP experienced a hardware failure with its laser connection soon after we began using it (although we didn't discover the failure until we had wasted a great deal of time). We then tried to use SICK-equipped Pioneer robots, but their versions of Carmen were incompatible with their hardware (hence, more wasted time). Finally, funding for the IMP's stereo camera was mysteriously delayed for several months; instead, we "borrowed" a stereo camera from one of Manuela Veloso's ER1 robots.

5.6.2 Software

Kismet did not support several of the wireless cards we had; eventually, we were able to find a system with a supported card, but only after a kernel rebuild and various module patches. Additionally, Kismet had no option to log signal strength data over time (it only kept the maxima and minima), so we had to hack the Kismet source code to log these things, as well as to communicate with our main MATLAB code via TCP/IP.

5.6.3 Project Inter-dependencies

The visual odometry, necessary for all other components, was part of a Computer Vision class project; the localization, a dependency of the planning component, was part of a Statistical Techniques in Robotics class project; and the planning component depended on all other components. Of course, the due dates for these projects were in reverse-dependency order.

5.6.4 Miscellaneous

One wireless card had inconsistent performance; we found it was physically split down the side, and it performed better when we pinched it together. We couldn't log in to our IMP-interface laptop when it was off the CMU network, which it had to be to communicate with the IMP. When we asked the Help Desk to fix it, not only did they fail to fix this problem, but they succeeded at deleting our MATLAB installation.

Figure 11: How we compared the different methods of compression. [Diagram: the particle set yields an original belief point B^1800; E-PCA compresses it to b^4 and A-MDP to b^6; both are then reconstructed back to 1800-dimensional beliefs for comparison.]


Figure 12: These images show a comparison between belief points using the E-PCA and mean-and-variance compression methods. The top-left figure shows E-PCA reconstructed beliefs, mean-and-variance reconstructed beliefs, and ground truth for the pose (x,y,theta). The other three images show the position (x,y) of the robot on the map according to E-PCA, mean-and-variance, and ground truth.


MATLAB is slow, and modifying algorithms to be "matricized" to combat this slowness takes a lot of programming time and can (and did) introduce bugs. Alternatively, we could have used C++, but then we'd have had to fight segfaults and makefiles, and couldn't have visualized our results as easily. None of these projects is well aligned with our individual research directions; so, although fascinating, they had to compete rather than cooperate with our other research. And, worst of all, constant complaining required a good deal of precious time.

6 Conclusions

In our simulated world, we have come to the following conclusions:

1. The particle filter performs better using stereo and wireless sensing modalities than using either alone.

2. The Augmented MDP method outperforms the E-PCA POMDP method. Due to its simplicity and faster performance, the Augmented MDP is a better option when the belief distribution is roughly Gaussian (i.e., unimodal).

3. With the use of both stereo and wireless sensing modalities, the robot chose a path resembling essentially what a classical planner would have chosen. Hence, if the state estimator never veers too far from ground truth, the speed and simplicity of classical planners (e.g., A*) would make them preferable to the techniques we tried here.

Whether or not these conclusions hold true on the real robot is yet to be discovered; however, we expect qualitatively similar results.


References

[1] Berna, M. et al., "A Learning Algorithm for Localizing People Based on Wireless Signal Strength That Uses Labeled and Unlabeled Data," IJCAI, 2003.

[2] Roy, N. et al., "Coastal Navigation - Mobile Robot Navigation with Uncertainty in Dynamic Environments," ICRA, Detroit, MI, May 1999, vol. 1, pp. 35-40.

[3] Roy, N. and G. Gordon, "Exponential Family PCA for Belief Compression in POMDPs," NIPS, vol. 15, 2002.

[4] DeSouza, G. and A. Kak, "Vision for mobile robot navigation: A survey," IEEE Trans. PAMI, vol. 24, no. 2, pp. 237-267, 2002.

[5] Smailagic, A. et al., "Location sensing and privacy in a context aware computing environment," Proc. Pervasive Computing, 2001.

[6] Langendoen, K. and N. Reijers, "Distributed localization in wireless sensor networks: a quantitative comparison," Computer Networks, vol. 43, no. 4, pp. 499-518, 2003.

[7] Campbell, J., R. Sukthankar, and I. Nourbakhsh, "Visual Odometry Using Commodity Optical Flow," AAAI (demo), 2004.

[8] Thrun, S., W. Burgard, and D. Fox, Probabilistic Robotics, The MIT Press, Cambridge, Massachusetts, 2005.


7 Appendix A: System Overview

Figure 13: System overview. [Diagram: the encoders and ladar feed Carmen, and the wireless card feeds Kismet; a collater produces the occupancy map, range data, and signal table consumed by the stereo and wireless observation models, while visual odometry from stereo vision drives the motion model. All of these feed the particle filter; belief points are projected onto the E-PCA basis {b'}, a sampled POMDP instance is solved, and nearest-neighbor lookup against the policy π(b') → a yields the action.]
