Kalman Filter for Mobile Robot Localization

Who? From?

Paulo Pinheiro (T.A.) / Eleri Cardozo (Professor), 2014 Class of IA368W: Métodos Estocásticos em Robótica Móvel (UNICAMP). [email protected]

When?

May 15, 2014

Instructions

THIS GUIDE IS INTENDED TO HELP THE IA368W CLASS DEVELOP THE KALMAN FILTER. The realm is the mobile robot localization problem. The equations are colorful for easy understanding. We assume you have a robot with a laser and odometry information. The odometry must return [x, y, th]. The laser must return a set of detected features. For each laser reading, your feature detection algorithm should give you [Lx, Ly, lx, ly], where Lx, Ly is the real pose of the landmark and lx, ly is the pose of that landmark as estimated by the robot.

The algorithm should work fine even with a single feature.

For each element in the equations, we will show an example of the expected output, so you can be aware of what kind of matrix or value to expect. If you find an issue or typo in this presentation, or want to make a suggestion, let me know. [email protected]

Algorithm - This is all you need to get it done!

while true do
    // Reading robot's pose
    PoseR = GET[Pose.x; Pose.y; Pose.th]
    // Prediction step
    Σ¯t = (Gt ∗ Σt−1 ∗ GtT) + (Vt ∗ Σ∆t ∗ VtT) + Rt
    // Update step
    feature = FeatureDetection;
    Kt = Σ¯t ∗ HtT ∗ (Ht ∗ Σ¯t ∗ HtT + Qt)−1
    INOVA = (zsensort − zrealt)
    Xt = X¯t + Kt ∗ INOVA
    PoseR = Xt and PUT(PoseR)
    Σt = (I − Kt Ht)Σ¯t
    sleep(Dt)
end
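As a roadmap, here is a minimal Matlab sketch of how this loop could be organized. It assumes host, Dt and the wheel velocities Vel are already set up, and uses the GET/PUT and FeatureDetection calls from the slides; buildPrediction and buildMeasurement are hypothetical helpers standing in for the per-equation steps detailed next.

% Sketch only - each step is expanded in the following slides.
Sigma = zeros(3);                                   % pose covariance, starts as Sigma_0
while true
    % Equation 1 - read the robot's pose
    Pose = GET([host '/motion/pose']);
    Pose.th = mod(Pose.th*pi/180, 2*pi);
    PoseR = [Pose.x; Pose.y; Pose.th];

    % Equation 2 - prediction step
    [Gt, Vt, SigmaDelta, Rt] = buildPrediction(PoseR, Vel, Dt);   % hypothetical helper
    SigmaBar = Gt*Sigma*Gt' + Vt*SigmaDelta*Vt' + Rt;

    % Equations 3-5 - update step
    feature = FeatureDetection;
    [Ht, Qt, INOVA] = buildMeasurement(feature, PoseR);           % hypothetical helper
    Kt = SigmaBar*Ht' / (Ht*SigmaBar*Ht' + Qt);

    % Equations 6-7 - correct the pose and the covariance
    PoseR = PoseR + Kt*INOVA;
    NewPose.x = PoseR(1); NewPose.y = PoseR(2); NewPose.th = PoseR(3)*180/pi;
    PUT([host '/motion/pose'], NewPose);
    Sigma = (eye(3) - Kt*Ht) * SigmaBar;

    % Equation 8 - sleep for the next round
    pause(Dt);
end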

Equation 1 - Getting the pose of the robot

PoseR = GET[Pose.x; Pose.y; Pose.th] The odometry must return the pose of the robot. It's OK if it is not very precise.

Matlab example:
Pose = GET([host '/motion/pose']);
Pose.th = mod(Pose.th*pi/180, 2*pi);
PoseR = [Pose.x; Pose.y; Pose.th];

Example of what you should get:

PoseR = (2373, 1608, 6)

Equation 2 - Starting the prediction stage.

Σ¯t = (Gt ∗ Σt−1 ∗ GtT) + (Vt ∗ Σ∆t ∗ VtT) + Rt. Σ¯t is the uncertainty over the pose.

Equation 2 Σ¯t = (Gt ∗ Σt−1 ∗ GtT ) + (Vt ∗ Σ∆t ∗ VtT ) + Rt

Gt

Gt = [ 1   0   −∆st·sin(θt−1 + ∆θt/2)
       0   1    ∆st·cos(θt−1 + ∆θt/2)
       0   0    1 ]

Vt

Vt = [ a        c
       b        d
       1/(2b)   −1/(2b) ]

a = (1/2)·cos(θt−1 + ∆θt/2) − (∆st/(4b))·sin(θt−1 + ∆θt/2)
b = (1/2)·sin(θt−1 + ∆θt/2) + (∆st/(4b))·cos(θt−1 + ∆θt/2)
c = (1/2)·cos(θt−1 + ∆θt/2) + (∆st/(4b))·sin(θt−1 + ∆θt/2)
d = (1/2)·sin(θt−1 + ∆θt/2) − (∆st/(4b))·cos(θt−1 + ∆θt/2)

New variables: θt−1? ∆θt? ∆st? and b? - Next slide.

A quick stop to define the new variables...

θt−1

Robot’s current theta. PoseR(3).

∆θt

∆θt = (Vel.right ∗ Dt − Vel.left ∗ Dt)/(2 ∗ b).

∆st

∆st = (Vel.right ∗ Dt + Vel.left ∗ Dt)/2.

b

The axis of the robot. Let's use b = 165.

Vel.right and Vel.left

Velocity of the right wheel and of the left wheel.

Dt

Time the loop will sleep. Let's use 2 seconds: Dt = 2.
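A Matlab sketch of these quantities and of the Gt and Vt matrices from the previous slide, assuming Vel.right and Vel.left hold the wheel velocities read from your robot (variable names just follow the slides' notation):

b  = 165;                                   % axis parameter b
Dt = 2;                                     % loop period in seconds
th = PoseR(3);                              % theta_{t-1}
ds  = (Vel.right*Dt + Vel.left*Dt)/2;       % delta s_t
dth = (Vel.right*Dt - Vel.left*Dt)/(2*b);   % delta theta_t

Gt = [1 0 -ds*sin(th + dth/2);
      0 1  ds*cos(th + dth/2);
      0 0  1];

% a, b, c, d from the Vt slide ('b' renamed bb so it does not clash with the axis parameter)
a  = 0.5*cos(th + dth/2) - (ds/(4*b))*sin(th + dth/2);
bb = 0.5*sin(th + dth/2) + (ds/(4*b))*cos(th + dth/2);
c  = 0.5*cos(th + dth/2) + (ds/(4*b))*sin(th + dth/2);
d  = 0.5*sin(th + dth/2) - (ds/(4*b))*cos(th + dth/2);

Vt = [a        c;
      bb       d;
      1/(2*b) -1/(2*b)];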

Continuing with Equation 2: Σ¯t = (Gt ∗ Σt−1 ∗ GtT) + (Vt ∗ Σ∆t ∗ VtT) + Rt. The initial pose covariance Σt−1 (i.e., Σ0) is a 3x3 matrix of zeros. It will be updated by the last equation of the algorithm.

Σt−1

Σt−1 = [ 0  0  0
         0  0  0
         0  0  0 ]

Σ∆t

Σ∆t = [ Ks·|∆st|   0
        0          Kθ·|∆θt| ]

Ks

Ks = 0.1

Kθ

Kθ = 0.1



Both Ks and Kθ are good values for the robot used in the classes.

Equation 2

Σ¯t = (Gt ∗ Σt−1 ∗ GtT ) + (Vt ∗ Σ∆t ∗ VtT ) + Rt

Rt

Rt = [ σx²   0    0
       0     σy²  0
       0     0    σθ² ]

σx

Laser error for x: σx = 5

σy

Laser error for y: σy = 5

σθ

Laser error for theta: σθ = 1

All σ are good values for the robot used in our classes.
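Putting the prediction step together in Matlab (a sketch; Sigma is the previous Σt−1, starting as zeros(3), and Gt, Vt, ds, dth come from the previous sketch):

Ks  = 0.1;  Kth = 0.1;
SigmaDelta = [Ks*abs(ds)   0;
              0            Kth*abs(dth)];          % 2x2 motion noise

sx = 5;  sy = 5;  sth = 1;
Rt = diag([sx^2, sy^2, sth^2]);                    % 3x3 additive noise

SigmaBar = Gt*Sigma*Gt' + Vt*SigmaDelta*Vt' + Rt;  % predicted pose covariance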

Equation 2: Σ¯t = (Gt ∗ Σt−1 ∗ GtT) + (Vt ∗ Σ∆t ∗ VtT) + Rt. You should get something like this:

Example G:

G = [ 1  0  0
      0  1  0
      0  0  1 ]

Example Σt−1:

Σt−1 = [ 0  0  0
         0  0  0
         0  0  0 ]

Example Vt:

V = [  0.4999194    0.4999194
      −0.0089757   −0.0089757
       0.0030303   −0.0030303 ]

Example Σ∆t (robot is not moving):

Σ∆t = [ 0  0
        0  0 ]

Equation 2

Σ¯t = (Gt ∗ Σt−1 ∗ GtT) + (Vt ∗ Σ∆t ∗ VtT) + Rt. You should get something like this (continued):

Example Rt:

Rt = [ 25   0   0
        0  25   0
        0   0   1 ]

Example Σ¯t:

Σ¯t = [ 25   0   0
         0  25   0
         0   0   1 ]

Since this is the 1st run, Σ¯t has the same value as Rt.

Equation 3 - Feature Detection.

feature = FeatureDetection; The FeatureDetection function should return the landmark pose captured by the laser (lx, ly) and the corresponding real landmark pose (Lx, Ly).

Example of what you could get:

feature = (lx, ly, Lx, Ly)
feature = (4704, 665, 4700, 670)

Equation 4 - Calculating the factor Kt (Update stage)

Kt = Σ¯t ∗ HtT ∗ (Ht ∗ Σ¯t ∗ HtT + Qt )−1

Be prepared! The size of the matrices H and Q will depend on the number of features you’ve got. Let’s take a look...

Equation 4 - Calculating the factor Kt .

Kt = Σ¯t ∗ HtT ∗ (Ht ∗ Σ¯t ∗ HtT + Qt )−1

Ht

For each feature, these 2 rows are required in H:

Ht = [ −(Lx − xt)/√q    −(Ly − yt)/√q     0
        (Ly − yt)/q     −(Lx − xt)/q     −1 ]

q, xt and yt

q = (Lx − xt)² + (Ly − yt)²
xt is the x of the robot pose (PoseR(1)); yt is the y of the robot pose (PoseR(2)).
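A Matlab sketch of the two H rows for one feature, using Lx, Ly from FeatureDetection and the robot pose in PoseR (for several features, stack one such 2x3 block per feature):

xt = PoseR(1);  yt = PoseR(2);
q  = (Lx - xt)^2 + (Ly - yt)^2;

Ht = [ -(Lx - xt)/sqrt(q)   -(Ly - yt)/sqrt(q)    0;
        (Ly - yt)/q         -(Lx - xt)/q         -1 ];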

Equation 4 - Calculating the factor Kt Kt = Σ¯t ∗ HtT ∗ (Ht ∗ Σ¯t ∗ HtT + Qt )−1

Qt

For each feature, 2 rows (and columns) are required in Q. Example for 1 feature:

Qt = [ σld²   0
       0      σlθ² ]

Qt

Example for 2 features:

Qt = [ σld²   0      0      0
       0      σlθ²   0      0
       0      0      σld²   0
       0      0      0      σlθ² ]

σld

σld = 0.5 (laser distance accuracy related)

σlθ

σlθ = 0.1 (laser angular accuracy related)

Both σlθ and σld are good values for our robot.

Σ¯t

We already got it from Equation 2.
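With Ht, Qt and Σ¯t in hand, Kt follows directly. A Matlab sketch (kron just repeats the 2x2 noise block once per feature; N is the number of features in the current reading):

s_ld  = 0.5;  s_lth = 0.1;
N  = 1;                                        % number of features
Qt = kron(eye(N), diag([s_ld^2, s_lth^2]));    % 2N x 2N measurement noise

Kt = SigmaBar*Ht' / (Ht*SigmaBar*Ht' + Qt);    % Kalman gain, 3 x 2N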

Equation 4 - Calculating the factor Kt: Kt = Σ¯t ∗ HtT ∗ (Ht ∗ Σ¯t ∗ HtT + Qt)−1. For 1 feature, you should get something like this:

Example Ht:

Ht = [ −0.82887   −0.55944    0.00000
        0.00025   −0.00037   −1.00000 ]

Example Qt:

Qt = [ 0.25   0
       0      0.01 ]

Example Kt for 1 feature:

Kt = [ 0.71   −0.01
       0.68    0.23
       0      −0.00004 ]

Example Kt for 2 features:

Kt = [ −0.55    −0.004   −0.007    0.012
       −0.8     −0.001   −0.007    0.001
        0.003   −0.34    −0.99    −0.033 ]

Equation 5 - Calculating the Innovation value

INOVA = [zsensort − zrealt ]

zsensort

zsensort is related to the information the feature detection gave you:

zsensort = [ zrange
             zbearing ]

zrange

zrange = √((lx − PoseR(1))² + (ly − PoseR(2))²)

zbearing

zbearing = atan2((ly − PoseR(2)), (lx − PoseR(1))) − PoseR(3)

lx and ly

lx and ly are the values returned from FeatureDetection. They correspond to the point where the laser thinks the landmark is, not the real pose of the landmark.

Equation 5 - Calculating the Innovation value

INOVA = [zsensort − zrealt ]

zrealt

zrealt is related to the real landmark corresponding to zsensort:

zrealt = [ Lrange
           Lbearing ]

Lrange

Lrange = √((Lx − PoseR(1))² + (Ly − PoseR(2))²)

Lbearing

Lbearing = atan2((Ly − PoseR(2)), (Lx − PoseR(1))) − PoseR(3)

Lx and Ly

Lx and Ly are the values returned from FeatureDetection. They compose the real pose of the corresponding landmark.
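A Matlab sketch of the innovation for one feature, with lx, ly, Lx, Ly taken from FeatureDetection (for several features, stack one 2x1 block per feature):

zsensor = [sqrt((lx - PoseR(1))^2 + (ly - PoseR(2))^2);
           atan2(ly - PoseR(2), lx - PoseR(1)) - PoseR(3)];

zreal   = [sqrt((Lx - PoseR(1))^2 + (Ly - PoseR(2))^2);
           atan2(Ly - PoseR(2), Lx - PoseR(1)) - PoseR(3)];

INOVA = zsensor - zreal;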

Equation 5 - Calculating the Innovation value

INOVA = [zsensort − zrealt ]. You should get something like this (for each feature, a 2x1 block is added):

Example INOVA for 1 feature:

INOVA = [ 3.00
          0.000003 ]

Example Kt ∗ INOVA for 1 feature:

Kt ∗ INOVA = (−2, 2, 0)

This is the pose difference you must add to the robot's pose.

Equation 6 - Updating the new pose

PoseR = Xt + Kt ∗ INOVA and PUT (PoseR)

PoseR = Xt + Kt ∗ INOVA

Matlab example:
% PoseR is the [x; y; th] column vector read in Equation 1
PoseR = PoseR + Kt * INOVA;
NewPose.x = PoseR(1);
NewPose.y = PoseR(2);
NewPose.th = PoseR(3) * 180/pi;
PUT([host '/motion/pose'], NewPose);

Equation 7 - Updating the pose covariance Σt Σt = (I − Kt Ht )Σ¯t

I

I = [ 1  0  0
      0  1  0
      0  0  1 ]

Kt

We've already got it. See Equation 4.

Ht

We've already got it. See Equation 4.

Σ¯t

We've already got it. See Equation 2.
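In Matlab this final update is a one-line sketch; the result becomes Σt−1 for the next loop iteration:

Sigma = (eye(3) - Kt*Ht) * SigmaBar;    % updated pose covariance Σt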

Example Σt:

Σt = [  1.3038e−01    5.2667e−02   −2.7058e−05
        5.2667e−02    2.8502e−01   −1.0314e−04
       −2.7058e−05   −1.0314e−04    3.3223e−03 ]

Equation 8 - Sleeping for the next round

sleep(Dt): Sleep for Dt seconds. Let's use Dt = 2. Done.

Then, do it all again.

END.
