A Stream Field Based Partially Observable Moving Object Tracking Algorithm
Kuo-Shih Tseng
Robotics Control Technology Dept., Intelligent Robotics Technology Division, Mechanical Industry Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
seabook@itri.org.tw

Abstract— Self-localization and tracking of a moving object are key technologies for interactive service robot applications. Most tracking algorithms focus on how to correctly estimate the acceleration, velocity, and position of a moving object based on prior states and sensor information. What has not been studied so far is tracking a partially observable moving object, which is often hidden from the robot's laser view. Applying traditional tracking algorithms in this case leads to divergent estimates of the object's position. Therefore, in this paper, we propose a novel laser-based partially observable moving object tracking and self-localization algorithm. We adopt stream functions and a Rao-Blackwellised particle filter (RBPF) to predict where the partially observable moving object will go among previously mapped environmental features. Moreover, the robot can localize itself and track such a moving object according to the stream field. Our experimental results show that the proposed algorithm can localize the robot and track the partially observable moving object effectively.

acceleration model [4], but its signals and noise are limited to linear Gaussian and single-hypothesis assumptions. In contrast, the particle filter can track objects with nonlinear probability distributions and multiple hypotheses, although the price is high computational complexity [5], [6]. Moreover, SLAMMOT can simultaneously estimate the robot position, the map, and the moving objects using EKF algorithms with a laser range finder [7], [8].


Keywords— moving object tracking, stream field, localization, Kalman filter, RBPF.

I. INTRODUCTION

Navigation in a static environment is an essential technology for mobile robots. The four major areas of the navigation problem are localization, mapping, obstacle avoidance, and path planning [1]. When the environment becomes dynamic, navigation becomes an interactive problem that includes leading, following, intercepting, and avoiding people [2]. Accordingly, object tracking is a key function for a robot to accomplish such tasks. For the following task, a robot has to track and follow a moving object without losing it. Therefore, a robot should be able to track the moving object, follow it, localize itself, and avoid obstacles in a previously mapped environment. In previous works, most tracking algorithms aim at correctly estimating the acceleration, velocity, and position of moving objects based on past states and sensor information [3]. For example, the Kalman filter is adopted to track moving objects with a constant velocity model and/or constant

978-1-4244-2287-6/08/$25.00 © 2008 IEEE

Fig. 1. The dashed line is the scan range of the laser, the solid line is the real scan range of the laser, and the arrows are the scanned points of the laser. (a) Observable moving object tracking. (b) Unobservable moving object tracking.

The conditional particle filter can concurrently estimate the robot position and people's motion in a previously mapped environment [2]. Most of these tracking algorithms employ IMM [9] with a Kalman filter or particle filter to fuse multiple models. In that case, the Kalman filter or particle filter predicts an inflated Gaussian distribution or dispersed particles without corrections from sensor information. However, these methods are effective only if the object is observable (Fig. 1(a)) [2], [4]; they cannot correct the predicted object motion when the moving object is unobservable, as shown in Fig. 1(b). In this paper, we call this the partially observable moving object tracking (POMOT) case, because the robot can still observe the

environment but cannot observe the hidden object. In [10], the authors propose a map-based tracking algorithm using an RBPF to concurrently estimate the robot position and a ball's motion. It models the physical interaction between walls and the ball even when the ball is unobservable (Fig. 1(b)), but it can only track a passive object. For active objects, a walking person is hard to track when he is unobservable. Some algorithms can plan with information from a previously mapped environment and the robot position. Information gain-based exploration can concurrently estimate the map and robot position while finding the optimal goal and moving toward it [11]. Dynamic action spaces can be utilized to search for a moving object that goes to one of several known destinations [12]. These algorithms can search for a moving object or a goal under certain assumptions, but they cannot localize the robot and track a moving object when the moving object is unobservable. Therefore, in this paper, we propose a novel laser-based self-localization and partially observable moving object tracking (POMOT) algorithm. Because the motion of a moving object usually depends on the environment, we adopt the stream field as the motion model of a Rao-Blackwellised particle filter (RBPF) to predict object motion. With the stream field, we model the interaction between the virtual goal of a moving object, the environmental features, and the moving object itself. Moreover, the robot can localize itself and track such a moving object according to a virtual stream field. The rest of the paper is organized as follows. Section II describes path planning and the stream field we adopt for object tracking. In Section III, our proposed tracking algorithm, which combines the stream field and the RBPF, is presented. Our self-localization and object tracking algorithm is then presented in Section IV. Experimental results are given in Section V, and Section VI concludes the paper.

∂φ/∂x = ∂ψ/∂y,  ∂φ/∂y = −∂ψ/∂x. (2)

The velocities u along the X-axis and v along the Y-axis can then be derived from the stream function as

u = ∂ψ(x, y)/∂y,  v = −∂ψ(x, y)/∂x. (3)

Simple flows include the uniform flow, source, sink, and free vortex. A complex potential can be formed from these simple flows in various combinations. In this paper, we use a sink and a doublet flow, which combines a sink and a source flow:

ψ_sink = C tan⁻¹(y/x), (4)
ψ_source = −C tan⁻¹(y/x), (5)
ψ_doublet = −C tan⁻¹(y/(x² + y²)), (6)

where C is a constant proportional to the flow velocity.
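As an illustrative sketch (not the authors' code), the simple-flow stream functions (4)-(6) can be written directly; `np.arctan2` is used as the two-argument form of tan⁻¹, and the flow is assumed centered at the origin:

```python
import numpy as np

def psi_sink(x, y, C=1.0):
    # Eq. (4): stream function of a sink flow
    return C * np.arctan2(y, x)

def psi_source(x, y, C=1.0):
    # Eq. (5): a source is the sign-flipped sink
    return -C * np.arctan2(y, x)

def psi_doublet(x, y, C=1.0):
    # Eq. (6): doublet flow (a sink and a source merged in the limit)
    return -C * np.arctan2(y, x**2 + y**2)
```

With C = 1, psi_sink(1, 1) evaluates to the polar angle π/4, and psi_source is its negation everywhere.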


II. STREAM FIELD MOTION MODEL FOR POMOT

Potential fields and stream fields have been widely adopted for motion planning and obstacle avoidance in mobile robotics [13-15]. They are based on physical axioms through a virtual field rather than analytic algorithms. The advantage of this approach is its efficiency; the disadvantage is that it is difficult to control the robot exactly. In the following, we introduce how to carry out motion planning and our proposed motion model for an estimator using the stream function.

A. Motion Planning Using Stream Function

For an irrotational and incompressible flow, there exists a complex potential which consists of a potential function φ(x) and a stream function ψ(x). The complex potential is defined as

w = φ + iψ = f(z), z = x + iy. (1)


Fig. 2. The circle G is the goal, the circle S is the starting point, and the solid circle is an obstacle. (a) Obstacle avoidance. (b) Stream line.

There are four major methods to define complex potentials for real environments: simple flows, the use of specific theorems, conformal mapping, and the panel method [16]. We adopt specific theorems to construct the stream function for motion planning. As shown in Fig. 2, we assume the robot will move toward the goal from the starting point, with an obstacle between them. In this case, we can model the environment with the goal as a sink flow and the obstacle as a doublet flow. According to the circle theorem, we obtain the stream field consisting of a sink flow ψ_sink(x, y) and a doublet flow ψ_doublet(x, y):

ψ(x, y) = ψ_sink(x, y) + ψ_doublet(x, y)
        = −C tan⁻¹((y − y_s)/(x − x_s))
        + C tan⁻¹( [a²(y − y_d)/((x − x_d)² + (y − y_d)²) + (y_d − y_s)] / [a²(x − x_d)/((x − x_d)² + (y − y_d)²) + (x_d − x_s)] ), (7)

where (x_s, y_s) is the center of the sink, (x_d, y_d) is the center of the doublet, a is the radius of the doublet, and C is a constant proportional to the flow velocity. More details on the stream field derived from the circle theorem can be found in [15]. Finally, when the robot position, goal, and obstacle are known, the desired robot velocities can be computed by (3) and the heading θ_d by

θ_d = tan⁻¹( (−∂ψ(x, y)/∂x) / (∂ψ(x, y)/∂y) ). (8)
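A hedged sketch of (7) and (8): the combined stream field of a sink at the goal and a doublet at the obstacle, with the desired heading computed from a numerical gradient. The function names and the finite-difference gradients are our own illustration, not the authors' implementation:

```python
import math

def psi(x, y, goal, obstacle, a=0.5, C=1.0):
    # Eq. (7): sink flow at the goal plus a doublet at the obstacle.
    xs, ys = goal
    xd, yd = obstacle
    r2 = (x - xd) ** 2 + (y - yd) ** 2
    num = a ** 2 * (y - yd) / r2 + (yd - ys)
    den = a ** 2 * (x - xd) / r2 + (xd - xs)
    return -C * math.atan2(y - ys, x - xs) + C * math.atan2(num, den)

def heading(x, y, goal, obstacle, h=1e-6):
    # Eq. (8): theta_d = atan2(-dpsi/dx, dpsi/dy), using eq. (3) for (u, v);
    # partial derivatives approximated by central differences.
    dpsi_dx = (psi(x + h, y, goal, obstacle) - psi(x - h, y, goal, obstacle)) / (2 * h)
    dpsi_dy = (psi(x, y + h, goal, obstacle) - psi(x, y - h, goal, obstacle)) / (2 * h)
    return math.atan2(-dpsi_dx, dpsi_dy)
```

For a robot at (10, 0), a goal at the origin, and an obstacle at (5, 5), the sink term dominates and the computed heading points back toward the goal (close to ±π).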

predicting efficiently even when the object is unobservable. However, the virtual goal position of a partially observable moving object is a multi-hypothesis problem. To solve it, we adopt a particle filter to estimate N hypotheses of the virtual goal position. In the next section, we discuss how to track a moving object using the stream field based motion model within a particle filter.

III. POMOT USING STREAM MOTION MODEL AND RBPF

In order to achieve efficient motion prediction, we integrate the stream field based motion model into our proposed tracking algorithm. The proposed graphical model is shown in Fig. 4(b), and it is quite different from the traditional tracking algorithms in Fig. 4(a). In Fig. 4(a), the prediction stage of a traditional tracking algorithm diverges without effective measurements. In contrast, our proposed RBPF algorithm using the stream field based motion model achieves effective prediction through a virtual sink flow and doublets generated from obstacles, even without effective measurements (Fig. 4(b)). For the POMOT case, the particle filter is a good choice, since there are multiple hypotheses when the object is sheltered by the environment.


With that, the robot can realize real-time motion planning.

B. The Proposed Motion Model Using Stream Function

In probabilistic tracking algorithms, the motion model of the prediction stage is a key technique for maneuvering targets. However, in the unobservable case, the Kalman filter or particle filter predicts an inflated Gaussian distribution or dispersed particles without corrections from sensor information. Therefore, we propose a stream field based motion model which can predict efficiently according to known map features and a virtual goal. In object tracking algorithms, the position at time t is

x_t = f(x_{t−1} + u_{t−1}), (9)

where u_{t−1} is the object motion at time t − 1, and it is the most difficult term to estimate. As shown in Fig. 3(b), a robot cannot track the moving object efficiently when it is unobservable. Thus we assume that the object will avoid a known obstacle and move toward a virtual goal (Fig. 3(a)). The stream field can be calculated by (7). A virtual sink and a doublet constructed from the known environment generate a stream field, and the object motion can be predicted by

u_{t−1} = [u_{t−1}; v_{t−1}]
        = [ ∂(ψ_sink(x_{t−1}, y_{t−1}) + ψ_doublet(x_{t−1}, y_{t−1}))/∂y_{t−1} ;
           −∂(ψ_sink(x_{t−1}, y_{t−1}) + ψ_doublet(x_{t−1}, y_{t−1}))/∂x_{t−1} ]. (10)

With this method, we predict the object motion from a virtual goal and known obstacles rather than from the estimated velocity directly. The advantage of the stream field motion model is that it can keep

Fig. 3. The circle G is the goal, the circle S is the starting point, and the solid circle is an obstacle. (a) Virtual motion model. (b) Real environment.
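The prediction step (10) only needs the partial derivatives of the combined stream function at the previous object position. A minimal sketch using central differences over any callable ψ(x, y) (a hypothetical helper, not the paper's code):

```python
def predict_motion(x_prev, y_prev, psi, h=1e-6):
    # Eq. (10): u = dpsi/dy and v = -dpsi/dx, evaluated at (x_{t-1}, y_{t-1})
    # by central differences on the stream function psi.
    u = (psi(x_prev, y_prev + h) - psi(x_prev, y_prev - h)) / (2 * h)
    v = -(psi(x_prev + h, y_prev) - psi(x_prev - h, y_prev)) / (2 * h)
    return u, v
```

For a uniform flow ψ(x, y) = y, this yields u = 1 and v = 0, i.e., motion along the X-axis, matching (3).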

The particle filter has been widely used for object tracking. Its advantage is that it can predict and correct with arbitrary nonlinear distributions and multiple hypotheses. However, its major disadvantages come from its assumptions. First, it is hard to predict multiple hypotheses with an accurate probability distribution. Second, the complexity of the particle set is exponential in the number of tracked variables. A central problem of object tracking is how to estimate the object motion accurately, which is illustrated as follows. We assume that x_t is the object state and z_t is the measurement data at time t. The particle filter estimates the states of moving objects in prediction and correction stages. The prediction stage samples a set of particles

x_t^i ~ q(x_t^i | x_{t−1}^i, z_t), (11)

where x_t^i is the particle predicted according to the proposal distribution q(x_t^i | x_{t−1}^i, z_t). The correction stage resamples x_t^i according to the weighting w_t^i given by (12). However, it is not efficient to sample the full state S_t^i:

S_t^i = {x_t^i, y_t^i, G_{x,t}^i, G_{y,t}^i, C_t^i, D_{x,t}^0, D_{y,t}^0, D_{x,t}^1, D_{y,t}^1, …}.

In order to sample efficiently, the state is divided into object states O_t^i, goal states G_t^i, and doublets D_t. These three groups are denoted by

O_t^i = {x_t^i, y_t^i}, G_t^i = {G_{x,t}^i, G_{y,t}^i, C_t^i}, D_t = {D_{x,t}^0, D_{y,t}^0, …}.
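The decomposition above can be mirrored in a particle structure. The field names below are our own illustration of one hypothesis, not the authors' data layout:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StreamParticle:
    # One RBPF hypothesis: object state O_t^i and goal state G_t^i.
    O: Tuple[float, float]           # object position (x, y)
    G: Tuple[float, float, float]    # virtual goal (Gx, Gy, C)
    w: float = 1.0                   # importance weight

# The doublets D_t come from the known map, so they are fixed and
# shared by all particles rather than sampled per particle.
doublets: List[Tuple[float, float]] = [(2.0, 3.0), (4.0, 1.0)]
```

Keeping D_t out of the sampled state is what makes the decomposition efficient: only the object and goal states vary per particle.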


In a known feature map, the doublets are fixed, so D_t is an independent term. The justification for this decomposition follows from the factorization of the probability

P(S_{1:t} | z_{1:t}) = P(O_{1:t}, G_{1:t} | z_{1:t}) = P(O_{1:t} | G_{1:t}, z_{1:t}) · P(G_{1:t} | z_{1:t}). (13)
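The factorization (13) is the ordinary chain rule of probability; a toy numerical check on a 2×2 joint distribution (the values are illustrative only):

```python
import numpy as np

# P(O, G | z) for two object states (rows) and two goal states (columns).
joint = np.array([[0.1, 0.2],
                  [0.3, 0.4]])

p_goal = joint.sum(axis=0)          # P(G | z): marginalize out O
p_obj_given_goal = joint / p_goal   # P(O | G, z): condition column-wise
```

Recombining the two factors, P(O | G, z) · P(G | z), reproduces the joint exactly, which is what licenses sampling G with a particle filter and handling O conditionally.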

We propose a Rao-Blackwellised particle filter for tracking, with the particle filter applied to estimate N hypotheses of the goal


states G_t^i and the Kalman filter applied to estimate N hypotheses of the object states O_t^i, because the stream field motion model is nonlinear. Goal states G_t^i are sampled first, and then the object states O_t^i are computed from the goal states G_t^i and the doublets D_t using (7). Finally, the stream states S_t^i are resampled according to the weighting w_t^i, which is computed from a Gaussian distribution. The


algorithm can predict the particle set O_t^i efficiently when the object is


unobservable. The algorithm is presented in the next section.


Fig. 4. (a) The DBN of traditional tracking using a particle filter. (b) The DBN of POMOT using the stream function. (c) The DBN of localization and POMOT.

w_t^i ∝ w_{t−1}^i · p(z_t | x_t^i) p(x_t^i | x_{t−1}^i) / q(x_t^i | x_{t−1}^i, z_t). (12)
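A sketch of the correction stage: the importance weight (12) followed by multinomial resampling. This is our illustration of standard particle filter machinery, not the authors' code:

```python
import numpy as np

def importance_weight(w_prev, p_meas, p_trans, q_prop):
    # Eq. (12): w_t ∝ w_{t-1} * p(z_t|x_t) * p(x_t|x_{t-1}) / q(x_t|x_{t-1}, z_t)
    return w_prev * p_meas * p_trans / q_prop

def resample(particles, weights, rng=None):
    # Discard low-weight samples by drawing N particles in proportion
    # to the normalized weights (multinomial resampling).
    rng = rng or np.random.default_rng(0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx]
```

When the proposal equals the transition model, (12) reduces to weighting by the measurement likelihood alone.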

IV. LOCALIZATION AND POMOT ALGORITHM

Effective prediction of a sheltered object's motion relies on robust self-localization and object tracking techniques. It is difficult to predict the object motion when the object is sheltered for a long time. To achieve effective prediction, the robot has to move toward the sheltered zone and obtain useful measurements related to the target object. In this paper, we integrate the POMOT and self-localization algorithms for robust localization and tracking. Our proposed graphical model is shown in Fig. 4(c). It can localize the robot and track the moving object through a virtual sink flow and the doublet flows generated from mapped features, even when the object is unobservable.

When the moving object is sheltered by the environment or by moving obstacles, the measurement data z_t is useless for corrections. In this POMOT case, an accurate proposal distribution is helpful for prediction without correction. In order to keep predicting the object position, we propose a stream field motion model composed of a virtual sink and doublets generated from previously mapped features.

Our self-localization and RBPF algorithm is summarized in Table I and described as follows. The inputs are the previous samples S_{t−1}, the measurement z_t, and the control information u_t (line 1). The samples S_{t−1} include the goals G_{t−1} and the positions O_{t−1} of the object. The algorithm predicts the robot position using the EKF (lines 3-4). All laser measurements are represented as line features using a least-squares line-fitting algorithm. If a feature is associated with a known landmark (line 7), the robot position is corrected using the EKF (lines 8-10). Otherwise, the feature is associated with dynamic objects (line 12). Goal states G_t^i are sampled first (line 15), and the N object states O_t^i are predicted according to the stream field motion model in (7) (line 16).

Table I: EKF Localization and RBPF algorithm

1.  Inputs:
    S_{t−1} = {G_{t−1}^(i), O_{t−1}^(i) | i = 1, …, N}   // posterior at time t−1
    u_{t−1}                                              // control measurement
    z_t                                                  // observation
2.  S_t := ∅                                             // initialize
3.  μ_t = g(u_t, μ_{t−1})                                // predict mean of robot position
4.  Σ_t = G_t Σ_{t−1} G_t^T + R_t                        // predict covariance of robot position
5.  for m := 1, …, M do                                  // EKF localization update
6.    for c := 1, …, C do
7.      if d_m^L < d_th^L do                             // z^c is landmark m
8.        K_t = Σ_t H_t^{cT} (H_t^c Σ_t H_t^{cT} + Q_t)^{−1}
9.        μ_t = μ_t + K_t (z^c − h(μ_t))
10.       Σ_t = (I − K_t H_t^c) Σ_t
11.     else do
12.       z_o^c = z^c                                    // z^c is a dynamic feature
13.       w^(i) := 0
14. for i := 1, …, N do                                  // RBPF tracking
15.   G_t^i ~ p(G_t^i | G_{t−1}^i, z_t)                  // virtual goal sampling
16.   O_t^i ~ p(O_t^i | O_{t−1}^i, G_t^i, D_t, z_t)      // predict by stream field, see (7)
17.   for j := 1, …, J do                                // data association for moving objects
18.     if d_m^o < d_th^o do                             // z_j^o is a possible moving object
19.       O_t^i := kalman_update(O_{t−1}^i, D_t, G_{t−1}^i, z_{t,j}^o)   // update object
20.       w_t^i := p(z_{t,j}^o | O_{t−1}^i, D_t, G_{t−1}^i)              // compute weighting
21.   S_t := S_t ∪ {G_{t−1}^(i), O_{t−1}^(i)}            // insert into sample set
22. Discard samples in S_t based on the weighting w_t^i (resampling)
23. return S_t, μ_t, Σ_t

If the ith particle is associated with a moving object, the RBPF updates the moving object position O_t^i (line 19). The algorithm computes the weighting w_t^i of the ith particle (line 20). Particles are then resampled according to their weightings (line 22). In the observable case, the stream states S_t^i, including the object states O_t^i and the goal states G_t^i, will converge. In the POMOT case, the object states O_t^i keep being predicted based on the previous stream field S_{t−1}^i.

V. EXPERIMENTAL RESULTS

We adopt UBOT as the mobile robot platform to verify our proposed object tracking and self-localization algorithm (Fig. 5(c)). UBOT was developed by ITRI/MSL in Taiwan and is equipped with one SICK laser. In our experiments, the moving person walks along a straight line, and the robot follows the moving target by remote control.

Fig. 5. The circle is the robot, the solid circle is a person who walks along the dashed line, and the solid rectangle is a wall.

Fig. 6. The states of the robot, moving objects, and environment. (a) Initial state. (b) Tracking state. (c) POMOT state. (The red solid circle is the virtual goal of the moving object. The blue rectangle is the moving object position predicted by the Kalman filter. The red boldface circle is the moving object position predicted by the RBPF. The blue circle is the starting point of the robot. The red circle is the position estimated by EKF localization. The black circle is the position computed from odometry data. The red points are outlier data. The blue points are landmark data used for robot position correction. The black lines are walls.)

As shown in Fig. 6(a), UBOT is initially at the starting point (blue circle), and there is no moving object in the environment. All scanned points are shown as red points. If the Mahalanobis distance between a scanned point and a landmark is smaller than the threshold, the point is judged to belong to a known landmark (blue points) and can be utilized to correct the robot position. Otherwise, it is assigned to the outlier points. If outlier points are close to UBOT, they are deemed dynamic. When the total number of dynamic points is large enough, the KF and the RBPF start to track the moving object.
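The landmark test above, followed by the EKF correction of Table I (lines 8-10), can be sketched as follows. The gate value and matrix shapes are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def mahalanobis_sq(z, z_pred, S):
    # Squared Mahalanobis distance between a scanned feature z and the
    # predicted landmark measurement z_pred, with innovation covariance S.
    d = z - z_pred
    return float(d @ np.linalg.inv(S) @ d)

def ekf_correct(mu, Sigma, z, z_pred, H, Q):
    # Table I, lines 8-10: Kalman gain, mean correction, covariance correction.
    S = H @ Sigma @ H.T + Q
    K = Sigma @ H.T @ np.linalg.inv(S)
    mu = mu + K @ (z - z_pred)
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu, Sigma
```

A point passes the landmark test when its squared distance falls below a chi-square gate (e.g. 9.21 for 2 DOF at the 99% level); otherwise it is treated as an outlier and, if near the robot, as a dynamic point.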


As shown in Fig. 6(b), the person enters the environment and walks along the dashed line shown in Fig. 5. UBOT moves toward the goal and tracks the moving object with the KF and the RBPF. The KF predicted position is shown as a red rectangle and the corrected position as a blue rectangle. The RBPF tracked position is shown as a red boldface circle; the blue points around it are the particles of the RBPF. As shown in Fig. 6(c), when the moving object is sheltered by the wall, the Gaussian distribution of the KF is inflated, while the RBPF keeps predicting according to the stream field, which includes a virtual goal and doublet flows. The virtual goal converges when the object is observable and tracked by the RBPF.

Fig. 7. The localization and tracking experimental results. (a) Performance comparison between EKF localization and the odometry trajectory. (b) Performance comparison between the KF, the RBPF, and the people trajectory. (c) Magnification of the data set in (b).

Table II summarizes the accumulated errors of Figs. 7(a) and 7(b). In the experiments, UBOT moves 1.37 m toward the goal (rectangle). The accumulated error of the odometry data is 4.09 cm, while the estimated error of the EKF localization algorithm is 1.82 cm, showing that EKF localization can effectively eliminate the accumulated error. On the other hand, the people trajectory is represented by the magenta line (Fig. 7(b)), the triangles are the positions tracked by the RBPF, and the diamonds are the positions tracked by the KF with a constant velocity model. The average error and standard deviation are computed from five tracking experiments. Because the legs scanned by the laser are walking, the estimated position is not at the center of the legs. This leads to some error, but the data is still useful for verifying tracking performance because the scanned input data is the same for the KF and the RBPF. When the object is observable, the average tracking error of the KF is 27.18 cm with a standard deviation of 26.18 cm; the RBPF average error is 23.84 cm with a standard deviation of 11.43 cm. The KF standard deviation is larger than that of the RBPF because the constant velocity model generates vibrations when the object does not move at a constant velocity, as with walking legs. In the POMOT case, the average tracking error of the RBPF is 20.30 cm with a standard deviation of 12.95 cm, while the average tracking error of the KF is 713.38 cm with a standard deviation of 870.2 cm. The KF average error and standard deviation are larger because the KF predicts an inflated acceleration distribution without correction. Our proposed RBPF algorithm is clearly better than the KF with a constant velocity model when the object is observable. Furthermore, our proposed RBPF can keep tracking the object when the target is unobservable, while the KF with a constant velocity model cannot.

Table II. The performance of the proposed localization and tracking algorithm

Method                                          Error (cm)   Error std. (cm)   Error rate (%)
Odometry error                                  4.0991       N/A               2.99
EKF localization error                          1.8245       N/A               1.33
KF tracking error                               27.18        26.18             N/A
Our proposed RBPF tracking mean error           23.74        11.43             N/A
KF tracking error (POMOT case)                  713.38       870.2             N/A
Our proposed RBPF tracking error (POMOT case)   20.30        12.95             N/A

Table III and Fig. 8 summarize the error data of the five experiments. We find that the standard deviation of the RBPF tracking error in the POMOT case is usually smaller than in the observable case. Because the RBPF algorithm tracks the object through the virtual sink and doublets, its prediction is smoother than in the observable case.

Table III. The performance of five experiments of KF and RBPF tracking

Experiment (mean / std, cm)   KF               RBPF            KF (POMOT case)      RBPF (POMOT case)
1st                           23.92 / 17.15    22.32 / 10.16   149.01 / 170.28      27.31 / 6.13
2nd                           23.47 / 27.72    20.59 / 9.45    416.43 / 412.62      9.27 / 5.63
3rd                           31.8  / 28.63    30.39 / 13.22   539.61 / 368.23      10.65 / 5.4
4th                           32.96 / 34.77    23.42 / 7.17    1538.53 / 1018.44    26.99 / 15.09
5th                           21.28 / 12.35    18.31 / 11.53   228.26 / 136.63      9.86 / 4.38
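The mean and standard deviation entries in Tables II and III are plain Euclidean-error statistics; a hypothetical helper (not the authors' evaluation code) computes them from paired position sequences:

```python
import numpy as np

def tracking_error_stats(estimates, ground_truth):
    # Per-frame Euclidean error between estimated and true 2-D positions,
    # reduced to the mean and standard deviation reported in the tables.
    e = np.linalg.norm(np.asarray(estimates) - np.asarray(ground_truth), axis=1)
    return e.mean(), e.std()
```

Running the same helper on the KF and RBPF outputs over identical scan data is what makes the comparison in the tables fair despite the leg-center measurement noise.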

Fig. 8. The average and standard deviation of the tracking error for the KF and the RBPF over five runs. (a) Average error. (b) Standard deviation of the error. (c) Average error in the POMOT case. (d) Standard deviation of the error in the POMOT case.

VI. CONCLUSIONS

In this paper, we proposed a self-localization and stream field based RBPF tracking algorithm that can track a moving object and localize the robot even when the object is sheltered by the environment. The proposed Rao-Blackwellised particle filter applies a particle filter to estimate the nonlinear object motion and a Kalman filter to estimate the linear dynamic object states. The proposed motion model is composed of a virtual sink and several doublets. As the experimental data show, when the robot follows the object, the tracking performance of the algorithm is better than that of the constant velocity model based Kalman filter. When the object is unobservable, our proposed RBPF can keep predicting the object motion effectively through the stream field.

ACKNOWLEDGMENT

This work was supported in part by the Ministry of Economic Affairs, R.O.C., under contract 7301XS3410.

REFERENCES

[1] W. Lin, S. Jia, T. Abe, and K. Takase, "Localization of mobile robot based on ID tag and WEB camera," in Proc. IEEE Conference on Robotics, Automation and Mechatronics, 2004.
[2] M. Montemerlo, S. Thrun, and W. Whittaker, "Conditional particle filters for simultaneous mobile robot localization and people-tracking," in Proc. IEEE International Conference on Robotics and Automation, Vol. 1, pp. 695-701, May 2002.
[3] A. Yilmaz, O. Javed, and M. Shah, "Object tracking: A survey," ACM Computing Surveys (CSUR), Vol. 38, Issue 4, 2006.
[4] Y. Bar-Shalom and X.-R. Li, Multitarget-Multisensor Tracking: Principles and Techniques. YBS, Danvers, MA, 1995.
[5] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House Publishers, 2004.
[6] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, Vol. 50, No. 2, pp. 174-188, Feb. 2002.
[7] C.-C. Wang, "Simultaneous Localization, Mapping and Moving Object Tracking," Doctoral dissertation, Tech. Report CMU-RI-TR-04-23, Robotics Institute, Carnegie Mellon University, April 2004.
[8] T.-D. Vu, O. Aycard, and N. Appenrodt, "Online Localization and Mapping with Moving Object Tracking in Dynamic Outdoor Environments," in Proc. IEEE Intelligent Vehicles Symposium, pp. 190-195, June 2007.
[9] E. Mazor, A. Averbuch, Y. Bar-Shalom, and J. Dayan, "Interacting multiple model methods in target tracking: a survey," IEEE Transactions on Aerospace and Electronic Systems, Vol. 34, Issue 1, pp. 103-123, Jan. 1998.
[10] C. Kwok and D. Fox, "Map-based Multiple Model Tracking of a Moving Object," RoboCup Symposium, 2004.
[11] C. Stachniss, G. Grisetti, and W. Burgard, "Information Gain-based Exploration Using Rao-Blackwellized Particle Filters," in Proceedings of Robotics: Science and Systems, 2005.
[12] N. Roy and C. Earnest, "Dynamic Action Spaces for Information Gain Maximization in Search and Exploration," in Proceedings of the American Control Conference, June 2006.
[13] D. Megherbi and W. A. Wolovich, "Modeling and automatic real-time motion control of wheeled mobile robots among moving obstacles: theory and applications," in Proc. IEEE Conference on Decision and Control, Vol. 3, pp. 2676-2681, Dec. 1993.
[14] D. Keymeulen and J. Decuyper, "The Fluid Dynamics Applied to Mobile Robot Motion: The Stream Field Method," in Proc. IEEE International Conference on Robotics and Automation, 1994.
[15] S. Waydo and R. M. Murray, "Vehicle motion planning using stream functions," in Proc. IEEE International Conference on Robotics and Automation, 2003.
[16] H. J. S. Feder and J.-J. E. Slotine, "Real-time path planning using harmonic potentials in dynamic environments," in Proc. IEEE International Conference on Robotics and Automation, Vol. 1, pp. 874-881, Apr. 1997.
