Adaptive Vision-Based Collaborative Tracking Control of An UGV via a Moving Airborne Camera: A Daisy Chaining Approach

S. S. Mehta†‡, G. Hu†, N. R. Gans†, W. E. Dixon†

† Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611-6250 ‡ Department of Agricultural and Biological Engineering, University of Florida, Gainesville, FL 32611-0570

Abstract— A cooperative visual servo tracking controller is developed in this paper to enable an unmanned ground vehicle (UGV) to follow a desired trajectory encoded as a sequence of images (i.e., a prerecorded video), using image feedback from a moving airborne monocular camera system. An innovative daisy chaining strategy is used to resolve the relative velocity between the moving airborne camera (i.e., a camera attached to a remotely piloted aircraft), the moving UGV, and the desired time-varying UGV trajectory. An adaptive Lyapunov-based control strategy is employed to actively compensate for the lack of known depth measurements and the lack of an object model.

I. INTRODUCTION

Conventional unmanned ground vehicles (UGVs) rely heavily on global positioning systems (GPS) and inertial measurement units (IMUs) to follow trajectories encoded in the task-space (i.e., navigational waypoints). However, the quality of service from GPS can become degraded due to satellite coverage (i.e., GPS dropout) or signal corruption (i.e., jamming/spoofing), and IMUs can drift and accumulate errors over time in a manner similar to dead reckoning. These issues have motivated interest in, and the advancement of, vision-based navigation. Advances in image extraction/interpretation provide an inroad for position and orientation (i.e., pose) measurement of autonomous vehicles. Numerous researchers have investigated combining multiple sensor modalities, such as vision with GPS/IMUs, for autonomous navigation and control. While the use of multiple sensor modalities is acceptable for some applications, the practical drawbacks of incorporating additional sensors (increased cost, increased complexity, decreased reliability, and increased processing burden) are prohibitive for other applications.

Motivated by the desire to reduce the number of required sensors, several researchers have explored the use of a camera for autonomous vehicle feedback. Pure image-based visual servo control results have a known problem with potential singularities in the image-Jacobian, and, since the feedback is only in the image-space, these methods may require physically impossible Euclidean motions. Motivated by the desire to eliminate these issues, some efforts combine reconstructed Euclidean information and image-space information in the control design. The Euclidean information can be reconstructed by decoupling the interaction between the translation and rotation components of a homography matrix. This homography-based method yields an invertible triangular image-Jacobian with realizable Euclidean motion. Homography-based visual servo control results that have been developed for UGVs include [2], [3], [6], and [7]. In [6], a visual servo controller was developed to asymptotically regulate the pose of an UGV to a constant pose defined by a goal image, where the camera was mounted on-board the UGV (i.e., the camera-in-hand problem). Since many practical applications require a robotic system to move along a predefined or dynamically changing trajectory, the regulation result in [6] was extended in [3] to address the UGV tracking problem. Note that, due to the nonholonomic constraints, the UGV tracking problem does not reduce to the UGV regulation problem. In [2], a stationary overhead camera (i.e., the camera-to-hand or fixed-camera configuration) was used to track a UGV to a desired pose.

The development in this paper is motivated by the desire to obtain pose feedback from a moving airborne camera (e.g., a remotely piloted fixed-wing or rotary aircraft) to enable a UGV to track a desired time-varying trajectory. A significant technical obstacle is an ambiguity in the sensor feedback due to the relative velocity between the moving airborne camera and the time-varying motion of the UGV. That is, a moving airborne camera is used to record a video of a UGV (e.g., a human-piloted aircraft recording a video of a human-piloted UGV). The control objective is then to use the airborne camera (which will have a different motion than when the video was recorded) to provide sensor feedback so that the UGV follows the same trajectory as the desired UGV. To resolve this relative velocity issue, an innovative new daisy chaining idea is used to link a series of projective homographies so that all of the relationships can be expressed in terms of a constant reference frame. Efforts in this paper leverage the geometric reconstruction developed in [7] for the significantly less complex regulation problem. Specifically, a multi-view geometry based approach is exploited to construct and daisy chain the geometric relationships between the various UGV positions/orientations and the camera coordinate systems. By relating the coordinate frames in this way, a measurable error system for the nonholonomic UGV can be developed. Based on the open-loop error system, a tracking controller is developed through the application of the Extended Barbalat's Lemma in a Lyapunov-based analysis.

This research is supported in part by the NSF CAREER award CMS-0547448, AFOSR contract numbers F49620-03-1-0381 and F49620-03-1-0170, AFRL contract number FA4819-05-D-0011, and by research grant No. US-3715-05 from BARD, the United States - Israel Binational Agricultural Research and Development Fund at the University of Florida.

II. GEOMETRIC MODEL

Consider a single camera that is navigating (e.g., by remote controlled aircraft) above the planar motion of an unmanned ground vehicle (UGV), as depicted in Fig. 1. No assumptions are made with regard to the alignment of the UGV plane of motion and the focal axis of the camera, as in [5]. The moving coordinate frame I is attached to the airborne camera, the moving coordinate frame I_M is attached to the airborne camera that recorded the desired image sequence, and the fixed coordinate frame I_R is a single snapshot of I_M. The moving coordinate frame F is attached to the UGV at the center of the rear wheel axis (for simplicity and without loss of generality). The UGV is represented in the camera image by four feature points that are coplanar and not colinear. The Euclidean distance (i.e., s_i ∈ R^3, ∀ i = 1, 2, 3, 4) from the origin of F to each feature point is assumed to be known. The plane defined by the UGV motion (i.e., the plane defined by the xy-axis of F) and the UGV feature points is denoted by π. The linear velocity of the UGV along the x-axis of F is denoted by v_c(t) ∈ R, and the angular velocity ω_c(t) ∈ R is about the z-axis of F (see Fig. 1).

While viewing the feature points of the UGV, the camera is assumed to also view four additional coplanar and non-colinear feature points of a stationary reference object. The four additional feature points define the plane π* in Fig. 1. The stationary coordinate frame F* is attached to this object, where the distance (i.e., s_i ∈ R^3, ∀ i = 1, 2, 3, 4) from the origin of F* to each feature point is assumed to be known. The plane π* is assumed to be parallel to the plane π. When the camera is coincident with I_R, a fixed (i.e., a single snapshot) reference position and orientation (i.e., pose) of the UGV, denoted by F_s, is assumed to be in the camera's field-of-view. A desired trajectory is defined by a prerecorded time-varying trajectory of F_d that is assumed to be second-order differentiable, where v_cd(t), ω_cd(t) ∈ R denote the desired linear and angular velocity of F_d, respectively. The time-varying coordinate frame I_M is attached to the camera that captured the desired trajectory of the UGV. The feature points that define π* are also assumed to be visible when the camera is a priori located coincident with the pose of the stationary coordinate frame I_R and the time-varying coordinate frame I_M.

To relate the coordinate systems, let R(t), R*(t), R_r(t), R_rs, R*_r, R_md(t), R*_m(t), R_rm(t) ∈ SO(3) denote the rotation from F to I, from F* to I, from I to I_R, from F_s to I_R, from F* to I_R, from F_d to I_M, from F* to I_M, and from I_M to I_R, respectively. Let x_f(t), x*_f(t) ∈ R^3 denote the respective time-varying translation from F to I and from F* to I with coordinates expressed in I; let x_fr(t), x'_fr(t), x_frs, x*_fr ∈ R^3 denote the respective translation from I to I_R, from F to I_R, from F_s to I_R, and from F* to I_R expressed in the coordinates of I_R; and let x_frm(t), x'_frm(t) ∈ R^3 denote the respective translation from I_M to I_R and from F_d to I_R expressed in the coordinates of I_R, while x_fmd(t), x*_fm(t) ∈ R^3 denote the respective translation from F_d to I_M and from F* to I_M expressed in the coordinates of I_M.

Fig. 1. Camera Coordinate Frame Relationships.

From the geometry between the coordinate frames depicted in Fig. 1, the following relationships can be developed:

  \bar{m}_i = x_f + R\, s_i, \qquad \bar{m}^*_i = x^*_f + R^* s_i    (1)
  \bar{m}^*_{ri} = x^*_{fr} + R^*_r s_i, \qquad \bar{m}_{rsi} = x_{frs} + R_{rs} s_i    (2)
  \bar{m}^*_{mi} = x^*_{fm} + R^*_m s_i, \qquad \bar{m}_{mdi} = x_{fmd} + R_{md} s_i    (3)
  \bar{m}'_i = x'_{fr} + R^*_r R^{*T} R\, s_i = x'_{fr} + R'\, s_i    (4)
  \bar{m}'_{mdi} = x'_{frm} + R^*_r R^{*T}_m R_{md}\, s_i.    (5)

In (1)-(5), R'(t) ≜ R^*_r R^{*T}(t) R(t) denotes the rotation from F to I_R, \bar{m}_i(t), \bar{m}^*_i(t) ∈ R^3 denote the Euclidean coordinates of the feature points of the UGV and of the feature points on the plane π*, respectively, expressed in I as

  \bar{m}_i(t) ≜ [x_i(t) \;\; y_i(t) \;\; z_i(t)]^T    (6)
  \bar{m}^*_i(t) ≜ [x^*_i(t) \;\; y^*_i(t) \;\; z^*_i(t)]^T,    (7)

\bar{m}'_i(t), \bar{m}_{rsi} ∈ R^3 denote the time-varying current Euclidean coordinates and the stationary reference Euclidean coordinates, respectively, of the feature points attached to the UGV expressed in I_R as

  \bar{m}'_i(t) ≜ [x'_i(t) \;\; y'_i(t) \;\; z'_i(t)]^T    (8)
  \bar{m}_{rsi} ≜ [x_{rsi} \;\; y_{rsi} \;\; z_{rsi}]^T,    (9)

\bar{m}^*_{ri} ∈ R^3 denotes the Euclidean coordinates of the feature points on the plane π* expressed in I_R as

  \bar{m}^*_{ri} ≜ [x^*_{ri} \;\; y^*_{ri} \;\; z^*_{ri}]^T,    (10)

\bar{m}_{mdi}(t), \bar{m}^*_{mi}(t) ∈ R^3 denote the Euclidean coordinates of the feature points of the UGV and of the feature points on the plane π*, respectively, expressed in I_M as

  \bar{m}_{mdi}(t) ≜ [x_{mdi}(t) \;\; y_{mdi}(t) \;\; z_{mdi}(t)]^T    (11)
  \bar{m}^*_{mi}(t) ≜ [x^*_{mi}(t) \;\; y^*_{mi}(t) \;\; z^*_{mi}(t)]^T,    (12)

and \bar{m}'_{mdi}(t) ∈ R^3 denotes the desired Euclidean coordinates of the feature points attached to the UGV expressed in I_R as

  \bar{m}'_{mdi}(t) ≜ [x'_{mdi}(t) \;\; y'_{mdi}(t) \;\; z'_{mdi}(t)]^T.    (13)

Remark 1: As in [1], the subsequent development requires that the rotation matrix R*_r be known. The rotation matrix R*_r can be obtained a priori using various methods (e.g., a second camera, Euclidean measurements), and this is considered a mild assumption since it is an off-line measurement.
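As a concrete illustration of how (1)-(5) chain the coordinate frames together, the following minimal numpy sketch composes the current UGV pose (expressed in I) with the reference object pose (expressed in I and I_R) to obtain the same UGV feature points expressed in the fixed frame I_R, i.e., the daisy-chained relation (4). All rotations, translations, and feature offsets below are hypothetical placeholders, not values from the paper.

import numpy as np

def rot_z(theta):
    """Planar rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Known feature offsets s_i from the UGV frame F (placeholder values).
s = np.array([[0.2, 0.1, 0.0], [0.2, -0.1, 0.0], [-0.2, -0.1, 0.0], [-0.2, 0.1, 0.0]])

# Hypothetical poses: (R, x_f): F -> I, (R*, x*_f): F* -> I, (R*_r, x*_fr): F* -> I_R.
R, x_f = rot_z(0.3), np.array([1.0, 0.5, 4.0])
R_star, x_f_star = rot_z(-0.1), np.array([0.0, 0.0, 5.0])
R_r_star, x_fr_star = rot_z(0.2), np.array([0.5, -0.5, 6.0])

m_bar = x_f + s @ R.T                      # eq. (1): UGV features expressed in I (rows)

# Daisy chaining, eq. (4): the same features expressed in the fixed frame I_R without
# ever observing the UGV from I_R directly; R' = R*_r R*^T R, and x'_fr follows from
# composing the I -> I_R transformation (R_r = R*_r R*^T, x_fr = x*_fr - R_r x*_f).
R_r = R_r_star @ R_star.T
x_fr = x_fr_star - R_r @ x_f_star
R_prime = R_r @ R
x_fr_prime = x_fr + R_r @ x_f
m_bar_prime = x_fr_prime + s @ R_prime.T   # eq. (4)

# Consistency check: mapping m_bar from I to I_R point-by-point gives the same result.
assert np.allclose(m_bar_prime, x_fr + m_bar @ R_r.T)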

After some algebraic manipulation [7], the expressions in (1)-(5) can be rewritten as

  \bar{m}_i = \bar{x}_f + \bar{R}\,\bar{m}^*_i, \qquad \bar{m}_{rsi} = \bar{x}_{frs} + \bar{R}_{rs}\,\bar{m}^*_{ri}, \qquad \bar{m}_{mdi} = \bar{x}_{fmd} + \bar{R}_{md}\,\bar{m}^*_{mi}    (14)
  \bar{m}^*_{ri} = x_{fr} + R_r\,\bar{m}^*_i, \qquad \bar{m}^*_{ri} = x_{frm} + R_{rm}\,\bar{m}^*_{mi}    (15)
  \bar{m}'_i = x_{fr} + R_r\,\bar{m}_i    (16)
  \bar{m}'_{mdi} = x_{frm} + R_{rm}\,\bar{m}_{mdi},    (17)

where \bar{R}(t), \bar{R}_{rs}, \bar{R}_{md}(t), R_r(t), R_{rm}(t) ∈ SO(3) and \bar{x}_f(t), \bar{x}_{frs}, \bar{x}_{fmd}(t), x_{fr}(t), x_{frm}(t) ∈ R^3 are new rotational and translational variables, respectively, defined as

  \bar{R} = R R^{*T}, \qquad \bar{R}_{rs} = R_{rs} R^{*T}_r, \qquad \bar{R}_{md} = R_{md} R^{*T}_m, \qquad R_r = R^*_r R^{*T}, \qquad R_{rm} = R^*_r R^{*T}_m    (18)
  \bar{x}_f = x_f − \bar{R}\, x^*_f    (19)
  \bar{x}_{frs} = x_{frs} − \bar{R}_{rs}\, x^*_{fr}    (20)
  x_{fr} = x^*_{fr} − R_r\, x^*_f    (21)
  \bar{x}_{fmd} = x_{fmd} − \bar{R}_{md}\, x^*_{fm}    (22)
  x_{frm} = x^*_{fr} − R_{rm}\, x^*_{fm}.    (23)

By using the projective relationships (see Fig. 1)

  d(t) = n^{*T} \bar{m}_i, \quad d^*(t) = n^{*T} \bar{m}^*_i, \quad d_m(t) = n^{*T}_m \bar{m}_{mdi}, \quad d^*_m(t) = n^{*T}_m \bar{m}^*_{mi},
  d_{rs} = n_r^T \bar{m}_{rsi}, \quad d^*_r = n_r^T \bar{m}^*_{ri}, \quad d_r(t) = n_r^T \bar{m}'_i = n_r^T \bar{m}'_{mdi},    (24)

the expressions in (14)-(17) can be expressed as

  \bar{m}_i = \left(\bar{R} + \frac{\bar{x}_f}{d^*}\, n^{*T}\right) \bar{m}^*_i    (25)
  \bar{m}_{rsi} = \left(\bar{R}_{rs} + \frac{\bar{x}_{frs}}{d^*_r}\, n_r^{T}\right) \bar{m}^*_{ri}    (26)
  \bar{m}^*_{ri} = \left(R_r + \frac{x_{fr}}{d^*}\, n^{*T}\right) \bar{m}^*_i    (27)
  \bar{m}'_i = \left(R_r + \frac{x_{fr}}{d}\, n^{*T}\right) \bar{m}_i    (28)
  \bar{m}_{mdi} = \left(\bar{R}_{md} + \frac{\bar{x}_{fmd}}{d^*_m}\, n^{*T}_m\right) \bar{m}^*_{mi}    (29)
  \bar{m}^*_{ri} = \left(R_{rm} + \frac{x_{frm}}{d^*_m}\, n^{*T}_m\right) \bar{m}^*_{mi}    (30)
  \bar{m}'_{mdi} = \left(R_{rm} + \frac{x_{frm}}{d_m}\, n^{*T}_m\right) \bar{m}_{mdi}.    (31)

In (24)-(31), d(t), d*(t), d_m(t), d*_r, d*_m(t) > ε for some positive constant ε ∈ R, and n*(t), n*_m(t), n_r ∈ R^3 denote the unit normal to the parallel planes π and π* as expressed in I, I_M, and I_R, respectively.

III. EUCLIDEAN RECONSTRUCTION

The relationships given by (25)-(31) provide a means to quantify a translation and rotation error between the different coordinate systems. Since the pose of F, F_d, F_s, and F* cannot be directly measured, a Euclidean reconstruction is developed in this section to obtain the position and rotation error information by comparing multiple images acquired from the hovering monocular vision system. Specifically, comparisons are made between the current UGV image and the reference image in terms of I, between the reference UGV image and the reference object image in terms of I_R, and between the desired UGV image and the reference image in terms of I_M.

The normalized Euclidean coordinates of the feature points for the current UGV image and the reference image can be expressed in terms of I as m_i(t), m*_i(t) ∈ R^3, respectively, as

  m_i(t) ≜ \frac{\bar{m}_i(t)}{z_i(t)}, \qquad m^*_i(t) ≜ \frac{\bar{m}^*_i(t)}{z^*_i(t)}.    (32)

The normalized Euclidean coordinates of the feature points for the current, desired, and reference UGV and for the reference object can be expressed in terms of I_R as m'_i(t), m'_{mdi}(t), m_{rsi}, m*_{ri} ∈ R^3, respectively, as

  m'_i(t) ≜ \frac{\bar{m}'_i(t)}{z'_i(t)}, \qquad m'_{mdi}(t) ≜ \frac{\bar{m}'_{mdi}(t)}{z'_{mdi}(t)}, \qquad m_{rsi} ≜ \frac{\bar{m}_{rsi}}{z_{rsi}}, \qquad m^*_{ri} ≜ \frac{\bar{m}^*_{ri}}{z^*_{ri}}.    (33)

Similarly, the normalized Euclidean coordinates of the feature points for the desired UGV image and the reference object image can be expressed in terms of I_M as m_{mdi}(t), m*_{mi}(t) ∈ R^3, respectively, as

  m_{mdi}(t) ≜ \frac{\bar{m}_{mdi}(t)}{z_{mdi}(t)}, \qquad m^*_{mi}(t) ≜ \frac{\bar{m}^*_{mi}(t)}{z^*_{mi}(t)}.    (34)
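The planar structure exploited in (25)-(31) is what makes the image-based reconstruction below possible: because every feature point lies on a plane, dividing the translation by the plane depth turns each rigid-body relationship into a single linear (homography) map. A minimal numpy check of (25), using made-up placeholder geometry rather than any values from the paper, is sketched here:

import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical placeholder geometry: reference plane pi* with unit normal n* in I.
n_star = np.array([0.0, 0.0, 1.0])
R_bar, x_bar_f = rot_z(0.4), np.array([0.3, -0.2, 1.0])   # relative pose, eqs. (18)-(19)

# Four coplanar reference feature points m_bar*_i lying on pi* (placeholder coordinates in I).
m_bar_star = np.array([[0.5, 0.5, 5.0], [0.5, -0.5, 5.0], [-0.5, -0.5, 5.0], [-0.5, 0.5, 5.0]])

for m_s in m_bar_star:
    d_star = n_star @ m_s                              # d* = n*^T m_bar*_i, eq. (24)
    lhs = x_bar_f + R_bar @ m_s                        # eq. (14)
    H_euclid = R_bar + np.outer(x_bar_f / d_star, n_star)
    rhs = H_euclid @ m_s                               # eq. (25)
    assert np.allclose(lhs, rhs)                       # both forms give the same m_bar_i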

From the expressions given in (25) and (32), the rotation and translation between the coordinate systems F and F* can be related in terms of the normalized Euclidean coordinates as

  m_i = \frac{z^*_i}{z_i}\left(\bar{R} + x_h n^{*T}\right) m^*_i = \alpha_i\, H\, m^*_i.    (35)

Also, equations (26) and (33) can be used to relate the rotation and translation between m*_{ri} and m_{rsi} as

  m_{rsi} = \frac{z^*_{ri}}{z_{rsi}}\left(\bar{R}_{rs} + x_{hrs} n_r^{T}\right) m^*_{ri} = \alpha_{rsi}\, H_{rs}\, m^*_{ri},    (36)

and between m*_i(t) and m*_{ri} as

  m^*_{ri} = \frac{z^*_i}{z^*_{ri}}\left(R_r + x_{hr} n^{*T}\right) m^*_i = \alpha_{ri}\, H_r\, m^*_i.    (37)

Similarly, from the expressions given in (29) and (34), the rotation and translation between m_{mdi}(t) and m*_{mi}(t) give

  m_{mdi} = \frac{z^*_{mi}}{z_{mdi}}\left(\bar{R}_{md} + x_{hmd} n^{*T}_m\right) m^*_{mi} = \alpha_{mdi}\, H_{md}\, m^*_{mi},    (38)

and the rotation and translation between m*_{mi}(t) and m*_{ri} give

  m^*_{ri} = \frac{z^*_{mi}}{z^*_{ri}}\left(R_{rm} + x_{hrm} n^{*T}_m\right) m^*_{mi} = \alpha_{rmi}\, H_{rm}\, m^*_{mi}.    (39)

The expressions for m_i(t) and m_{mdi}(t) can be related to m'_i(t) and m'_{mdi}(t) as

  m'_i = \frac{z_i}{z'_i}\, H'_r\, m_i, \qquad m'_{mdi} = \frac{z_{mdi}}{z'_{mdi}}\, H'_{rm}\, m_{mdi},    (40)

where

  H'_r = R_r + x_{hr}\,\alpha_i\,\frac{n^{*T} m^*_i}{n^{*T} m_i}\, n^{*T}    (41)
  H'_{rm} = R_{rm} + x_{hrm}\,\alpha_{mdi}\,\frac{n^{*T}_m m^*_{mi}}{n^{*T}_m m_{mdi}}\, n^{*T}_m.    (42)

In (35)-(42), α_i(t), α_{rsi}, α_{ri}, α_{mdi}(t), α_{rmi}(t) ∈ R denote depth ratios, H(t), H_{rs}, H_r(t), H'_r(t), H_{md}(t), H_{rm}(t), H'_{rm}(t) ∈ R^{3×3} denote Euclidean homographies, and x_h(t), x_{hrs}, x_{hr}(t), x_{hmd}(t), x_{hrm}(t) ∈ R^3 denote scaled translation vectors that are defined as

  x_h = \frac{\bar{x}_f}{d^*}, \qquad x_{hrs} = \frac{\bar{x}_{frs}}{d^*_r}, \qquad x_{hr} = \frac{x_{fr}}{d^*}, \qquad x_{hmd} = \frac{\bar{x}_{fmd}}{d^*_m}, \qquad x_{hrm} = \frac{x_{frm}}{d^*_m}.    (43)

Each Euclidean feature point has a projected pixel coordinate expressed in terms of I as

  p_i ≜ [u_i \;\; v_i \;\; 1]^T, \qquad p^*_i ≜ [u^*_i \;\; v^*_i \;\; 1]^T,    (44)

where p_i(t), p*_i(t) ∈ R^3 represent the image-space coordinates of the time-varying feature points of the UGV and the reference object, respectively. The projected pixel coordinates of the Euclidean features in the reference image can be expressed in terms of I_R as

  p_{rsi} ≜ [u_{rsi} \;\; v_{rsi} \;\; 1]^T, \qquad p^*_{ri} ≜ [u^*_{ri} \;\; v^*_{ri} \;\; 1]^T,    (45)

where p_{rsi}, p*_{ri} ∈ R^3 represent the constant image-space coordinates corresponding to the reference image of the UGV and the reference object. The projected pixel coordinates of the Euclidean features expressed in I_M are

  p_{mdi} ≜ [u_{mdi} \;\; v_{mdi} \;\; 1]^T, \qquad p^*_{mi} ≜ [u^*_{mi} \;\; v^*_{mi} \;\; 1]^T,    (46)

where p_{mdi}(t), p*_{mi}(t) ∈ R^3 represent the time-varying image-space coordinates corresponding to the desired Euclidean trajectory of the UGV and the reference object, respectively. To calculate the Euclidean homographies given in (35)-(42) from pixel information, the projected pixel coordinates are related to m_i(t), m*_i(t), m_{rsi}, m*_{ri}, m_{mdi}(t), and m*_{mi}(t) as

  p_i = A\, m_i, \qquad p^*_i = A\, m^*_i    (47)
  p_{rsi} = A\, m_{rsi}, \qquad p^*_{ri} = A\, m^*_{ri}    (48)
  p_{mdi} = A\, m_{mdi}, \qquad p^*_{mi} = A\, m^*_{mi},    (49)

where A ∈ R^{3×3} is a known, constant, and invertible intrinsic camera calibration matrix. By using (35)-(42), (47), and (49), the following relationships can be developed:

  p_i = \alpha_i \left(A H A^{-1}\right) p^*_i = \alpha_i\, G\, p^*_i, \qquad p_{rsi} = \alpha_{rsi} \left(A H_{rs} A^{-1}\right) p^*_{ri} = \alpha_{rsi}\, G_{rs}\, p^*_{ri},
  p_{mdi} = \alpha_{mdi} \left(A H_{md} A^{-1}\right) p^*_{mi} = \alpha_{mdi}\, G_{md}\, p^*_{mi}    (50)
  p^*_{ri} = \alpha_{ri} \left(A H_r A^{-1}\right) p^*_i = \alpha_{ri}\, G_r\, p^*_i, \qquad p^*_{ri} = \alpha_{rmi} \left(A H_{rm} A^{-1}\right) p^*_{mi} = \alpha_{rmi}\, G_{rm}\, p^*_{mi},    (51)

where G(t), G_{rs}, G_{md}(t), G_r(t), G_{rm}(t) ∈ R^{3×3} denote projective homographies. Sets of linear equations can be developed from (50) and (51) to determine the projective homographies up to a scalar multiple. Various techniques can be used to decompose the Euclidean homographies to obtain α_i(t), α_{rsi}, α_{mdi}(t), α_{ri}(t), α_{rmi}(t), x_h(t), x_{hrs}, x_{hr}(t), x_{hmd}(t), x_{hrm}(t), \bar{R}(t), \bar{R}_{rs}, R_r(t), \bar{R}_{md}(t), and R_{rm}(t). Given that the rotation matrix R*_r is assumed to be known, the expressions for \bar{R}_{rs} and R_r(t) in (18) can be used to determine R_{rs} and R*(t). Once R*(t) is determined, the expressions for \bar{R}(t) and R_{rm}(t) in (18) can be used to determine R(t) and R*_m(t). R*_m(t) can then be used to calculate R_{md}(t) from the relation for \bar{R}_{md} in (18).

Based on the definitions for R(t), R*(t), R_{md}(t), R*_m(t), R*_r, and R_{rs} provided in the previous development, the rotation from F to F_s and from F_d to F_s, denoted by R_1(t) and R_{d1}(t), respectively, are defined as

  R_1 = R_{rs}^T R^*_r R^{*T} R = \begin{bmatrix} \cosθ & \sinθ & 0 \\ −\sinθ & \cosθ & 0 \\ 0 & 0 & 1 \end{bmatrix}    (52)

  R_{d1} = R_{rs}^T R^*_r R^{*T}_m R_{md} = \begin{bmatrix} \cosθ_d & \sinθ_d & 0 \\ −\sinθ_d & \cosθ_d & 0 \\ 0 & 0 & 1 \end{bmatrix},    (53)

where θ(t) ∈ R denotes the right-handed rotation angle about the z-axis that aligns F with F_s, and θ_d(t) ∈ R denotes the right-handed rotation angle about the z-axis that aligns F_d with F_s. From the definitions of θ(t) and θ_d(t), it is clear that

  \dot{θ} = ω_c, \qquad \dot{θ}_d = ω_{cd},    (54)

where ω_c(t) and ω_{cd}(t) were introduced in Section II. Based on the fact that R(t), R*(t), R_{md}(t), R*_m(t), R*_r, and R_{rs} are known, it is clear from (52)-(54) that θ(t) and θ_d(t) are known signals that can be used in the subsequent control development. To facilitate the subsequent development, θ(t) and θ_d(t) are assumed to be confined to the following regions:

  −π < θ(t) ≤ π, \qquad −π < θ_d(t) ≤ π.    (55)
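The paper only states that sets of linear equations can be developed from (50) and (51); one standard way to solve them, shown below as a sketch rather than as the authors' implementation, is the direct linear transform (DLT) applied to four or more coplanar correspondences, followed by H = A^{-1} G A. The calibration matrix, the "true" homography, and the pixel coordinates here are hypothetical placeholders, and the decomposition of H into \bar{R}, x_h, n*, and the depth ratios is left to the standard techniques cited in the text.

import numpy as np

def dlt_homography(p_src, p_dst):
    """Estimate G (up to scale) from pixel correspondences p_dst ~ G p_src, cf. eq. (50).
    p_src, p_dst: (N, 3) arrays of homogeneous pixel coordinates, N >= 4."""
    rows = []
    for (u1, v1, w1), (u2, v2, w2) in zip(p_src, p_dst):
        rows.append([0, 0, 0, -w2 * u1, -w2 * v1, -w2 * w1, v2 * u1, v2 * v1, v2 * w1])
        rows.append([w2 * u1, w2 * v1, w2 * w1, 0, 0, 0, -u2 * u1, -u2 * v1, -u2 * w1])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)      # null vector = homography up to a scalar multiple

# Hypothetical calibration and data (placeholders, not values from the paper).
A = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
p_star = np.array([[300.0, 200.0, 1.0], [340.0, 200.0, 1.0],
                   [340.0, 260.0, 1.0], [300.0, 260.0, 1.0]])     # reference pixels p*_i
G_true = np.array([[1.02, 0.05, 3.0], [-0.04, 0.98, -2.0], [1e-4, -2e-4, 1.0]])
p_cur = p_star @ G_true.T
p_cur /= p_cur[:, 2:3]               # synthetic current-image pixels p_i

G_est = dlt_homography(p_star, p_cur)
G_est /= G_est[2, 2]                 # fix the free scale (which also absorbs alpha_i)
assert np.allclose(G_est, G_true / G_true[2, 2], atol=1e-6)
H_est = np.linalg.inv(A) @ G_est @ A  # Euclidean homography H = A^{-1} G A, cf. (50)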

IV. CONTROL OBJECTIVE

The objective is to develop a visual servo controller that ensures that the coordinate system F tracks the time-varying trajectory of F_d (i.e., \bar{m}_i(t) measured in I tracks \bar{m}_{mdi}(t) measured in I_M). Based on the Euclidean reconstruction given in (35)-(42), this objective can be stated as \bar{m}'_1(t) → \bar{m}'_{md1}(t). (Any feature point O_i can be utilized in the subsequent development; however, to reduce the notational complexity, the image point O_1 is selected, and hence the subscript 1 is utilized in lieu of i.) To quantify the control objective, the translation and rotation tracking error, denoted by e(t) ≜ [e_1(t), e_2(t), e_3(t)]^T ∈ R^3, is defined as [4]

  e_1 ≜ η_1 − η_{d1}, \qquad e_2 ≜ η_2 − η_{d2}, \qquad e_3 ≜ θ − θ_d,    (56)

where θ(t) and θ_d(t) are introduced in (52) and (53), respectively, and the auxiliary signals η(t) ≜ [η_1(t), η_2(t), η_3(t)]^T, η_d(t) ≜ [η_{d1}(t), η_{d2}(t), η_{d3}(t)]^T ∈ R^3 are defined as

  η(t) ≜ \frac{1}{z^*_{r1}} R^T(t) R^*(t) R^{*T}_r \bar{m}'_1(t), \qquad η_d(t) ≜ \frac{1}{z^*_{r1}} R_{md}^T(t) R^*_m(t) R^{*T}_r \bar{m}'_{md1}(t).    (57)

Also, in (24), the unit normal n_r can be expressed as

  n_r = R^*_r R^{*T}(t) R(t) [0 \;\; 0 \;\; 1]^T = R^*_r R^{*T}_m(t) R_{md}(t) [0 \;\; 0 \;\; 1]^T.    (58)

From (24), (57), and (58), it can be determined that

  η_3 = η_{d3} = \frac{d_r}{z^*_{r1}}.    (59)

The expressions in (32)-(35), (37)-(42), (47), and (49) can be used to rewrite η(t) and η_d(t) in terms of the measurable signals α_1(t), α_{r1}(t), α_{rm1}(t), α_{md1}(t), R(t), R*(t), R*_r, R_{md}(t), R*_m(t), p_1(t), and p_{md1}(t) as

  η(t) = \frac{α_{r1}}{α_1} R^T(t) R^*(t) R^{*T}_r H'_r A^{-1} p_1, \qquad η_d(t) = \frac{α_{rm1}}{α_{md1}} R_{md}^T(t) R^*_m(t) R^{*T}_r H'_{rm} A^{-1} p_{md1}.    (60)

Based on (56), (60), and the fact that θ(t) and θ_d(t) are measurable, it is clear that e(t) is measurable. By examining (56)-(59), it can be shown that the control objective is achieved if ||e(t)|| → 0. Specifically, if e_3(t) → 0, then it is clear from (56) that R_1(t) → R_{d1}(t). If e_1(t) → 0 and e_2(t) → 0, then from (56) and (59) it is clear that η(t) → η_d(t). Given that R_1(t) → R_{d1}(t) and that η(t) → η_d(t), then (57) can be used to conclude that \bar{m}'_1(t) → \bar{m}'_{md1}(t). If \bar{m}'_1(t) → \bar{m}'_{md1}(t) and R_1(t) → R_{d1}(t), then (25)-(31) and (40)-(42) can be used to prove that \bar{m}_i(t) → \bar{m}_{mdi}(t).
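The error signals in (56) are assembled entirely from image-measurable quantities via (60). A small helper illustrating that assembly is sketched below; the depth ratios, rotation matrices, homography, calibration matrix, and pixel coordinates passed in are placeholders standing in for the quantities reconstructed in Section III, so this shows only the order of operations, not the authors' implementation.

import numpy as np

def eta_from_images(alpha_r1, alpha_1, R, R_star, R_r_star, H_r_prime, A, p_1):
    """Auxiliary signal eta of eq. (60) from measurable quantities (placeholder inputs)."""
    return (alpha_r1 / alpha_1) * (R.T @ R_star @ R_r_star.T @ H_r_prime @ np.linalg.inv(A) @ p_1)

def tracking_error(eta, eta_d, theta, theta_d):
    """Translation/rotation tracking error e = [e_1, e_2, e_3]^T of eq. (56)."""
    return np.array([eta[0] - eta_d[0], eta[1] - eta_d[1], theta - theta_d])

# Placeholder usage: identity rotations/homography and synthetic pixel coordinates.
A = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
I3 = np.eye(3)
eta = eta_from_images(1.1, 0.9, I3, I3, I3, I3, A, np.array([330.0, 250.0, 1.0]))
eta_d = eta_from_images(1.0, 1.0, I3, I3, I3, I3, A, np.array([325.0, 248.0, 1.0]))
e = tracking_error(eta, eta_d, theta=0.10, theta_d=0.12)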

A. Open-loop Error System

To facilitate the development of the open-loop tracking error system, we take the time derivative of (57) as

  \dot{η} = \frac{v}{z^*_{r1}} + \left[η − \frac{s_1}{z^*_{r1}}\right]_\times ω,    (61)

where (4) was utilized, along with the following relationships:

  \dot{\bar{m}}'_1 = R' v + R' [ω]_\times s_1, \qquad \dot{R}' = R' [ω]_\times,    (62)

and v(t), ω(t) ∈ R^3 denote the respective linear and angular velocity of the UGV expressed in F as

  v ≜ [v_c \;\; 0 \;\; 0]^T, \qquad ω ≜ [0 \;\; 0 \;\; ω_c]^T.    (63)

In (61), the notation [·]_\times denotes the 3×3 skew-symmetric matrix form of the vector argument. Without loss of generality, we assume that the feature point O_1 is located at the origin of the coordinate frame attached to the UGV, so that s_1 = [0, 0, 0]^T. Based on (63), we can rewrite (61) as

  \dot{η}_1 = \frac{v_c}{z^*_{r1}} + η_2 ω_c, \qquad \dot{η}_2 = −η_1 ω_c.    (64)

Since the desired trajectory is assumed to be generated in accordance with the UGV motion constraints, an expression similar to (64) can be developed as

  \dot{η}_{d1} = \frac{v_{cd}}{z^*_{r1}} + η_{d2} ω_{cd}, \qquad \dot{η}_{d2} = −η_{d1} ω_{cd},    (65)

where v_{cd}(t), ω_{cd}(t) ∈ R denote the desired linear and angular velocity of F_d, respectively. After taking the time derivative of (56) and utilizing (54) and (64), the following open-loop error system can be obtained:

  z^*_{r1} \dot{e}_1 = v_c + z^*_{r1} (η_2 ω_c − \dot{η}_{d1})
  \dot{e}_2 = −η_1 ω_c + η_{d1} \dot{θ}_d
  \dot{e}_3 = ω_c − \dot{θ}_d.    (66)

To facilitate the subsequent development, the auxiliary variable \bar{e}_2(t) ∈ R is defined as

  \bar{e}_2 ≜ e_2 + η_{d1} e_3.    (67)

After taking the time derivative of (67) and utilizing (66), the following expression is obtained:

  \dot{\bar{e}}_2 = −e_1 ω_c + \dot{η}_{d1} e_3.    (68)

Based on (67), it is clear that if \bar{e}_2(t), e_3(t) → 0, then e_2(t) → 0. Based on this observation and the open-loop dynamics given in (68), the following control development is based on the desire to show that e_1(t), \bar{e}_2(t), and e_3(t) are asymptotically driven to zero.

B. Closed-loop Error System

Based on the open-loop error systems in (66) and (68), the linear and angular velocity control inputs for the UGV are designed as

  v_c ≜ −k_v e_1 + \bar{e}_2 ω_c − \hat{z}^*_{r1} (η_2 ω_c − \dot{η}_{d1})    (69)
  ω_c ≜ −k_ω e_3 + \dot{θ}_d − \dot{η}_{d1} \bar{e}_2,    (70)

where k_v, k_ω ∈ R denote positive, constant control gains. In (69), the parameter estimate \hat{z}^*_{r1}(t) ∈ R is generated by the following differential equation:

  \dot{\hat{z}}^*_{r1} = γ_1 e_1 (η_2 ω_c − \dot{η}_{d1}),    (71)

where γ_1 ∈ R is a positive, constant adaptation gain. After substituting the kinematic control signals designed in (69) and (70) into (66), the closed-loop error systems are

  z^*_{r1} \dot{e}_1 = −k_v e_1 + \bar{e}_2 ω_c + \tilde{z}^*_{r1} (η_2 ω_c − \dot{η}_{d1})
  \dot{\bar{e}}_2 = −e_1 ω_c + \dot{η}_{d1} e_3
  \dot{e}_3 = −k_ω e_3 − \dot{η}_{d1} \bar{e}_2,    (72)

where (68) was utilized, and the depth-related parameter estimation error, denoted by \tilde{z}^*_{r1}(t) ∈ R, is defined as

  \tilde{z}^*_{r1} ≜ z^*_{r1} − \hat{z}^*_{r1}.    (73)
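Because (69)-(71) are purely kinematic, the controller reduces to a few scalar operations per sample. The sketch below is a literal transcription with hypothetical gain values (placeholders, not tuned values from the paper); ω_c is evaluated first since it appears inside both (69) and (71).

import numpy as np

def ugv_controller(e1, e2_bar, e3, eta2, eta_d1_dot, theta_d_dot, z_hat,
                   k_v=1.0, k_w=1.0, gamma1=0.5):
    """Adaptive kinematic tracking controller of eqs. (69)-(71); gains are placeholders."""
    w_c = -k_w * e3 + theta_d_dot - eta_d1_dot * e2_bar                  # eq. (70)
    v_c = -k_v * e1 + e2_bar * w_c - z_hat * (eta2 * w_c - eta_d1_dot)   # eq. (69)
    z_hat_dot = gamma1 * e1 * (eta2 * w_c - eta_d1_dot)                  # eq. (71)
    return v_c, w_c, z_hat_dot

In practice the estimate \hat{z}^*_{r1} would be integrated between samples (e.g., \hat{z}^*_{r1} ← \hat{z}^*_{r1} + Δt · z_hat_dot). Note that the update law only needs to keep the estimate bounded; as shown in the next section, tracking is obtained without requiring the depth estimate itself to converge.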

V. STABILITY ANALYSIS

Theorem 1: The control inputs designed in (69) and (70), along with the adaptive update law defined in (71), ensure asymptotic UGV tracking in the sense that

  \lim_{t→∞} \|e(t)\| = 0,    (74)

provided the desired trajectory is selected so that

  \lim_{t→∞} |\dot{η}_{d1}(t)\, \bar{e}_2(t)| = 0 \;\Longrightarrow\; \lim_{t→∞} |\bar{e}_2(t)| = 0.    (75)

Proof: To prove Theorem 1, the non-negative function V(t) ∈ R is defined as

  V ≜ \frac{1}{2} z^*_{r1} e_1^2 + \frac{1}{2} \bar{e}_2^2 + \frac{1}{2} e_3^2 + \frac{1}{2γ_1} \tilde{z}^{*2}_{r1}.    (76)

The following simplified expression can be obtained by taking the time derivative of (76), substituting the closed-loop dynamics from (72) into the resulting expression, and then cancelling common terms:

  \dot{V} = −k_v e_1^2 + e_1 \tilde{z}^*_{r1} (η_2 ω_c − \dot{η}_{d1}) − k_ω e_3^2 − \frac{1}{γ_1} \tilde{z}^*_{r1} \dot{\hat{z}}^*_{r1}.    (77)

After substituting (71) into (77), the following expression can be obtained:

  \dot{V} = −k_v e_1^2 − k_ω e_3^2.    (78)

From (76) and (78), it is clear that e_1(t), \bar{e}_2(t), e_3(t), \tilde{z}^*_{r1}(t) ∈ L_∞ and that e_1(t), e_3(t) ∈ L_2. Since \tilde{z}^*_{r1}(t) ∈ L_∞ and z^*_{r1} is a constant, the expression in (73) can be used to determine that \hat{z}^*_{r1}(t) ∈ L_∞. From the assumption that η_{d1}(t), \dot{η}_{d1}(t), η_{d2}(t), θ_d(t), and \dot{θ}_d(t) are constructed as bounded functions, and the fact that \bar{e}_2(t), e_3(t) ∈ L_∞, the expressions in (56), (67), and (70) can be used to prove that e_2(t), η_1(t), η_2(t), θ(t), ω_c(t) ∈ L_∞. Based on the previous development, the expressions in (69), (71), and (72) can be used to conclude that v_c(t), \dot{\hat{z}}^*_{r1}(t), \dot{e}_1(t), \dot{\bar{e}}_2(t), \dot{e}_3(t) ∈ L_∞. Based on the fact that e_1(t), e_3(t), \dot{e}_1(t), \dot{e}_3(t) ∈ L_∞ and that e_1(t), e_3(t) ∈ L_2, Barbalat's lemma can be employed to prove that

  |e_1(t)|, |e_3(t)| → 0.    (79)

From (79) and the fact that the signal \dot{η}_{d1}(t) \bar{e}_2(t) is uniformly continuous (i.e., \dot{η}_{d1}(t), \ddot{η}_{d1}(t), \bar{e}_2(t), \dot{\bar{e}}_2(t) ∈ L_∞), the Extended Barbalat's Lemma [4] can be applied to the last equation in (72) to prove that

  |\dot{e}_3(t)| → 0 \quad and \quad |\dot{η}_{d1}(t)\, \bar{e}_2(t)| → 0.    (80)

Based on (75) and the definition of \bar{e}_2(t) given in (67), the results in (79) and (80) can be used to conclude that |e_2(t)| → 0, which completes the proof.
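As a numerical sanity check on Theorem 1, the closed-loop error system (72) and the update law (71) can be integrated directly; with bounded desired signals satisfying (75), e_1(t) and e_3(t) should decay while \hat{z}^*_{r1}(t) merely remains bounded. The sketch below uses simple Euler integration, and all gains, initial conditions, and desired-trajectory signals are made-up placeholders rather than values from the paper.

import numpy as np

# Placeholder gains, true depth, and desired-trajectory signals (not from the paper).
k_v, k_w, gamma1 = 2.0, 2.0, 0.5
z_star = 5.0                                    # unknown constant depth z*_r1
eta_d1_dot = lambda t: 0.1 * np.cos(0.5 * t)    # bounded desired signal (placeholder)
theta_d_dot = lambda t: 0.1
eta2 = lambda t: 0.1 * np.cos(0.3 * t)          # bounded stand-in for the measured eta_2(t)

e1, e2b, e3, z_hat = 0.5, -0.3, 0.4, 1.0        # initial errors and depth estimate
dt = 1e-3
for k in range(int(60.0 / dt)):
    t = k * dt
    w_c = -k_w * e3 + theta_d_dot(t) - eta_d1_dot(t) * e2b        # eq. (70)
    phi = eta2(t) * w_c - eta_d1_dot(t)                           # regressor in (69), (71)
    v_c = -k_v * e1 + e2b * w_c - z_hat * phi                     # eq. (69)
    de1 = (v_c + z_star * phi) / z_star                           # eq. (66), first equation
    de2b = -e1 * w_c + eta_d1_dot(t) * e3                         # eq. (68)
    de3 = -k_w * e3 - eta_d1_dot(t) * e2b                         # eq. (72), last equation
    dz = gamma1 * e1 * phi                                        # eq. (71)
    e1, e2b, e3, z_hat = e1 + dt * de1, e2b + dt * de2b, e3 + dt * de3, z_hat + dt * dz

print(e1, e3)   # expected to be near zero per Theorem 1, while e2b and z_hat stay bounded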

VI. CONCLUSION

A cooperative visual servo tracking controller is proven to enable autonomous control of a UGV using feedback from a moving airborne camera. A general tracking problem is developed in this paper in which a desired video is recorded by a moving airborne camera viewing a moving UGV. When the controller is invoked, the current UGV is commanded to follow the same desired trajectory, even though the camera observing the UGV has a different motion than when the desired video was recorded. A series of daisy chained homographies is used to resolve the relative velocity issue incurred when using a moving airborne camera to determine information about a moving UGV. An adaptive Lyapunov-based control strategy is employed to actively compensate for uncertain depth measurements and the lack of an object model.

REFERENCES

[1] J. Chen, D. M. Dawson, W. E. Dixon, and A. Behal, "Adaptive Homography-Based Visual Servo Tracking for Fixed and Camera-in-Hand Configurations," IEEE Transactions on Control Systems Technology, accepted, to appear.
[2] J. Chen, W. E. Dixon, D. M. Dawson, and V. Chitrakaran, "Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera," Proc. of the IEEE Conference on Control Applications, Taipei, Taiwan, pp. 1061-1066, 2004.
[3] J. Chen, W. E. Dixon, D. M. Dawson, and M. McIntire, "Homography-Based Visual Servo Tracking Control of a Wheeled Mobile Robot," Proc. of the IEEE International Conference on Intelligent Robots and Systems, Las Vegas, Nevada, pp. 1814-1819, October 2003; see also IEEE Transactions on Robotics, accepted, to appear.
[4] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, Nonlinear Control of Wheeled Mobile Robots, Springer-Verlag, 2001.
[5] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, "Adaptive Tracking Control of a Wheeled Mobile Robot via an Uncalibrated Camera System," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 31, No. 3, June 2001.
[6] Y. Fang, D. M. Dawson, W. E. Dixon, and P. Chawda, "Homography-Based Visual Servoing of Wheeled Mobile Robots," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 35, No. 5, pp. 1041-1050, 2005.
[7] S. S. Mehta, W. E. Dixon, D. MacArthur, and C. D. Crane, "Visual Servo Control of an Unmanned Ground Vehicle via a Moving Airborne Monocular Camera," Proc. of the IEEE American Control Conference, Minneapolis, Minnesota, pp. 5276-5281, 2006.
