Multi-Reference Visual Servo Control of an Unmanned Ground Vehicle

S. S. Mehta, G. Hu, A. P. Dani, W. E. Dixon
Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611-6250

{siddhart,gqhu,ashwin31,wdixon}@ufl.edu

A cooperative visual servo regulation controller is developed in this paper with the objective of positioning an unmanned ground vehicle (UGV) at a desired position and orientation using image feedback from a moving airborne monocular camera. By fusing an innovative daisy chaining strategy with a geometric reconstruction method, the Euclidean positions of the UGV and the reference objects are identified, providing simultaneous localization and mapping (SLAM) of the UGV. Asymptotic regulation of the UGV is proved through a Lyapunov-based analysis.

Introduction

The capabilities and roles of unmanned ground vehicles (UGVs) are evolving and require new concepts for autonomous guidance, navigation, and control. A significant aspect of the control problem is identification of the Euclidean position and orientation (i.e., pose) of the UGV for autonomous navigation and control. Advances in image extraction/interpretation technology make a vision system a promising means of addressing the navigation, control, and localization problem. Several researchers have explored the use of a camera for autonomous vehicle feedback. Some examples of pure image-based visual servo control of UGVs include [1, 6, 8, 17-19]. Pure image-based visual servo control results have a known problem with potential singularities in the image-Jacobian, and, since the feedback is only in the image space, these methods may require impossible Euclidean motions. To eliminate these issues, homography-based visual servo control results have been developed for UGVs [3, 4, 11]. A visual servo controller was developed in [11] to asymptotically regulate the pose of a UGV to a constant pose defined by a goal image, where the camera was mounted on board the UGV (i.e., the camera-in-hand problem). Since many practical applications require a robotic system to move along a predefined or dynamically changing trajectory, the regulation result presented by Fang et al. [11] was extended by Chen et al. [3] to address the UGV tracking problem. A simple localization scheme based on image feedback from a global camera is presented by Guo et al. [15]. A multiple mobile robot navigation method is examined by Hada et al. [16] using an indoor global positioning system, and UGV navigation and localization are achieved by using cameras distributed in the robot working domain. Image feedback from a moving airborne monocular camera is used in [22-24] to provide pose measurements for a moving autonomous system with respect to a stationary reference object. In particular, Mehta et al. [22, 23] developed an innovative daisy chaining multi-view photogrammetry visual servo control approach for regulation and tracking control of the UGV. In [24], geometric relationships along with a quaternion-based error system are developed for a six degrees of freedom, fully actuated planar patch. For the results in [22-24], the pose measurements are taken with respect to a stationary reference object, and restrictions are imposed on the area of operation/motion of the UGV so that the reference object never leaves the field of view of the on-board camera. Also, the method presented by Mehta et al. [22, 23] assumes that the known Euclidean distances of the feature points on the UGV and on the stationary reference object are identical, which imposes practical limitations on the implementation of the visual servo controller.

This research is supported in part by the NSF CAREER AWARD 0547448, NSF SGER 0738091, AFOSR contract numbers F49620-03-1-0381 and F49620-03-1-0170, AFRL contract number FA4819-05-D-0011, Department of Energy URPR program grant number DE-FG04-86NE37967, and by research grant No. US-3715-05 from BARD, the United States - Israel Binational Agricultural Research and Development Fund.


Figure 1. Camera to reference object relationships.

The result in this paper further develops the daisy chaining method introduced by Mehta et al. [22-24] to achieve asymptotic regulation of the UGV under the assumption that a given reference object can leave the field of view while another reference object enters the field of view. The contribution of this paper is that, since the controller development is based on the ability to daisy chain multiple reference objects, the restrictions on the area of operation are removed. That is, since the development in this paper does not require the airborne camera to maintain a view of a static reference object, the airborne camera/UGV pair is able to navigate over an arbitrarily large area. The presented work also relaxes the assumption that the Euclidean distances of the features on the UGV and on the reference object are identical. Leveraging the geometric reconstruction method proposed by Dupree et al. [10], the time-varying Euclidean position of the UGV and the stationary positions of the reference objects can also be identified with respect to the global coordinate system. Hence, simultaneous localization and mapping (SLAM) of the UGV can be obtained, with applications to path planning, real-time trajectory generation, obstacle avoidance, multi-vehicle coordination control, task assignment, etc. Simulation results are provided to demonstrate the performance of the daisy chaining based multi-reference regulation control of the UGV.

I. Geometric Model

Consider a single camera that is navigating (e.g., by a remotely controlled aircraft) above^a the planar motion of an unmanned ground vehicle (UGV), as depicted in Fig. 1 and Fig. 2. The moving coordinate frame I is attached to the airborne camera, and the moving coordinate frame F is attached to the UGV at the center of the rear wheel axis (for simplicity and without loss of generality). The UGV is represented in the camera image by four feature points that are coplanar and not collinear. The Euclidean distance (i.e., s_1i ∈ R³ ∀i = 1, 2, 3, 4) from the origin of F to one of the feature points is assumed to be known. The plane defined by the UGV motion (i.e., the plane defined by the xy-axis of F) and the UGV feature points is denoted as π. The linear velocity of the UGV along the x-axis is denoted by v_c(t) ∈ R, and the angular velocity ω_c(t) ∈ R is about the z-axis of F (see Fig. 1). While viewing the feature points of the UGV, the camera is assumed to also view four additional coplanar and noncollinear feature points of a stationary reference object, such that at any instant of time along the camera motion trajectory at least one such reference target is in the field of view. The four additional feature points define the plane π*_n in Fig. 1 and Fig. 2. The stationary coordinate frame F*_n (n = 1, 2, ..., m) is attached to the object, where the distance from the origin of the coordinate frame to one of the feature points is assumed to be known, i.e., s_2ni ∈ R³ ∀i = 1, 2, 3, 4. The plane π*_n is assumed to be parallel to the plane π.

^a No assumptions are made with regard to the alignment of the WMR plane of motion and the focal axis of the camera as in [3].


Figure 2. Camera to UGV relationships.

Table 1. List of variables.

    Coordinate frames    Rotation matrix    Translation vector
    F to I               R(t)               x_f(t)
    F*_n to I            R*_n(t)            x*_fn(t)
    I to I_R             R_r(t)             x_fr(t)
    F_d to I_R           R_rd               x_frd
    F*_n to I_R          R*_rn              x*_frn
    F_r to I_R           R_rr               x_frr
    F_r to I             R''(t)             x''_f(t)
    F to I_R             R'_r(t)            x'_fr(t)
    F to F_r             R'(t)              x'_f(t)
    F to F_d             R'_rd(t)           x'_frd(t)

The feature points that define π*_1, corresponding to the reference object F*_1 (i.e., F*_n with n = 1), are also assumed to be visible when the camera is a priori located coincident with the position and orientation (i.e., pose) of the stationary coordinate frame I_R. The stationary pose F_r corresponds to a snapshot of the UGV (e.g., at the starting location) visible from the reference camera coordinate system I_R. When the camera is coincident with I_R, the desired pose of the UGV F_d is assumed to be known. When the UGV is located at the desired pose, the coordinate frame F is coincident with the coordinate frame F_d. To relate the coordinate systems, consider the coordinate frame relationships given in Table 1. From the geometry between the coordinate frames depicted in Fig. 1 and Fig. 2, the following relationships can be developed:

\bar{m}_i = x_f + R\,s_{1i}, \qquad \bar{m}^{*}_{ni} = x^{*}_{fn} + R^{*}_{n}\,s_{2ni}   (1)

\bar{m}'_i = x''_f + R''\,s_{1i}, \qquad \bar{m}_{rdi} = x_{frd} + R_{rd}\,s_{1i}   (2)

\bar{m}^{*}_{rni} = x^{*}_{frn} + R^{*}_{rn}\,s_{2ni}, \qquad \bar{m}'_{ri} = x'_{fr} + R'_{r}\,s_{1i}   (3)

\bar{m}_{ri} = x_{frr} + R_{rr}\,s_{1i}.   (4)
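As a concrete illustration of how relationships of the form (1)-(4) are used, the following sketch (Python/numpy) expresses a body-fixed feature point in the camera frame; the pose values and the feature offset are hypothetical and serve only as a numerical example of the geometric model, not as part of the authors' implementation.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for a right-handed rotation of theta radians about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical pose of the UGV frame F with respect to the camera frame I.
R = rot_z(np.deg2rad(20.0))          # R(t): rotation from F to I
x_f = np.array([0.5, -0.3, 4.0])     # x_f(t): translation of F expressed in I [m]

# Known offset s_1i from the origin of F to the i-th feature point, as in (1).
s_1 = np.array([0.1, 0.05, 0.0])

# Euclidean coordinates of the feature point expressed in I: m_bar_i = x_f + R s_1i.
m_bar = x_f + R @ s_1
print(m_bar)
```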

In (1)-(4), m̄_i(t), m̄'_i(t), m̄*_ni(t) ∈ R³ denote the Euclidean coordinates of the feature points of the current UGV (i.e., F), the constant reference UGV position, and the stationary reference object π*_n (n = 1, 2, ..., m), respectively, expressed in I as

\bar{m}_i(t) = [\,x_i(t)\ \ y_i(t)\ \ z_i(t)\,]^{T}   (5)

\bar{m}'_i(t) = [\,x'_i(t)\ \ y'_i(t)\ \ z'_i(t)\,]^{T}   (6)

\bar{m}^{*}_{ni}(t) = [\,x^{*}_{ni}(t)\ \ y^{*}_{ni}(t)\ \ z^{*}_{ni}(t)\,]^{T},   (7)

m̄_ri, m̄'_ri(t), m̄_rdi ∈ R³ denote the Euclidean coordinates of the constant reference UGV, the actual time-varying current UGV, and the constant desired UGV, respectively, expressed in I_R as

\bar{m}_{ri} = [\,x_{ri}\ \ y_{ri}\ \ z_{ri}\,]^{T}   (8)

\bar{m}'_{ri}(t) = [\,x'_{ri}(t)\ \ y'_{ri}(t)\ \ z'_{ri}(t)\,]^{T}   (9)

\bar{m}_{rdi} = [\,x_{rdi}\ \ y_{rdi}\ \ z_{rdi}\,]^{T},   (10)

and m̄*_rni ∈ R³ denotes the constant Euclidean coordinates of the feature points on the stationary reference plane π*_n expressed in I_R as

\bar{m}^{*}_{rni} = [\,x^{*}_{rni}\ \ y^{*}_{rni}\ \ z^{*}_{rni}\,]^{T}.   (11)

For simplicity and without loss of generality, we consider two reference targets F*_n (where n = 1, 2). After some algebraic manipulation, the expressions for m̄*_rni, m̄'_ri(t), and m̄'_i(t) in (1)-(4) can be rewritten as

\bar{m}^{*}_{r1i} = x_{fr} + R_r\,\bar{m}^{*}_{1i}, \qquad \bar{m}^{*}_{r2i} = x_{fr} + R_r\,\bar{m}^{*}_{2i}   (12)

\bar{m}'_i = x'_f + R'\,\bar{m}_i, \qquad \bar{m}'_{ri} = x_{fr} + R_r\,\bar{m}'_i = x'_{fr} + R'_{r}\,\bar{m}_i   (13)

\bar{m}_{rdi} = x'_{frd} + R'_{rd}\,\bar{m}'_{ri}   (14)

where R_r(t), R'(t), R'_rd(t) ∈ R^{3×3} and x_fr(t), x'_f(t), x'_frd(t) ∈ R³ denote new rotation and translation variables given as follows:

R_r = R^{*}_{rn} R^{*T}_{n}, \qquad x_{fr} = x^{*}_{frn} - R_r\,x^{*}_{fn}   (15)

R' = R'' R^{T}, \qquad x'_f = x''_f - R'\,x_f   (16)

R'_{rd} = R_{rd}\,R'^{T}_{r}, \qquad x'_{frd} = x_{frd} - R'_{rd}\,x'_{fr}.   (17)

By using the projective relationships

d^{*}_{1} = n^{*T}_{1}\,\bar{m}^{*}_{1i}, \qquad d^{*}_{2} = n^{*T}_{2}\,\bar{m}^{*}_{2i}   (18)

d = n^{T}\,\bar{m}_i, \qquad d'_r = n'^{T}_{r}\,\bar{m}'_{ri}   (19)

the relationships in (12)-(14) can be expressed as

\bar{m}^{*}_{r1i} = \left( R_r + \frac{x_{fr}\,n^{*T}_{1}}{d^{*}_{1}} \right) \bar{m}^{*}_{1i}   (20)

\bar{m}^{*}_{r2i} = \left( R_r + \frac{x_{fr}\,n^{*T}_{2}}{d^{*}_{2}} \right) \bar{m}^{*}_{2i}   (21)

\bar{m}'_i = \left( R' + \frac{x'_f\,n^{T}}{d} \right) \bar{m}_i   (22)

\bar{m}_{rdi} = \left( R'_{rd} + \frac{x'_{frd}\,n'^{T}_{r}}{d'_r} \right) \bar{m}'_{ri}.   (23)

In (20)-(23), d*_1(t), d*_2(t), d(t), d'_r(t) > ε for some positive constant ε ∈ R, and n*_1(t), n*_2(t), n(t), n'_r(t) ∈ R³ denote the time-varying unit normals to the planes π*_1, π*_2, and π, respectively.
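The planar structure exploited in (20)-(23) can be checked numerically. The sketch below uses a hypothetical plane, pose, and set of coplanar feature points and verifies that, for points on the plane, the rigid-body map and the plane-induced map coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference plane expressed in I: unit normal n and distance d = n^T m_bar,
# as in (18)-(19).
n = np.array([0.0, 0.0, 1.0])                 # unit normal to the plane in I
d = 5.0                                        # distance from I to the plane along n
pts_plane = rng.uniform(-1.0, 1.0, (4, 2))     # four coplanar feature points
m_bar = np.c_[pts_plane, np.full(4, d)]        # coordinates in I, satisfying n^T m_bar = d

# Hypothetical relative pose between I and I_R (R_r, x_fr in the paper's notation).
theta = np.deg2rad(15.0)
R_r = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
x_fr = np.array([0.4, -0.2, 0.6])

# Rigid-body mapping versus the planar relation of the form (20): (R_r + x_fr n^T / d) m_bar.
H_euclidean = R_r + np.outer(x_fr, n) / d
for m in m_bar:
    direct = R_r @ m + x_fr
    via_plane = H_euclidean @ m
    assert np.allclose(direct, via_plane)      # identical because n^T m / d = 1 on the plane
print("Planar relation verified for coplanar points.")
```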

II. Euclidean Reconstruction

The relationships given by (20)-(23) provide a means to quantify a translation and rotation error between the different coordinate systems. Comparisons are made between the current UGV image and the reference image in terms of I, between the a priori known desired UGV pose and the current pose in terms of I_R, and between the images of the stationary reference object in terms of I and I_R. To facilitate the subsequent development, the normalized Euclidean coordinates of the feature points for the current UGV image, the reference UGV image, and the reference object images can be expressed in terms of I as m_i(t), m'_i(t), m*_1i(t), and m*_2i(t) ∈ R³, respectively, as

m_i = \frac{\bar{m}_i}{z_i}, \qquad m'_i = \frac{\bar{m}'_i}{z'_i}   (24)

m^{*}_{1i} = \frac{\bar{m}^{*}_{1i}}{z^{*}_{1i}}, \qquad m^{*}_{2i} = \frac{\bar{m}^{*}_{2i}}{z^{*}_{2i}}.   (25)

Similarly, the normalized Euclidean coordinates of the feature points for the current UGV, the goal UGV, and the reference object image can be expressed in terms of I_R as m'_ri(t), m_rdi, m*_r1i, and m*_r2i ∈ R³, respectively, as

m'_{ri} = \frac{\bar{m}'_{ri}}{z'_{ri}}, \qquad m_{rdi} = \frac{\bar{m}_{rdi}}{z_{rdi}}   (26)

m^{*}_{r1i} = \frac{\bar{m}^{*}_{r1i}}{z^{*}_{r1i}}, \qquad m^{*}_{r2i} = \frac{\bar{m}^{*}_{r2i}}{z^{*}_{r2i}}.   (27)

From the expressions given in (20), (25), and (27), the rotation and translation between the coordinate systems I and I_R can now be related in terms of the normalized Euclidean coordinates of the reference object F*_1 as [22]

m^{*}_{r1i} = \alpha_{ri}\,\left( R_r + x_{hr}\,n^{*T}_{1} \right) m^{*}_{1i}, \qquad \alpha_{ri} = \frac{z^{*}_{1i}}{z^{*}_{r1i}}, \quad H_r = R_r + x_{hr}\,n^{*T}_{1}.   (28)

At a future instant in time, when the static reference object F*_2 is in the field of view of the current camera (i.e., I) and the daisy chaining method has been used to relate the camera frames I and I_R in terms of the reference object F*_2, then (21), (25), and (27) can be used to relate the rotation and translation between I and I_R in terms of the normalized Euclidean coordinates of the reference object F*_2 as^b

m^{*}_{r2i} = \alpha_{ri}\,\left( R_r + x_{hr}\,n^{*T}_{2} \right) m^{*}_{2i}, \qquad \alpha_{ri} = \frac{z^{*}_{2i}}{z^{*}_{r2i}}   (29)

where m*_r2i ∈ R³ represents virtual normalized Euclidean coordinates, since the stationary reference object F*_2 is not in the field of view of the stationary reference camera I_R. The relationship between F and F_r can be expressed as [22]

m'_i = \alpha'_i\,\left( R' + x'_h\,n^{T} \right) m_i, \qquad \alpha'_i = \frac{z_i}{z'_i}, \quad H' = R' + x'_h\,n^{T}.   (30)

Similarly, using (23) and (26), the rotation and translation between the coordinate systems F and F_d can now be related in terms of the normalized Euclidean coordinates of the UGV expressed in I_R as

m_{rdi} = \alpha_{rdi}\,\left( R'_{rd} + x'_{hrd}\,n'^{T}_{r} \right) m'_{ri}, \qquad \alpha_{rdi} = \frac{z'_{ri}}{z_{rdi}}, \quad H_{rd} = R'_{rd} + x'_{hrd}\,n'^{T}_{r}.   (31)

^b The homography relationship in (29) relates the camera frames I and I_R utilizing the static reference object F*_2; however, the given development can be generalized for any reference object F*_n (n = 2, 3, ..., m).
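The structure of the relation (28) can be illustrated with a small numerical sketch; all quantities below are hypothetical and are chosen only to show how the depth ratio α_ri and the Euclidean homography H_r enter the relation between normalized coordinates.

```python
import numpy as np

# Hypothetical point on the plane pi*_1 expressed in I, and hypothetical relative pose
# (R_r, x_fr) between I and I_R.
n1 = np.array([0.0, 0.0, 1.0]); d1 = 5.0
m_bar_1 = np.array([0.3, -0.7, d1])                  # feature point of F*_1 expressed in I
theta = np.deg2rad(15.0)
R_r = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
x_fr = np.array([0.4, -0.2, 0.6])
m_bar_r1 = R_r @ m_bar_1 + x_fr                      # same point expressed in I_R

# Normalized Euclidean coordinates as in (24)-(27) and the homography relation (28).
m_1 = m_bar_1 / m_bar_1[2]
m_r1 = m_bar_r1 / m_bar_r1[2]
x_hr = x_fr / d1                                     # scaled translation, cf. (32)
H_r = R_r + np.outer(x_hr, n1)                       # Euclidean homography
alpha_r = m_bar_1[2] / m_bar_r1[2]                   # depth ratio z*_1i / z*_r1i
assert np.allclose(m_r1, alpha_r * H_r @ m_1)        # eq. (28)
print("alpha_r =", alpha_r)
```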


In (28)-(31), α'_i(t), α_rdi(t), α_ri(t) ∈ R denote depth ratios, H'(t), H_rd(t), H_r(t) ∈ R^{3×3} denote Euclidean homographies [13], and x'_h(t), x'_hrd(t), x_hr(t) ∈ R³ denote scaled translation vectors that are defined as follows:

x'_h = \frac{x'_f}{d}, \qquad x'_{hrd} = \frac{x'_{frd}}{d'_r}, \qquad x_{hr} = \frac{x_{fr}}{d^{*}_{1}}   (32)

where the scaled translation x_hr(t) in (32) is obtained when the relationship between I and I_R is expressed in terms of the static reference object F*_1, and

x_{hr} = \frac{x_{fr}}{d^{*}_{2}}.   (33)

In (33), the scaled translation x_hr(t) is obtained when the static reference object F*_2 is in the field of view of the current camera frame (i.e., I) and the daisy chaining strategy has established a connection between the camera frames I and I_R in terms of the reference object F*_2. Each Euclidean feature point will have a projected pixel coordinate expressed in terms of I as

p_i = [\,u_i\ \ v_i\ \ 1\,]^{T}, \qquad p^{*}_{1i} = [\,u^{*}_{1i}\ \ v^{*}_{1i}\ \ 1\,]^{T}   (34)

p^{*}_{2i} = [\,u^{*}_{2i}\ \ v^{*}_{2i}\ \ 1\,]^{T}   (35)

where p_i(t), p*_1i(t), and p*_2i(t) ∈ R³ represent the image-space coordinates of the time-varying feature points of the UGV and the reference objects F*_1 and F*_2, respectively, and u_i(t), v_i(t), u*_1i(t), v*_1i(t), u*_2i(t), v*_2i(t) ∈ R. Similarly, the projected pixel coordinates of the Euclidean features in the reference image can be expressed in terms of I_R as

p^{*}_{r1i} = [\,u^{*}_{r1i}\ \ v^{*}_{r1i}\ \ 1\,]^{T}   (36)

where p*_r1i ∈ R³ represents the constant image-space coordinates of the stationary reference object F*_1 and u*_r1i, v*_r1i ∈ R. To calculate the Euclidean homographies given in (28)-(31) from pixel information, the projected pixel coordinates are related to m_i(t), m*_1i(t), m*_2i(t), and m*_r1i by the pin-hole camera model as

p_i = A\,m_i, \qquad p^{*}_{1i} = A\,m^{*}_{1i}   (37)

p^{*}_{2i} = A\,m^{*}_{2i}, \qquad p^{*}_{r1i} = A\,m^{*}_{r1i}.   (38)
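The pin-hole relations (37)-(38) amount to a multiplication by the calibration matrix A. The sketch below uses the calibration matrix later given in (52) and a hypothetical feature point; since A is invertible, the normalized coordinates are recovered by solving A m = p.

```python
import numpy as np

# Calibration matrix matching (52); the feature point is a hypothetical example.
A = np.array([[750.0, 0.0, 512.0],
              [0.0, 750.0, 512.0],
              [0.0, 0.0, 1.0]])

m_bar = np.array([0.3, -0.7, 5.0])      # Euclidean coordinates expressed in I
m = m_bar / m_bar[2]                    # normalized coordinates, third component = 1
p = A @ m                               # homogeneous pixel coordinates [u, v, 1]^T

# Since A is known, constant, and invertible, normalized coordinates follow from m = A^{-1} p.
m_recovered = np.linalg.solve(A, p)
assert np.allclose(m, m_recovered)
print("pixel coordinates:", p)
```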

Also, the pin-hole camera model relationship for the normalized Euclidean coordinates m*_r2i, m'_i(t), m'_ri(t), and m_rdi can be formulated in terms of the virtual pixel coordinates p*_r2i, p'_i(t), p'_ri(t), and p_rdi as follows:

p^{*}_{r2i} = A\,m^{*}_{r2i}, \qquad p'_i = A\,m'_i   (39)

p'_{ri} = A\,m'_{ri}, \qquad p_{rdi} = A\,m_{rdi}   (40)

where A ∈ R^{3×3} is a known, constant, and invertible intrinsic camera calibration matrix. By using (28)-(31), (37), and (38), the following relationships can be developed:

p^{*}_{r1i} = \alpha_{ri}\,G_r\,p^{*}_{1i}, \qquad p^{*}_{r2i} = \alpha_{ri}\,G_r\,p^{*}_{2i}   (41)

p'_i = \alpha'_i\,G'\,p_i, \qquad p_{rdi} = \alpha_{rdi}\,G_{rd}\,p'_{ri}   (42)

where G_r(t) = [g_rij(t)], G'(t) = [g'_ij(t)], G_rd = [g_rdij] ∀i, j = 1, 2, 3 ∈ R^{3×3} denote projective homographies. Sets of linear equations can be developed from (41) and (42) to determine the projective homographies up to a scalar multiple. Various techniques can be used (e.g., see [14, 26]) to decompose the Euclidean homographies to obtain α_ri(t), α'_i(t), α_rdi(t), x_hr(t), x'_h(t), x'_hrd(t), R_r(t), R'(t), and R'_rd(t). Using the known geometric length s_21i and the unit normal n*_1 obtained from the homography decomposition of (28), the geometric reconstruction method (see the Appendix) can be utilized to obtain m̄*_1i(t) and d*_1(t). Hence, the translation x_fr(t) between I and I_R can be recovered from (32). Also, the Euclidean coordinates m̄_ri of the UGV corresponding to the stationary reference pose can be obtained from geometric reconstruction. Thus, m̄'_i(t) can be computed from (13). Using (24), (30), (39), and (42), the projective homography can be defined between p'_i(t) and p_i(t), which can be decomposed to obtain the unit normal n(t) and hence the time-varying Euclidean coordinates m̄_i(t). The Euclidean coordinates m̄'_ri(t), corresponding to the current UGV position as seen by the reference camera I_R, can be obtained using (13). Therefore, using (26) and (40), a projective homography relationship can be obtained between the current UGV (i.e., F(t)) and the desired UGV (i.e., F_d) in terms of the stationary reference camera coordinate system I_R, given by (42). Further, when the reference object F*_2 appears in the field of view of I, the Euclidean position m̄*_2i(t) can be obtained. Using (12), (25), (27), (29), and (41), a projective homography relationship can be defined between p*_r2i and p*_2i(t), which can be decomposed to obtain the rotation and translation R_r(t), x_fr(t) between I and I_R. Once R_r(t) and x_fr(t) have been determined, the subsequent relationships can be expressed with respect to the new reference object (i.e., F*_2), and the development can be generalized similarly for n = 2, 3, ..., m.
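The paragraph above relies on estimating the projective homographies of (41)-(42) from pixel data. A minimal sketch of the standard direct-linear-transform estimate (up to a scalar multiple) is given below; the function name and the synthetic correspondences are illustrative assumptions, and the decomposition step (e.g., [14, 26]) is not shown.

```python
import numpy as np

def estimate_projective_homography(p_src, p_dst):
    """Estimate G (up to scale) such that p_dst ~ G p_src, from >= 4 correspondences.

    p_src, p_dst: (N, 3) arrays of homogeneous pixel coordinates [u, v, 1].
    This is the standard direct linear transform, shown as a sketch of the
    'sets of linear equations' mentioned above, not the authors' implementation.
    """
    rows = []
    for (u, v, _), (up, vp, _) in zip(p_src, p_dst):
        rows.append([u, v, 1, 0, 0, 0, -up * u, -up * v, -up])
        rows.append([0, 0, 0, u, v, 1, -vp * u, -vp * v, -vp])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)          # null-space vector reshaped to 3x3

# Hypothetical example: synthesize correspondences from a known homography and recover it.
G_true = np.array([[1.02, 0.05, 12.0],
                   [-0.03, 0.98, -7.0],
                   [1e-4, -2e-4, 1.0]])
p_src = np.array([[100.0, 120.0, 1.0], [400.0, 90.0, 1.0],
                  [380.0, 300.0, 1.0], [120.0, 310.0, 1.0]])
p_dst = (G_true @ p_src.T).T
p_dst /= p_dst[:, 2:3]                   # re-normalize homogeneous coordinates

G = estimate_projective_homography(p_src, p_dst)
G /= G[2, 2]                             # fix the scalar ambiguity for comparison
print(np.allclose(G, G_true, atol=1e-6))
```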

III. UGV Kinematics

The kinematic model for the UGV can be determined from Fig. 1 as

\begin{bmatrix} \dot{x}_c \\ \dot{y}_c \\ \dot{\theta}_d \end{bmatrix} = \begin{bmatrix} \cos\theta_d & 0 \\ \sin\theta_d & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v_c \\ \omega_c \end{bmatrix}   (43)

where ẋ_c, ẏ_c, and θ̇_d denote the time derivatives of x_c(t), y_c(t), and θ_d(t) ∈ R, respectively, where x_c(t) and y_c(t) denote the planar position of F expressed in F_d, θ_d(t) denotes the right-handed rotation angle about the z-axis of F that aligns F with F_d, and v_c(t) and ω_c(t) were introduced in Section I and are depicted in Fig. 1 and Fig. 2. Based on the definition for R'_rd(t) provided in the previous development, the rotation from F to F_d can be developed as

R'_{rd} = \begin{bmatrix} \cos\theta_d & -\sin\theta_d & 0 \\ \sin\theta_d & \cos\theta_d & 0 \\ 0 & 0 & 1 \end{bmatrix}.   (44)
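The unicycle model (43) can be propagated numerically as in the sketch below (hypothetical velocities and initial condition, simple forward-Euler integration); it is only an illustration of the kinematics, not the simulation of Section VI.

```python
import numpy as np

def ugv_kinematics(state, v_c, omega_c):
    """Right-hand side of the unicycle model (43); state = [x_c, y_c, theta_d]."""
    _, _, theta_d = state
    return np.array([v_c * np.cos(theta_d),
                     v_c * np.sin(theta_d),
                     omega_c])

# Hypothetical open-loop rollout with a forward-Euler step.
state = np.array([-3.0, 1.2, 0.0])   # hypothetical initial [x_c, y_c, theta_d]
dt = 0.01
for _ in range(500):
    state = state + dt * ugv_kinematics(state, v_c=0.2, omega_c=0.1)
print(state)
```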

Based on the fact that R'_rd(t) can be obtained from (31) and (42), it is clear from (44) that θ_d(t) is a known signal that can be used in the subsequent control development. The geometric relationships between the coordinate frames can be used to develop the following expression:

[\,x_c\ \ y_c\ \ 0\,]^{T} = R'^{T}_{rd}\,( x'_{fr} - x_{frd} ).   (45)

After utilizing (1), (3), (31), and the assumption (as in [2]) that s_11 = [0, 0, 0]^T, the following expression can be obtained:^c

\left[\,\frac{x_c}{z_{rd1}}\ \ \frac{y_c}{z_{rd1}}\ \ 0\,\right]^{T} = R'^{T}_{rd}\left( \frac{z'_{r1}}{z_{rd1}}\,m'_{r1} - m_{rd1} \right).   (46)

Since the terms on the right-hand side of (46) are known or measurable (refer to Section II), x_c(t)/z_rd1 and y_c(t)/z_rd1 can be used in the subsequent control development.

^c Any point s_1i, s_2ni can be utilized in the subsequent development; however, to reduce the notational complexity, we have elected to select the image points s_11 and s_2n1, and hence the subscript 1 is utilized in lieu of i in the subsequent development.

IV. Control Objective

The objective considered in this paper is to develop a visual servo controller that ensures that the pose of the UGV is regulated to a desired pose. A challenging aspect of this problem is that, while one reference object moves out of the field of view, another reference object comes into the field of view of the on-board camera; thus the problem requires switching the reference in order to relate the current UGV position to the desired UGV position. Also, the UGV pose information is supplied by a moving airborne monocular camera system; hence the problem involves both a moving camera and a moving target. Mathematically, the objective can be expressed as the desire to regulate m̄'_ri(t) to m̄_rdi (or, stated otherwise, x'_fr(t) → x_frd and θ_d(t) → 0).


Based on (43)-(45), the objective can be quantified by a regulation error e(t) ∈ R³, defined by the following global diffeomorphism:

\begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix} \triangleq \begin{bmatrix} \cos\theta_d & \sin\theta_d & 0 \\ -\sin\theta_d & \cos\theta_d & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c/z_{rd1} \\ y_c/z_{rd1} \\ \theta_d \end{bmatrix}.   (47)

If ‖e(t)‖ → 0, then (45) and (47) can be used to conclude that x'_fr(t) → x_frd and θ_d(t) → 0. Based on (44) and (46), it is clear that e(t) is measurable.
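A small sketch of how the measurable quantities feed the error definition (47) is given below; the numerical inputs are hypothetical stand-ins for the signals obtained from (44) and (46).

```python
import numpy as np

def regulation_error(theta_d, xc_over_zrd1, yc_over_zrd1):
    """Regulation error e of (47) from the measurable signals theta_d (from (44))
    and x_c/z_rd1, y_c/z_rd1 (from (46))."""
    c, s = np.cos(theta_d), np.sin(theta_d)
    rot = np.array([[c,  s, 0.0],
                    [-s, c, 0.0],
                    [0.0, 0.0, 1.0]])
    return rot @ np.array([xc_over_zrd1, yc_over_zrd1, theta_d])

# Hypothetical measurement values, for illustration only.
print(regulation_error(np.deg2rad(10.0), 0.4, -0.2))
```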

V. Control Development

After taking the time derivative of (47) and using (43), the open-loop error system for e(t) can be determined as

\begin{bmatrix} \dot{e}_1 \\ \dot{e}_2 \\ \dot{e}_3 \end{bmatrix} = \begin{bmatrix} v_c/z_{rd1} + \omega_c e_2 \\ -\omega_c e_1 \\ \omega_c \end{bmatrix}.   (48)

A variety of controllers could now be proposed to yield the regulation result based on the manner in which the open-loop error system in (48) has been developed. Several controllers are provided in [9], including an explanation of how the UGV dynamics could also be easily incorporated into the control design. The following benchmark controller proposed in [25] is an example that can be used to achieve asymptotic regulation:

v_c \triangleq -k_v e_1   (49)

\omega_c \triangleq -k_\omega e_3 + e_2^{2} \sin t   (50)

where k_v, k_ω ∈ R denote positive, constant control gains. After substituting the controller designed in (49) and (50) into (48), the following closed-loop error system is obtained:

z_{rd1}\,\dot{e}_1 = -k_v e_1 + z_{rd1}\,\omega_c e_2, \qquad \dot{e}_2 = -\omega_c e_1, \qquad \dot{e}_3 = -k_\omega e_3 + e_2^{2} \sin t.   (51)

Based on the form of (51), the Lyapunov-based stability analysis arguments given in [22] can be used to prove asymptotic regulation of the UGV.
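For illustration, the benchmark control law (49)-(50) can be written as a small function; the default gains are those later used in the simulation, (53).

```python
import numpy as np

def benchmark_controller(e, t, k_v=0.95, k_omega=1.992):
    """Regulation controller (49)-(50); default gains match (53)."""
    e1, e2, e3 = e
    v_c = -k_v * e1
    omega_c = -k_omega * e3 + (e2 ** 2) * np.sin(t)
    return v_c, omega_c

# Example call with a hypothetical error vector.
print(benchmark_controller(np.array([0.5, -0.2, 0.1]), t=0.0))
```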

VI. Simulation Results

A numerical simulation was performed to illustrate the performance of the regulation controller in (49) and (50). The origins of the coordinate frames F, F*_1, F*_2, and F_d, and the four coplanar feature points on the planes π, π*_1, π*_2, and π_d, are chosen such that the Euclidean coordinates of the feature points in F, F*_1, F*_2, and F_d are given by s_i (where i = 1, 2, 3, 4), i.e., the feature points are located at the same distance from the origins of the coordinate frames F, F*_1, F*_2, and F_d. The intrinsic camera calibration matrix for the reference camera I_R and the current camera I(t) is selected as

A = \begin{bmatrix} 750 & 0 & 512 \\ 0 & 750 & 512 \\ 0 & 0 & 1 \end{bmatrix}.   (52)

The initial position and orientation of the current UGV coordinate frame F(0) and the position and orientation of the desired UGV coordinate frame F_d with respect to the global coordinate frame G are

R_G^F(0) = [R_Gx(30°)  R_Gy(0°)  R_Gz(0°)] [deg]
x_G^F(0) = [-3.0  1.2  0]^T [m]
R_G^Fd = [R_Gx(0°)  R_Gy(0°)  R_Gz(0°)] [deg]
x_G^Fd = [0.4  0  0]^T [m].

Moreover, the position and orientation of the stationary reference objects F*_1 and F*_2 with respect to the global coordinate frame G are

R_G^F*_1 = [R_Gx(30°)  R_Gy(0°)  R_Gz(0°)] [deg]
x_G^F*_1 = [-2.0  1.0  0]^T [m]
R_G^F*_2 = [R_Gx(0°)  R_Gy(0°)  R_Gz(0°)] [deg]
x_G^F*_2 = [0.2  0.5  0]^T [m].

The control gains in (49) and (50) were selected as

k_\omega = 1.992, \qquad k_v = 0.95.   (53)

The Euclidean position and orientation of the coordinate frame Fd corresponding to the desired pose of the UGV is shown in Fig. 3. The image-space trajectory of the feature points attached to the plane π, taken by the current camera I, is shown in Fig. 4. The resulting regulation errors are plotted in Fig. 5, which asymptotically approach zero. The linear and angular velocity control inputs are shown in Fig. 6.
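A minimal closed-loop rollout of the error dynamics (51) under the controller (49)-(50) is sketched below; the constant depth z_rd1 and the initial error are hypothetical, so the numbers differ from the reported simulation, but the qualitative decay is the behavior shown in Fig. 5.

```python
import numpy as np

# Closed-loop error dynamics (51) with the gains of (53); z_rd1 and the initial
# error are hypothetical placeholders, not the paper's simulation values.
k_v, k_omega, z_rd1, dt = 0.95, 1.992, 2.0, 0.01
e = np.array([0.8, -0.5, np.deg2rad(25.0)])

for k in range(20000):                              # 200 s of simulated time
    t = k * dt
    v_c = -k_v * e[0]                               # eq. (49)
    omega_c = -k_omega * e[2] + (e[1] ** 2) * np.sin(t)   # eq. (50)
    de = np.array([v_c / z_rd1 + omega_c * e[1],
                   -omega_c * e[0],
                   omega_c])
    e = e + dt * de                                 # forward-Euler step of (48)

print(np.round(e, 4))                               # errors should decay toward zero (cf. Fig. 5)
```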

VII. Conclusions

In this paper, the pose of a moving sensorless UGV is regulated to a desired pose using a collaborative visual servo control strategy. To achieve the result, multiple views of reference objects were used to develop Euclidean homographies. The impact of this paper is that it assumes that, while one reference object moves out of the field of view, another reference object enters the field of view of the on-board camera, thus increasing the operating range of the UGV. Also, the presented work provides a localization scheme enabling identification of the three-dimensional Euclidean coordinates of the UGV with respect to a static reference camera. The simulation results demonstrate the asymptotic regulation performance of the controller provided in the paper.

References

[1] D. Burschka and G. Hager, "Vision-Based Control of Mobile Robots," Proc. of the IEEE International Conference on Robotics and Automation, pp. 1707-1713, 2001.
[2] J. Chen, D. M. Dawson, W. E. Dixon, and A. Behal, "Adaptive homography-based visual servo tracking for a fixed camera configuration with a camera-in-hand extension," IEEE Trans. on Control Systems Technology, vol. 13, no. 5, pp. 814-825, 2005.
[3] J. Chen, W. E. Dixon, D. M. Dawson, and M. McIntire, "Homography-based Visual Servo Tracking Control of a Wheeled Mobile Robot," Proc. of the IEEE International Conference on Intelligent Robots and Systems, Las Vegas, Nevada, pp. 1814-1819, October 2003.
[4] J. Chen, W. E. Dixon, D. M. Dawson, and V. Chitrakaran, "Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera," Proceedings of the IEEE Conference on Control Applications, Taipei, Taiwan, pp. 1061-1066, 2004.
[5] V. Chitrakaran, D. M. Dawson, W. E. Dixon, and J. Chen, "Identification of a moving object's velocity with a fixed camera," Automatica, vol. 41, no. 3, pp. 553-562, 2005.
[6] A. K. Das, et al., "Real-Time Vision-Based Control of a Nonholonomic Mobile Robot," Proc. of the IEEE International Conference on Robotics and Automation, pp. 1714-1719, 2001.


Figure 3. Euclidean position and orientation of the desired UGV F_d. F(0) denotes the initial position of the current UGV, F_r denotes the reference position of the UGV, which coincides with the initial position of the current UGV F(0), I(0) denotes the initial position of the current camera, I_R denotes the position of the stationary reference camera, and F*_1 and F*_2 denote the positions of the stationary reference objects.

Figure 4. The image-space trajectory of the feature points attached to the current UGV F(t), taken by the time-varying current camera I. F(0) denotes the initial position of the current UGV, F(t) denotes the time-varying position of the current UGV, and F*_2 denotes the position of the stationary reference object.


Figure 5. Linear (i.e., e_1(t) and e_2(t)) and angular (i.e., e_3(t)) tracking error.

Figure 6. Linear (i.e., v_c(t)) and angular (i.e., ω_c(t)) velocity control inputs.


[7] C. A. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties, New York: Academic Press, 1975.
[8] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, "Adaptive Tracking Control of a Wheeled Mobile Robot via an Uncalibrated Camera System," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 31, No. 3, pp. 341-352, 2001.
[9] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, Nonlinear Control of Wheeled Mobile Robots, Springer-Verlag London Limited, 2001.
[10] K. Dupree, N. Gans, W. MacKunis, and W. Dixon, "Euclidean Calculation of Feature Points of a Rotating Satellite: A Daisy Chaining Approach," AIAA Journal of Guidance, Control, and Dynamics, to appear.
[11] Y. Fang, D. M. Dawson, W. E. Dixon, and M. S. de Queiroz, "2.5D Visual Servoing of Wheeled Mobile Robots," Proc. of IEEE Conference on Decision and Control, Las Vegas, NV, pp. 2866-2871, Dec. 2002.
[12] Y. Fang, W. Dixon, D. Dawson, and P. Chawda, "Homography-based visual servo regulation of mobile robots," IEEE Transactions on Systems, Man and Cybernetics, vol. 35, no. 5, pp. 1041-1050, 2005.
[13] O. Faugeras, Three-Dimensional Computer Vision, The MIT Press, Cambridge, Massachusetts, 2001.
[14] O. Faugeras and F. Lustman, "Motion and Structure From Motion in a Piecewise Planar Environment," International Journal of Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3, pp. 485-508, 1988.
[15] X. Guo, Z. Qu, and B. Xi, "Research on location method for mobile robots formation based on global-camera," Proc. of International Symposium on Systems and Control in Aerospace and Astronautics, pp. 347-349, 2006.
[16] Y. Hada and K. Takase, "Multiple mobile robot navigation using the indoor global positioning system (iGPS)," Proc. of International Conference on Intelligent Robots and Systems, Vol. 2, pp. 1005-1010, 2001.
[17] G. D. Hager, D. J. Kriegman, A. S. Georghiades, and O. Ben-Shahar, "Toward Domain-Independent Navigation: Dynamic Vision and Control," Proc. of the IEEE Conference on Decision and Control, pp. 3257-3262, 1998.
[18] B. H. Kim, et al., "Localization of a Mobile Robot using Images of a Moving Target," Proc. of the IEEE International Conference on Robotics and Automation, pp. 253-258, 2001.
[19] Y. Ma, J. Kosecka, and S. Sastry, "Vision Guided Navigation for Nonholonomic Mobile Robot," IEEE Trans. on Robotics and Automation, Vol. 15, No. 3, pp. 521-536, June 1999.
[20] Y. Ma, S. Soatto, J. Kosecka, and S. Sastry, An Invitation to 3-D Vision, Springer, 2004.
[21] E. Malis and F. Chaumette, "2 1/2 D Visual Servoing with Respect to Unknown Objects Through a New Estimation Scheme of Camera Displacement," International Journal of Computer Vision, Vol. 37, No. 1, pp. 79-97, June 2000.
[22] S. Mehta, W. E. Dixon, D. MacArthur, and C. D. Crane, "Visual Servo Control of an Unmanned Ground Vehicle via a Moving Airborne Monocular Camera," IEEE American Control Conference, Minneapolis, Minnesota, pp. 5276-5281, 2006.
[23] S. Mehta, G. Hu, N. Gans, and W. E. Dixon, "Adaptive Vision-Based Collaborative Tracking Control of an UGV via a Moving Airborne Camera: A Daisy Chaining Approach," IEEE Conference on Decision and Control, San Diego, California, pp. 3867-3872, 2006.
[24] S. Mehta, K. Kaiser, N. Gans, and W. E. Dixon, "Homography-Based Coordinate Relationships for Unmanned Air Vehicle Regulation," Proceedings of AIAA Guidance, Navigation, and Control Conference, Keystone, Colorado, AIAA 2006-6718, 2006.
[25] C. Samson, "Control of Chained Systems: Application to Path Following and Time-Varying Point-Stabilization of Mobile Robots," IEEE Transactions on Automatic Control, Vol. 40, No. 1, pp. 64-77, January 1995.
[26] Z. Zhang and A. R. Hanson, "Scaled Euclidean 3D Reconstruction Based on Externally Uncalibrated Cameras," IEEE Symp. on Computer Vision, pp. 37-42, 1995.
[27] Z. Zhang and A. Hanson, "3D reconstruction based on homography mapping," Proc. ARPA Image Understanding Workshop, Palm Springs, CA, 1996.

