Camera Independent Visual Servo Tracking of Unmanned Air Vehicle/Smart-Munitions

Siddhartha S. Mehta† and Warren E. Dixon‡
Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611-6250 USA
† Master's degree student, ‡ Advisor

In this paper, a visual servo tracking problem is developed with the objective of enabling unmanned aircraft/smart-munitions to follow a desired trajectory encoded as a sequence of images (i.e., a prerecorded video). The developed visual servo tracker provides 6-DOF motion commands to a low-level autopilot that establishes stability of the closed-loop dynamics. Multi-view photogrammetric methods are used to develop the geometric relationships between feature points in the live image and the corresponding feature points in the prerecorded sequence of images. Nonlinear robust control methods are used to develop an exponential image-space path tracker despite unknown time-varying depth information and uncertainty in the intrinsic camera calibration matrix.

I. Introduction

Recent advances in visual servo control have been motivated by the desire to make vehicular/robotic systems more autonomous. One challenge in designing robust visual servo control systems is to develop methods that compensate for possible uncertainty in the calibration of the camera. For example, exact knowledge of the camera calibration parameters is required to relate pixelized image-space information to the task-space. Inevitable discrepancies in the calibration matrix result in an erroneous relationship between the image-space and the task-space. Furthermore, an acquired image is a function of both the task-space position of the camera and the intrinsic calibration parameters; hence, perfect knowledge of the intrinsic camera parameters is also required to relate the relative position of a moving camera through the respective images. In Refs. 8 and 9, projective invariance is used to construct an error function that is invariant to the intrinsic parameters, meeting the control objective despite variations in those parameters. However, the goal is to construct an error system in an invariant space, and unfortunately, as stated in Refs. 8 and 9, several control issues and a rigorous stability analysis of the invariant-space approach "have been left unresolved." In Ref. 6, a model-free visual servo controller is developed to regulate the feature points acquired by an on-board camera to the corresponding feature points of a desired reference image in the presence of uncertainty in the camera calibration parameters. In Ref. 7, a camera-independent teach by zooming (TBZ) visual servo controller is developed, again under calibration uncertainty, to regulate the feature points of the current image to the feature points acquired by a fixed camera after changing the focal length.
Previous visual servo control approaches that account for camera uncertainty were developed for setpoint regulation problems and do not consider the path/trajectory followed by the on-board camera. A tracking formulation may be more effective for missions in unstructured urban environments, where the goal is flight planning with obstacle avoidance. Therefore, in this paper a teach by showing (TBS) approach is proposed to track a desired trajectory encoded as a reference trajectory/image sequence obtained from a prior flight, in the presence of uncertainty in the camera calibration parameters. For example, one aircraft provides a video that encodes a desired trajectory, and the objective is then for another aircraft, equipped with an on-board camera and a controller, to plan its trajectory so that the current images match the video. Applications of this strategy include navigating ground or air vehicles based on desired image sequences taken by other ground or air vehicles (e.g., a plane captures a desired image sequence that is used to navigate a camera on-board an unmanned air vehicle (UAV) or smart-munition).

1 of 9
American Institute of Aeronautics and Astronautics

The path planning objective is to track the image features acquired by an on-board camera to the corresponding image feature coordinates in the desired image sequence acquired a priori. The path planning algorithm is developed under the assumption that parametric uncertainty exists in the camera calibration, since these parameters are difficult to obtain precisely in practice. Since the objective is formulated in terms of images acquired from an uncalibrated camera, constructing a meaningful relationship between the estimated and actual rotation matrices is problematic. To overcome this challenge, the control objective is formulated in terms of normalized Euclidean coordinates. Specifically, desired normalized Euclidean coordinates are defined as a function of the mismatch in the camera calibration. This is a physically motivated relationship, since an image is a function of both the Euclidean coordinates and the camera calibration.

II. Model Development

A. Camera Model

In this section, geometric relationships are developed between the three orthogonal coordinate systems F, F_d, and F^* shown in Figure 1, where F is a moving coordinate system attached to an on-board camera (e.g., a camera mounted on an unmanned air vehicle (UAV)), F_d is a desired moving coordinate system associated with the desired trajectory, and F^* is a fixed reference coordinate system associated with the desired final position/orientation. A reference plane π is defined by four target points O_i, ∀i = 1, 2, 3, 4. Each target point has a projected pixel coordinate expressed in terms of F (denoted by u_i(t), v_i(t) ∈ R), F_d (denoted by u_di(t), v_di(t) ∈ R), and F^* (denoted by u*_i, v*_i ∈ R), which are defined as elements of p_i(t) (actual time-varying image points), p_di(t) (desired time-varying image point trajectory), and p*_i (constant reference image points), respectively, as follows:

\[
p_i(t) \triangleq \begin{bmatrix} u_i(t) & v_i(t) & 1 \end{bmatrix}^T, \quad
p_{di}(t) \triangleq \begin{bmatrix} u_{di}(t) & v_{di}(t) & 1 \end{bmatrix}^T, \quad
p_i^* \triangleq \begin{bmatrix} u_i^* & v_i^* & 1 \end{bmatrix}^T. \tag{1}
\]

The projected 2D pixel coordinates of the target points introduced in (1) can be related to the 3D task-space coordinates of O_i, denoted by \bar m_i(t) ≜ [X_i(t), Y_i(t), Z_i(t)]^T ∈ R^3 in F, \bar m_{di}(t) ≜ [X_{di}(t), Y_{di}(t), Z_{di}(t)]^T ∈ R^3 in F_d, and \bar m_i^* ≜ [X_i^*, Y_i^*, Z_i^*]^T ∈ R^3 in F^*, by the following transformations:

\[
p_i = A m_i, \qquad p_{di} = A_d m_{di}, \qquad p_i^* = A_d m_i^* \tag{2}
\]

where A, A_d ∈ R^{3×3} are constant, invertible intrinsic camera calibration matrices, and m_i(t), m_{di}(t), m_i^* ∈ R^3 denote the following normalized 3D task-space coordinates of O_i expressed in terms of F, F_d, and F^*, respectively:

\[
m_i \triangleq \frac{\bar m_i}{Z_i} = \begin{bmatrix} \dfrac{X_i}{Z_i} & \dfrac{Y_i}{Z_i} & 1 \end{bmatrix}^T, \quad
m_{di} \triangleq \frac{\bar m_{di}}{Z_{di}} = \begin{bmatrix} \dfrac{X_{di}}{Z_{di}} & \dfrac{Y_{di}}{Z_{di}} & 1 \end{bmatrix}^T, \quad
m_i^* \triangleq \frac{\bar m_i^*}{Z_i^*} = \begin{bmatrix} \dfrac{X_i^*}{Z_i^*} & \dfrac{Y_i^*}{Z_i^*} & 1 \end{bmatrix}^T \tag{3}
\]

where the standard assumption is made that Z_i(t), Z_{di}(t), and Z_i^* are positive. The desired 3D task-space coordinates \bar m_{di}(t) are assumed to be first-order differentiable. Since the intrinsic calibration matrix of a camera is difficult to obtain accurately, the development in this paper is based on the assumption that the intrinsic calibration matrices are unknown. Since A_d cannot be determined exactly in practice, the normalized Euclidean coordinates m_i^* cannot be obtained from (2), where p_i^* defines the desired image-space coordinates. The tracking problem is developed so that the rotation and translation errors between the desired trajectory image and the image acquired by the on-board camera


Figure 1. Coordinate System

go to zero. If the image from the on-board camera and the constant reference image acquired at the end of the desired trajectory correspond, then the following expression can be developed from (2):

\[
m_i = m_{ni} \triangleq A^{-1} A_d m_i^* \tag{4}
\]

where m_{ni} ∈ R^3 denotes the normalized Euclidean coordinates of the object feature points expressed in F_n, where F_n is a coordinate system attached to the on-board camera when the image taken by the on-board camera corresponds to the constant reference image.

Remark 1 From (4) it can be seen that, due to the different intrinsic camera calibration parameters A and A_d used during the teaching stage and the servoing stage, a finite Euclidean mismatch, given by A^{-1} A_d, exists between m_i and m_i^*.

Since A and A_d are unknown, m_i(t), m_{di}(t), and m_i^* cannot be computed from (2). Hence, constant best-guess estimates \hat A and \hat A_d of the intrinsic camera calibration matrices are defined such that the estimated normalized Euclidean coordinates \hat m_i(t), \hat m_{di}(t), \hat m_i^* ∈ R^3 are given by

\[
\hat m_i = \hat A^{-1} p_i = \tilde A m_i \tag{5}
\]
\[
\hat m_{di} = \hat A_d^{-1} p_{di} = \tilde A_d m_{di} \tag{6}
\]
\[
\hat m_i^* = \hat A_d^{-1} p_i^* = \tilde A_d m_i^* \tag{7}
\]

where the calibration error matrices \tilde A, \tilde A_d ∈ R^{3×3} are defined as

\[
\tilde A \triangleq \hat A^{-1} A = \begin{bmatrix} \tilde A_{11} & \tilde A_{12} & \tilde A_{13} \\ 0 & \tilde A_{22} & \tilde A_{23} \\ 0 & 0 & 1 \end{bmatrix} \tag{8}
\]
\[
\tilde A_d \triangleq \hat A_d^{-1} A_d = \begin{bmatrix} \tilde A_{d11} & \tilde A_{d12} & \tilde A_{d13} \\ 0 & \tilde A_{d22} & \tilde A_{d23} \\ 0 & 0 & 1 \end{bmatrix}. \tag{9}
\]
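The pixel/normalized-coordinate relations (2) and (5), and the role of the calibration error matrix in (8), can be illustrated with a short numerical sketch. The intrinsic values below are made up for illustration only, and numpy is assumed available.

```python
import numpy as np

# Illustrative (made-up) true intrinsics A and a mismatched best-guess A_hat.
A = np.array([[800.0, 0.5, 320.0],
              [0.0, 820.0, 240.0],
              [0.0,   0.0,   1.0]])
A_hat = np.array([[780.0, 0.0, 315.0],
                  [0.0, 800.0, 235.0],
                  [0.0,   0.0,   1.0]])

# A Euclidean feature point in F with Z > 0, and its normalized coordinates (3).
m_bar = np.array([0.2, -0.1, 2.0])
m = m_bar / m_bar[2]

# Projection (2): pixel coordinates.
p = A @ m

# With only the estimate, (5) gives m_hat = A_hat^{-1} p = A_tilde m.
m_hat = np.linalg.solve(A_hat, p)
A_tilde = np.linalg.solve(A_hat, A)          # A_tilde = A_hat^{-1} A, as in (8)

assert np.allclose(m_hat, A_tilde @ m)       # mismatch acts as A_tilde
assert np.allclose(A_tilde[2], [0.0, 0.0, 1.0])  # last-row structure of (8)
```

Since the triangular structure of (8) leaves the (3,3) entry equal to one, the third (homogeneous) component of the estimated coordinates remains 1, consistent with (3).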

III. Homography Development

From Figure 1, a relationship can be developed between the Euclidean coordinates \bar m_i^* and \bar m_i as follows:

\[
\bar m_i^* = R \bar m_i + x_f \tag{10}
\]

where R(t) ∈ R^{3×3} and x_f(t) ∈ R^3 denote the rotation and translation, respectively, between F and F^*. By utilizing (3), the expression in (10) can be rewritten as

\[
m_i^* = \underbrace{\left(\frac{Z_i}{Z_i^*}\right)}_{\alpha_i} \underbrace{\left(R + x_h n^T\right)}_{H} m_i \tag{11}
\]

where x_h(t) ≜ x_f(t)/d(t) ∈ R^3, and d(t) ∈ R denotes the distance from F to π along the unit normal n(t). The following relationship can be developed by utilizing (11) and (2):

\[
p_i^* = \alpha_i G p_i \tag{12}
\]

where G(t) ∈ R^{3×3} is the projective homography matrix defined as G(t) ≜ A_d H(t) A^{-1}. The expressions in (2) can be used to rewrite (12) as

\[
m_i^* = \alpha_i A_d^{-1} G A m_i. \tag{13}
\]

Using the expression given in (4), a relationship between m_{ni} and m_i can be obtained as follows:

\[
m_{ni} = \alpha_i \underbrace{\left(A^{-1} G A\right)}_{H_n} m_i \tag{14}
\]

where H_n(t) ≜ A^{-1} G(t) A denotes the Euclidean homography matrix, which can be expressed as H_n = R_n + x_{hn} n^T with x_{hn} = x_{fn}/d,

where R_n(t) ∈ R^{3×3} and x_{fn}(t) ∈ R^3 denote the rotation and translation, respectively, from F to F_n. Since m_i(t) and m_i^* cannot be determined because the intrinsic camera calibration matrices A and A_d are uncertain, the estimates \hat m_i(t) and \hat m_i^* defined in (5) and (7), respectively, can be utilized to obtain the following:

\[
\hat m_{ni} = \alpha_i \hat H_n \hat m_i. \tag{15}
\]

In (15), \hat H_n(t) ∈ R^{3×3} denotes the following estimated Euclidean homography:^1

\[
\hat H_n = \tilde A H_n \tilde A^{-1} \tag{16}
\]
\[
\hat H_n = \hat R_n + \hat x_{hn} \hat n^T \tag{17}
\]

where \hat R_n(t) ∈ R^{3×3} denotes a rotation matrix from F to F_n, \hat n(t) ∈ R^3 denotes the unit normal from F to π, and \hat x_{hn}(t) ∈ R^3 denotes a scaled translation vector from F_n to F that is expressed in F_n. That is, \hat R_n(t), \hat n(t), and \hat x_{hn}(t) can be computed from \hat H_n(t).

Similarly, a homography can be developed between the constant reference image and the desired image trajectory, i.e., between F^* and F_d. Since the camera calibration matrix A_d is unknown, using the estimates of the normalized desired trajectory Euclidean coordinates \hat m_{di}(t) ∈ R^3, the relation between m_i^* and m_{di} is obtained as follows:

\[
\hat m_i^* = \alpha_{di} \hat H_d \hat m_{di}. \tag{18}
\]

In (18), \hat H_d(t) ∈ R^{3×3} denotes the following estimated Euclidean homography:^1

\[
\hat H_d = \tilde A_d H_d \tilde A_d^{-1} \tag{19}
\]
\[
\hat H_d = \hat R_d + \hat x_{hd} \hat n_d^T \tag{20}
\]

where \hat R_d(t) ∈ SO(3) denotes a rotation matrix from F_d to F^*, \hat n_d(t) ∈ R^3 denotes the unit normal from F_d to π, and \hat x_{hd}(t) ∈ R^3 denotes a scaled translation vector from F^* to F_d that is expressed in F^*. That is, \hat R_d(t), \hat n_d(t), and \hat x_{hd}(t) can be computed from \hat H_d(t).
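The estimated homographies in (15) and (18) are obtained from measured point correspondences. How such a matrix can be computed is sketched below with a standard direct linear transform (DLT); this is a minimal illustration assuming numpy, noise-free correspondences, and four points in general position, and it recovers the homography only up to the scale absorbed by the α_i factors.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of H (up to scale) from pairs with dst_i ~ H src_i.
    src, dst: (N, 3) arrays of homogeneous coordinates, N >= 4."""
    rows = []
    for (x, y, w), (u, v, s) in zip(src, dst):
        # Each correspondence contributes two rows of the linear system M h = 0.
        rows.append([0, 0, 0, -s*x, -s*y, -s*w, v*x, v*y, v*w])
        rows.append([s*x, s*y, s*w, 0, 0, 0, -u*x, -u*y, -u*w])
    # h is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

# Synthetic check with a known matrix of the form R + x_h n^T (illustrative values).
H_true = np.array([[0.98, -0.10, 0.15],
                   [0.10,  0.97, -0.20],
                   [0.05,  0.02,  1.00]])
src = np.array([[0.1, 0.2, 1.0], [-0.3, 0.1, 1.0],
                [0.2, -0.25, 1.0], [-0.1, -0.15, 1.0]])
dst = (H_true @ src.T).T
H_est = estimate_homography(src, dst)

# Fix the unknown scale before comparing.
assert np.allclose(H_est / H_est[2, 2], H_true / H_true[2, 2], atol=1e-6)
```

The subsequent decomposition of the estimated homography into rotation, scaled translation, and plane normal (as used for (17) and (20)) follows the method of Ref. 1 and is not repeated here.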


IV. Path Planning Objective

The objective is to enable unmanned aircraft/smart-munitions to follow a desired trajectory encoded as a sequence of images, i.e., to generate motion commands that ensure F tracks F_d, where F_d moves according to a desired time-varying trajectory that is constructed relative to the reference camera position/orientation given by F^*. To quantify this objective, a rotation tracking error \tilde e_ω(t) ∈ R^3 is defined as follows:

\[
\tilde e_\omega \triangleq e_\omega - e_{\omega d}. \tag{21}
\]

In (21), e_ω(t) ∈ R^3 denotes the rotation mismatch between F and F_n with respect to F, and e_{ωd}(t) ∈ R^3 denotes the rotation mismatch between F_d and F^* with respect to F_d, as follows:

\[
e_\omega \triangleq u\theta, \qquad e_{\omega d} \triangleq u_d \theta_d \tag{22}
\]

where u(t), u_d(t) ∈ R^3 represent unit rotation axes, and θ(t), θ_d(t) ∈ R denote the respective rotations about u(t) and u_d(t), which are assumed to be confined to the following regions:

\[
-\pi < \theta(t) < \pi, \qquad -\pi < \theta_d(t) < \pi. \tag{23}
\]

The desired rotation mismatch e_{ωd}(t) is assumed to be first-order differentiable. The parameterization u(t)θ(t) given in (22) is related to the rotation matrix R(t) (computed from the homography as described in (11)) by the following expression:

\[
R = I_3 + \sin\theta \,[u]_\times + 2\sin^2\!\frac{\theta}{2}\,[u]_\times^2 \tag{24}
\]

where the notation I_i denotes an i × i identity matrix, and the notation [ζ]_× denotes the skew-symmetric cross-product matrix

\[
[\zeta]_\times = \begin{bmatrix} 0 & -\zeta_3 & \zeta_2 \\ \zeta_3 & 0 & -\zeta_1 \\ -\zeta_2 & \zeta_1 & 0 \end{bmatrix}.
\]

For details regarding the computation of u(t) and θ(t) (or u_d(t) and θ_d(t)) from a given rotation matrix R(t) (or R_d(t)), see Ref. 2. The position tracking objective is quantified by the hybrid position error^a e_v(t) ∈ R^3 as follows:

\[
e_v \triangleq m_e - m_{ed}. \tag{25}
\]

In (25), m_e(t) ∈ R^3 denotes the coordinates of an image point on π extended by the Euclidean depth coordinate, and m_{ed}(t) ∈ R^3 denotes the extended image coordinates of the corresponding desired image point.
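The parameterization (24) and the extraction of u(t) and θ(t) from a rotation matrix (Ref. 2) can be sketched as follows. This is an illustrative implementation assuming numpy; the extraction uses the standard trace formula and is valid for 0 < θ < π, consistent with (23).

```python
import numpy as np

def skew(z):
    """Skew-symmetric cross-product matrix [z]_x."""
    return np.array([[0.0, -z[2], z[1]],
                     [z[2], 0.0, -z[0]],
                     [-z[1], z[0], 0.0]])

def rodrigues(u, theta):
    """Rotation matrix from (24): R = I3 + sin(t)[u]_x + 2 sin^2(t/2) [u]_x^2."""
    ux = skew(u)
    return np.eye(3) + np.sin(theta) * ux + 2.0 * np.sin(theta / 2.0) ** 2 * (ux @ ux)

def axis_angle(R):
    """Recover (u, theta) with 0 < theta < pi from a rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / (2.0 * np.sin(theta)), theta

u = np.array([1.0, 2.0, 2.0]) / 3.0          # unit rotation axis (illustrative)
R = rodrigues(u, 0.7)
u_rec, theta_rec = axis_angle(R)

assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.isclose(theta_rec, 0.7) and np.allclose(u_rec, u)
```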

V. Path Planning

A. Rotation Velocity Planning

The objective of the rotation tracking is to ensure that the orientation of the live coordinate frame F matches the orientation of the desired trajectory frame F_d, as quantified by (21). Since e_ω(t) and e_{ωd}(t) defined in (22) cannot be computed exactly in the presence of uncertainty in the camera calibration, the following estimated rotation tracking error system is used:

\[
\tilde e_{\omega n} = \hat e_\omega - \hat e_{\omega d} \tag{26}
\]

where \hat e_ω(t) and \hat e_{ωd}(t) can be evaluated from the following expressions:

\[
\hat e_\omega \triangleq \hat u \hat\theta \tag{27}
\]
\[
\hat e_{\omega d} \triangleq \hat u_d \hat\theta_d \tag{28}
\]

where \hat u(t), \hat θ(t), \hat u_d(t), and \hat θ_d(t) are computed from \hat R_n(t) and \hat R_d(t), which are obtained from the decomposition of the homography matrices given by (17) and (20), respectively. Since, from (16) and (17), \hat R_n(t) is similar to R_n(t) (i.e., \hat R_n(t) has the same trace and eigenvalues as R_n(t)), the estimates \hat u(t) and \hat θ(t) can be related to u(t) and θ(t) as follows:^1

\[
\hat u = \mu \tilde A u \quad \text{where} \quad \mu = \frac{1}{\|\tilde A u\|} \tag{29}
\]

^a The translation error introduced in (25) is described as hybrid because it is composed of the normalized coordinates (obtained from the image-space) along with the reconstructed task-space depth signal.


\[
\hat\theta = \theta. \tag{30}
\]

Thus, the relationship between the rotation errors \hat e_ω(t) ∈ R^3 and e_ω(t) ∈ R^3 can be developed as

\[
\hat e_\omega = \mu \tilde A e_\omega \tag{31}
\]

where (29), (30), and (22) have been utilized.

Since, from (19) and (20), \hat R_d(t) is similar to R_d(t) (i.e., \hat R_d(t) has the same trace and eigenvalues as R_d(t)), the estimates \hat u_d(t) and \hat θ_d(t), stated in (28), can be related to u_d(t) and θ_d(t) as follows:^1

\[
\hat u_d = \mu_d \tilde A_d u_d \quad \text{where} \quad \mu_d = \frac{1}{\|\tilde A_d u_d\|} \tag{32}
\]
\[
\hat\theta_d = \theta_d. \tag{33}
\]

Thus, the relationship between the rotation errors \hat e_{ωd}(t) ∈ R^3 and e_{ωd}(t) ∈ R^3 can be developed as

\[
\hat e_{\omega d} = \mu_d \tilde A_d e_{\omega d}. \tag{34}
\]
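The similarity relations (29)-(33) can be checked numerically: a matrix similar to a rotation keeps its trace (hence the same angle θ), and its unit-eigenvalue eigenvector is the rotation axis mapped through the calibration error matrix. The values of Ã, u, and θ below are illustrative; numpy is assumed available.

```python
import numpy as np

def skew(z):
    return np.array([[0.0, -z[2], z[1]],
                     [z[2], 0.0, -z[0]],
                     [-z[1], z[0], 0.0]])

# True rotation with axis u and angle theta, built with the Rodrigues form (24).
u = np.array([1.0, 2.0, 2.0]) / 3.0
theta = 0.6
R = np.eye(3) + np.sin(theta) * skew(u) + 2 * np.sin(theta / 2) ** 2 * (skew(u) @ skew(u))

# Upper-triangular calibration error matrix (illustrative values, structure of (8)).
A_t = np.array([[1.05, 0.01, 0.4],
                [0.0, 0.98, -0.3],
                [0.0, 0.0, 1.0]])

# R_hat is similar to R, hence same trace => theta_hat = theta, as in (30).
R_hat = A_t @ R @ np.linalg.inv(A_t)
theta_hat = np.arccos((np.trace(R_hat).real - 1.0) / 2.0)

# u is the eigenvector of R for eigenvalue 1, so A_t u is the corresponding
# eigenvector of R_hat: u_hat = A_t u / ||A_t u||, as in (29).
w, V = np.linalg.eig(R_hat)
u_hat = np.real(V[:, np.argmin(np.abs(w - 1.0))])
u_hat /= np.linalg.norm(u_hat)

target = A_t @ u / np.linalg.norm(A_t @ u)
# The eigenvector sign is arbitrary, so compare up to sign.
assert np.isclose(abs(u_hat @ target), 1.0)
assert np.isclose(theta_hat, theta)
```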

Taking the time derivative of e_ω defined in (22) yields

\[
\dot e_\omega(t) = -L_\omega(t)\,\omega_c(t) \tag{35}
\]

where ω_c(t) ∈ R^3 denotes the camera angular velocity and L_ω(t) ∈ R^{3×3} is defined as^1

\[
L_\omega = I_3 - \frac{\theta}{2}[u]_\times + \left(1 - \frac{\operatorname{sinc}(\theta)}{\operatorname{sinc}^2\!\left(\dfrac{\theta}{2}\right)}\right)[u]_\times^2. \tag{36}
\]

In (36), the sinc(θ) term is given by

\[
\operatorname{sinc}(\theta) = \frac{\sin\theta}{\theta}. \tag{37}
\]
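The matrix in (36) can be implemented directly as below (numpy assumed; note that numpy's `np.sinc(x)` is the normalized form sin(πx)/(πx), so the argument must be rescaled). As θ → 0, L_ω → I_3, and for θ within the region (23) the matrix is invertible, which the control law (41) relies on.

```python
import numpy as np

def skew(z):
    """Skew-symmetric cross-product matrix [z]_x."""
    return np.array([[0.0, -z[2], z[1]],
                     [z[2], 0.0, -z[0]],
                     [-z[1], z[0], 0.0]])

def sinc(x):
    """sinc(x) = sin(x)/x with sinc(0) = 1; np.sinc uses the normalized form."""
    return np.sinc(x / np.pi)

def L_omega(u, theta):
    """The matrix L_omega of (36)."""
    ux = skew(u)
    gamma = 1.0 - sinc(theta) / sinc(theta / 2.0) ** 2
    return np.eye(3) - (theta / 2.0) * ux + gamma * (ux @ ux)

u = np.array([0.0, 0.6, 0.8])                          # unit axis (illustrative)
assert np.allclose(L_omega(u, 1e-9), np.eye(3))        # small-angle limit
assert abs(np.linalg.det(L_omega(u, 2.0))) > 1e-6      # invertible within (23)
```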

Since u(t) and θ(t) cannot be computed due to the uncertainty in the camera calibration parameters, the estimate \hat L_ω(t) ∈ R^{3×3} is used in place of L_ω(t) in (35) to obtain the following expression:

\[
\dot{\hat e}_\omega(t) = -\hat L_\omega(t)\,\omega_c(t) \tag{38}
\]

where

\[
\hat L_\omega = I_3 - \frac{\hat\theta}{2}[\hat u]_\times + \left(1 - \frac{\operatorname{sinc}(\hat\theta)}{\operatorname{sinc}^2\!\left(\dfrac{\hat\theta}{2}\right)}\right)[\hat u]_\times^2. \tag{39}
\]

Taking the time derivative of (26) and substituting (38) into the resulting expression for \dot{\hat e}_ω(t), the open-loop error system is obtained as follows:

\[
\dot{\tilde e}_{\omega n} = -\hat L_\omega \omega_c - \dot{\hat e}_{\omega d}. \tag{40}
\]

The angular velocity is designed as

\[
\omega_c = \hat L_\omega^{-1}\left(k_\omega \tilde e_{\omega n} - \dot{\hat e}_{\omega d}\right) \tag{41}
\]

where \dot{\hat e}_{ωd}(t) ∈ R^3 denotes the time derivative of the estimated desired mismatch introduced in (26), which can be computed from the time derivative of (28), and k_ω ∈ R is a positive control gain. Substituting ω_c(t) from (41) into (40) yields the following closed-loop error system:

\[
\dot{\tilde e}_{\omega n} = -k_\omega \tilde e_{\omega n}. \tag{42}
\]

Solving the differential equation (42), the transient and steady-state response of the rotation tracking error introduced in (26) can be shown to be confined to the following exponentially decaying envelope:

\[
\tilde e_{\omega n}(t) = \tilde e_{\omega n}(0)\exp(-k_\omega t). \tag{43}
\]
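A forward-Euler simulation of the closed loop (42) illustrates the envelope (43). The gain, time step, and desired-mismatch trajectory below are illustrative choices, not values from the paper, and the estimated kinematics (38) are assumed to hold exactly.

```python
import numpy as np

k_omega, dt, T = 2.0, 1e-3, 3.0
n = int(T / dt)

# Illustrative smooth desired mismatch trajectory and its derivative.
e_wd = lambda t: 0.1 * np.array([np.sin(t), np.cos(t), 0.5 * np.sin(2 * t)])
e_wd_dot = lambda t: 0.1 * np.array([np.cos(t), -np.sin(t), np.cos(2 * t)])

e_w = np.array([0.3, -0.2, 0.1])             # initial estimated rotation error
e0 = np.linalg.norm(e_w - e_wd(0.0))

for i in range(n):
    t = i * dt
    e_tilde = e_w - e_wd(t)                  # tracking error (26)
    # The control (41), through the kinematics (38), gives
    # e_w_dot = -(k e_tilde - e_wd_dot), i.e., the closed loop (42).
    e_w = e_w + dt * (-(k_omega * e_tilde - e_wd_dot(t)))

e_final = np.linalg.norm(e_w - e_wd(n * dt))
# The tracking error has collapsed along the exponential envelope (43).
assert e_final < 0.01 * e0
```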

By substituting (31) and (34) into (21) for e_ω(t) and e_{ωd}(t), the following expression can be obtained:

\[
\tilde e_\omega = \frac{\tilde A^{-1}}{\mu}\,\tilde e_{\omega n} + \left(\frac{\tilde A^{-1}}{\mu} - \frac{\tilde A_d^{-1}}{\mu_d}\right)\hat e_{\omega d}. \tag{44}
\]

From (43), the following upper bound can be obtained:

\[
\|\tilde e_\omega\| \le \xi_1 \exp(-k_\omega t) + \xi_2 \tag{45}
\]

where ξ_1, ξ_2 ∈ R are positive constants, and ξ_2 depends on the calibration mismatch parameters and the desired trajectory as indicated in (44).

Remark 2 Practically, (43) and (45) indicate that the image-space errors are exponentially regulated; however, due to the mismatch in the camera calibration, some mismatch will remain between the Euclidean-space orientation of the unmanned air vehicle/smart-munition and the orientation of the vehicle that generated the prerecorded video.

B. Linear Velocity Planning

To quantify the translation error between F and F_d, a hybrid position tracking error e_v(t) ∈ R^3 is defined as follows:

\[
e_v \triangleq m_e - m_{ed}. \tag{46}
\]

Since m_e and m_{ed} cannot be determined exactly in the presence of unknown camera calibration, their estimates are used in (46) to give

\[
\hat e_v \triangleq \hat m_e - \hat m_{ed} \tag{47}
\]

where \hat m_{ed}(t) = [\hat m_{ed1}(t)\;\; \hat m_{ed2}(t)\;\; \hat m_{ed3}(t)]^T ∈ R^3 denotes the extended image coordinates of the corresponding desired image point as follows:

\[
\hat m_{ed} \triangleq \begin{bmatrix} \dfrac{\hat X_{d1}}{\hat Z_{d1}} & \dfrac{\hat Y_{d1}}{\hat Z_{d1}} & \ln(\hat Z_{d1}) \end{bmatrix}^T \tag{48}
\]

where \hat m_{ed1}(t) and \hat m_{ed2}(t) are the estimated normalized Euclidean coordinates given in (3) that can be computed from (2), and \hat m_{ed3}(t) can be obtained by finding the estimate \hat Z_{d1}(t) as follows:

\[
\hat Z_{d1} = \frac{1}{\alpha_{d1}} \hat Z_1^* \tag{49}
\]

where \hat Z_1^* ∈ R denotes a constant estimate of the unknown depth Z_1^*, and α_{d1}(t) is determined from the decomposition of the homography in (18). Also in (47), \hat m_e(t) = [\hat m_{e1}(t)\;\; \hat m_{e2}(t)\;\; \hat m_{e3}(t)]^T ∈ R^3 denotes the coordinates of an image point on π extended by the Euclidean depth coordinate as follows:

\[
\hat m_e \triangleq \begin{bmatrix} \dfrac{\hat X_1}{\hat Z_1} & \dfrac{\hat Y_1}{\hat Z_1} & \ln(\hat Z_1) \end{bmatrix}^T \tag{50}
\]

where \hat m_{e1}(t) and \hat m_{e2}(t) are the estimated normalized Euclidean coordinates stated in (3) that can be computed from (2), and \hat m_{e3}(t) can be computed by finding the estimate \hat Z_1(t) ∈ R of the depth Z_1(t). The estimate \hat Z_1(t) is developed in the same manner as in (49):

\[
\hat Z_1 = \frac{1}{\alpha_1} \hat Z_1^* \tag{51}
\]

where \hat Z_1^* ∈ R denotes a constant estimate of the unknown depth Z_1^*, and α_1(t) is determined from the decomposition of the homography in (15).

Taking the time derivative of (47), the following expression can be obtained:

\[
\dot{\hat e}_v = \dot{\hat m}_e - \dot{\hat m}_{ed} = \frac{1}{\hat Z_1}\begin{bmatrix} 1 & 0 & -\hat m_{e1} \\ 0 & 1 & -\hat m_{e2} \\ 0 & 0 & \hat m_{e3} \end{bmatrix}\dot{\hat{\bar m}}_1 - \dot{\hat m}_{ed} \tag{52}
\]

where \hat{\bar m}_1(t) = [\hat X_1, \hat Y_1, \hat Z_1]^T denotes the estimated Euclidean coordinates, whose time derivative is given by the following expression:^2

\[
\dot{\hat{\bar m}}_1 = -v_c + [\hat{\bar m}_1]_\times\, \omega_c \tag{53}
\]

where v_c(t) ∈ R^3 denotes the linear velocity of the camera.

After substituting (53) into (52), the following open-loop tracking error system can be developed:

\[
\dot{\hat e}_v = \hat L_v v_c + \hat L_{v\omega} \omega_c - \dot{\hat m}_{ed}. \tag{54}
\]

In (54), \hat L_v(t), \hat L_{vω}(t) ∈ R^{3×3} denote the following matrices:

\[
\hat L_v \triangleq \frac{1}{\hat Z_1}\begin{bmatrix} -1 & 0 & \hat m_{e1} \\ 0 & -1 & \hat m_{e2} \\ 0 & 0 & -\hat m_{e3} \end{bmatrix} \tag{55}
\]
\[
\hat L_{v\omega} \triangleq \begin{bmatrix} \hat m_{e1}\hat m_{e2} & -1-\hat m_{e1}^2 & \hat m_{e2} \\ 1+\hat m_{e2}^2 & -\hat m_{e1}\hat m_{e2} & -\hat m_{e1} \\ -\hat m_{e2}\hat m_{e3} & \hat m_{e1}\hat m_{e3} & 0 \end{bmatrix}. \tag{56}
\]

Based on the open-loop error system in (54), the camera linear velocity input is designed as follows:

\[
v_c \triangleq \hat L_v^{-1}\left(-k_v \hat e_v - \hat L_{v\omega}\omega_c + \dot{\hat m}_{ed}\right). \tag{57}
\]

After substituting (57) into (54), the closed-loop system for \hat e_v(t) is obtained as follows:

\[
\dot{\hat e}_v = -k_v \hat e_v \tag{58}
\]

where k_v ∈ R is a positive control gain.
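The algebra leading from (54) and (57) to (58) can be verified numerically. All numeric values below are illustrative (numpy assumed); the check confirms that substituting the control (57) into the open loop (54) yields the closed loop (58).

```python
import numpy as np

def L_v(m_e, Z1):
    """The matrix L_hat_v of (55)."""
    return (1.0 / Z1) * np.array([[-1.0, 0.0, m_e[0]],
                                  [0.0, -1.0, m_e[1]],
                                  [0.0, 0.0, -m_e[2]]])

def L_vw(m_e):
    """The matrix L_hat_v_omega of (56)."""
    e1, e2, e3 = m_e
    return np.array([[e1 * e2, -1.0 - e1 ** 2, e2],
                     [1.0 + e2 ** 2, -e1 * e2, -e1],
                     [-e2 * e3, e1 * e3, 0.0]])

# Illustrative extended coordinates, depth, gain, and signals.
m_e = np.array([0.2, -0.1, np.log(2.0)])
Z1, k_v = 2.0, 1.5
e_v_hat = np.array([0.05, -0.02, 0.03])
omega_c = np.array([0.1, -0.2, 0.05])
m_ed_dot = np.array([0.01, 0.02, -0.01])

Lv, Lvw = L_v(m_e, Z1), L_vw(m_e)
# Control law (57): v_c = L_v^{-1}(-k_v e_v_hat - L_vw omega_c + m_ed_dot).
v_c = np.linalg.solve(Lv, -k_v * e_v_hat - Lvw @ omega_c + m_ed_dot)
# Substituting into the open loop (54) recovers the closed loop (58).
e_v_hat_dot = Lv @ v_c + Lvw @ omega_c - m_ed_dot
assert np.allclose(e_v_hat_dot, -k_v * e_v_hat)
```

Note that L_hat_v in (55) is invertible whenever the depth estimate and \hat m_{e3} are nonzero, which the choice of values above satisfies.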

Solving the differential equation (58), the transient and steady-state response of the position tracking error introduced in (47) can be shown to be confined to the following exponentially decaying envelope:

\[
\hat e_v(t) = \hat e_v(0)\exp(-k_v t). \tag{59}
\]

The position tracking objective now requires showing that e_v(t) ∈ R^3 goes to zero. From (5), (6), (47), (48), and (50), e_v(t) can be related to \hat e_v(t) as follows:

\[
\hat e_v = B e_v \tag{60}
\]

where B ∈ R^{3×3} is a constant invertible matrix defined as

\[
B = \begin{bmatrix} \tilde A_{11} & \tilde A_{12} & 0 \\ 0 & \tilde A_{22} & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{61}
\]

From (59)-(61), the following expression for e_v(t) can be obtained:

\[
e_v(t) = B^{-1}\hat e_v(0)\exp(-k_v t). \tag{62}
\]

VI. Conclusion

In this paper, multi-view photogrammetric visual servo control strategies were developed to yield exponential tracking of the rotation and translation error systems despite unknown depth information and uncertainty in the camera calibration. Specifically, the control objective is formulated to match the feature points acquired by the live camera to the feature points of the desired trajectory defined by a prerecorded video. The visual servo tracking problem is camera independent since the approach admits different intrinsic camera calibration matrices during the teaching and visual servoing stages. To this end, a desired camera position/orientation is defined where the images correspond but the Euclidean position differs as a function of the mismatch in the calibration of the cameras. Applications of this strategy include navigating unmanned air vehicles (UAVs)/smart-munitions based on desired images taken by other ground or air vehicles (e.g., a plane captures a desired video that is used to navigate a camera-equipped UAV).

References

1. E. Malis and F. Chaumette, "Theoretical Improvements in the Stability Analysis of a New Class of Model-Free Visual Servoing Methods," IEEE Transactions on Robotics and Automation, Vol. 18, No. 2, pp. 176-186, April 2002.
2. Y. Fang, A. Behal, W. E. Dixon, and D. M. Dawson, "Adaptive 2.5D Visual Servoing of Kinematically Redundant Robot Manipulators," Proc. of the IEEE Conference on Decision and Control, Dec. 2002, pp. 2860-2865.
3. E. Malis, "Contributions à la Modélisation et à la Commande en Asservissement Visuel," Ph.D. Dissertation, University of Rennes I, IRISA, France, Nov. 1998.
4. E. Malis, F. Chaumette, and S. Bodet, "2 1/2 D Visual Servoing," IEEE Transactions on Robotics and Automation, Vol. 15, No. 2, pp. 238-250, April 1999.
5. J. J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, NJ, 1991.
6. Y. Fang, W. E. Dixon, D. M. Dawson, and J. Chen, "An Exponential Class of Model-Free Visual Servoing Controllers in the Presence of Uncertain Camera Calibration," Proc. of the IEEE Conference on Decision and Control, Maui, Hawaii, Dec. 2003, pp. 5390-5395.
7. W. E. Dixon, "Teach by Zooming: A Camera Independent Alternative to Teach by Showing Visual Servo Control," Proc. of the IEEE International Conference on Intelligent Robots and Systems, Las Vegas, Nevada, Oct. 2003, pp. 749-754.
8. E. Malis, "Visual Servoing Invariant to Changes in Camera Intrinsic Parameters," Proc. of the International Conference on Computer Vision, Vancouver, Canada, July 2001, pp. 704-709.
9. E. Malis, "Vision-Based Control Using Different Cameras for Learning the Reference Image and for Servoing," Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Hawaii, Nov. 2001, pp. 1428-1433.
10. W. E. Dixon, Y. Fang, D. M. Dawson, and J. Chen, "Adaptive Range Identification for Exponential Visual Servo Control," Proc. of the IEEE International Symposium on Intelligent Control, Houston, Texas, Oct. 2003, pp. 46-51.

