A Geometric Approach to Visual Servo Control in the Absence of Reference Image⋆

S. S. Mehta†
Research and Engineering Education Facility
University of Florida
Shalimar, FL-32579, USA

J. W. Curtis
Munitions Directorate
Air Force Research Laboratory
Eglin AFB, FL-32542, USA

Abstract—In this paper, we formulate a visual servo control problem for the case in which a reference image corresponding to the desired position and orientation (i.e., pose) of a camera is not available. The desired pose is described in terms of the introduced angle of incidence and point of arrival of a camera coordinate frame. The presented problem is motivated by the application of homing missiles, where a reference image may not be available and the angle of contact with a target is crucial to maximizing target penetration. A new geometric approach is presented to formulate a rotation error system that minimizes the required control effort while achieving the desired angle of incidence. A nonlinear adaptive control law is developed to provide 6 degree-of-freedom camera motion while compensating for an unknown target depth, and a stability analysis is presented to prove asymptotic regulation of the system states. High-fidelity numerical simulation results are provided to verify the performance of the proposed visual servo controller.

Index Terms—visual servo control, adaptive control, Euclidean homography.

I. INTRODUCTION

A typical model-free visual servoing problem is constructed as a teach by showing (TBS) problem, in which a camera is a priori positioned at the desired location to acquire a "reference image", which is then used to reposition the camera at that location by means of visual servo control [1]–[4]. Another class of systems, known as model-based visual servo controllers, exploits knowledge of the target model to determine the present and desired Euclidean position and orientation (i.e., pose) of a camera for achieving the control objective [5], [6]. For a variety of applications, however, it may not be feasible to acquire the reference image by a priori positioning a camera at the desired location, and the target model may not be available for model-based control. Such applications include guidance of homing missiles, robotic manipulation for repetitive but non-identical manufacturing processes, robotic hazardous material handling, and robotic fruit harvesting, where the classical visual servo control framework, based on error dynamics obtained from the present and reference images, may be prohibitive. A new approach that does not rely on the TBS paradigm, called teach by zooming (TBZ) visual servo control, was presented by Dixon et al. in [7], [8] to position/orient a camera based on a reference image taken by another camera.

⋆ This research is supported in part by the US Air Force, Eglin AFB, grant number FA8651-08-D-0108/025.
† Corresponding author: [email protected].


Although based on the classical structure of the error dynamics, the TBZ reference image is acquired by exploiting the zooming capability of an additional camera, and therefore an on-board camera need not be positioned at the desired location.

The visual servo control problem presented in this paper is particularly motivated by the guidance and control of homing missiles, where the target model and the reference image corresponding to the desired pose may not be available. Further, a missile airframe may be required to impact a target at a desired angle to maximize target penetration, as proved analytically in [9] and [10], thereby establishing a constraint on the terminal ballistics. Hence, in the absence of a reference image, the desired pose is described by introducing new Euclidean (3D) parameters, namely, the angle of incidence ψ ∈ R and the point of arrival η ∈ R3 of the camera coordinate frame. The angle of incidence condition represents a pencil of lines describing the surface of a right circular cone, such that the generatrix identifies the possible orientations of the camera optical axis. A new geometric method is introduced to determine the desired orientation of the camera coordinate frame that minimizes the required control effort. To this end, a nonlinear adaptive 2 1/2D visual servo control law is derived that satisfies the control objective while estimating the unknown target depth.

II. EUCLIDEAN RECONSTRUCTION

Consider a set of orthogonal coordinate frames denoted by F, F*, and Ft as depicted in Fig. 1, where the unit vectors codirectional with the x, y, and z-axes are indicated by 'ˆ'. The time-varying coordinate frame F is rigidly attached to an on-board camera (e.g., a camera mounted on a missile airframe), the stationary coordinate frame F* is attached to an arbitrary pose of the camera, and the coordinate frame Ft is attached to a target. Without loss of generality, the frame F* is assumed to coincide with the initial pose of the on-board camera F|t=0 and is referred to as the reference camera. The z-axis of frame F is assumed to coincide with the optical axis of the camera. The linear and angular velocities of the camera coordinate frame F are denoted by vc(t), ωc(t) ∈ R3, respectively. The target is represented in an image by four1 feature points such that

1 Image analysis methods can be used to determine planar objects (e.g., through color or texture differences). These traditional computer vision methods can be used to help determine and isolate the four coplanar feature points.


the corresponding Euclidean features Oi ∀i = 1, 2, 3, 4 are coplanar and non-collinear. The plane defined by Oi is denoted by π, and the unknown Euclidean distance of Oi from the origin of Ft is denoted by si ∈ R3 ∀i = 1, 2, 3, 4, as shown in Fig. 1.

Fig. 1. Coordinate frame relationship between the reference camera F*, current camera F, and target Ft.

To relate the various coordinate frames, let R(t), R* ∈ SO(3) denote the rotations from F to Ft and from F* to Ft, respectively, and let the corresponding translations from F to Ft and from F* to Ft be denoted by xf(t), x*f ∈ R3, respectively. From the geometry depicted in Fig. 1, the following relationships can be developed:

    m̄i = xf + R si,    m̄*i = x*f + R* si    (1)

where m̄i(t), m̄*i ∈ R3 denote the Euclidean coordinates of the feature points Oi expressed in F and F*, respectively, as

    m̄i(t) ≜ [ xi(t)  yi(t)  zi(t) ]T,    (2)
    m̄*i ≜ [ x*i  y*i  z*i ]T.    (3)

From (1), the relationship between m̄i(t) and m̄*i can be written as

    m̄*i = x̄f + R̄ m̄i    (4)

where x̄f = x*f − R̄ xf ∈ R3 and R̄ = R* RT ∈ SO(3) denote the translation vector and rotation matrix, respectively, between F and F*. By using the projective relationship d = nT m̄i, the expression in (4) can be written as

    m̄*i = ( R̄ + (x̄f / d) nT ) m̄i    (5)

where d(t) > ε for some positive ε ∈ R denotes the depth of the plane π measured in F, and n(t) ∈ R3 is the unit normal from F to the plane π as shown in Fig. 1. To facilitate the subsequent development, the normalized Euclidean coordinates of the feature points expressed in terms of F and F*, denoted by mi(t), m*i ∈ R3, respectively, are defined as

    mi ≜ m̄i / zi,    m*i ≜ m̄*i / z*i.    (6)

From the expressions given in (5) and (6), the rotation and translation between the coordinate frames F and F* can now be related in terms of the normalized Euclidean coordinates as follows:

    m*i = (zi / z*i) ( R̄ + xh nT ) mi = αi H mi    (7)

where αi(t) = zi/z*i ∈ R denotes a depth ratio, H(t) = R̄ + xh nT ∈ R3×3 denotes the Euclidean homography [11], and xh(t) ∈ R3 denotes a scaled translation vector defined as xh = x̄f / d. Each Euclidean feature point Oi has a projected pixel coordinate expressed in terms of F and F* as

    pi ≜ [ ui  vi  1 ]T,    p*i ≜ [ u*i  v*i  1 ]T    (8)

where pi(t), p*i ∈ R3 represent the image-space coordinates of the target feature points expressed in F and F*, respectively, and ui(t), vi(t), u*i, v*i ∈ R. To calculate the Euclidean homography given in (7) using the pixel information in (8), the projected pixel coordinates are related to mi(t) and m*i by the pin-hole camera model as

    pi = A mi,    p*i = A m*i    (9)

where A ∈ R3×3 is a known, constant, and globally invertible intrinsic camera calibration matrix. By using (7) and (9), the following relationship can be developed:

    p*i = αi ( A H A−1 ) pi = αi G pi    (10)

where G(t) = [gij(t)] ∈ R3×3 ∀i, j = 1, 2, 3 denotes a projective homography. Sets of linear equations can be developed from (10) to determine the projective homography up to a scalar multiple, and various techniques (e.g., see [12]) can then be used to decompose the Euclidean homography to obtain αi(t), n(t), xh(t), and R̄(t).
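For illustration, G can be estimated from the four pixel correspondences of (8) by stacking the linear constraints implied by p*i ∼ G pi and solving with an SVD. The following minimal sketch assumes numpy, noise-free correspondences, and illustrative function names; it is not the implementation used in the paper. A library routine such as OpenCV's decomposeHomographyMat, or the method of [12], can then recover the decomposition products.

    import numpy as np

    def estimate_projective_homography(p, p_star):
        """Estimate G up to scale such that p*_i ~ G p_i, cf. (10).

        p, p_star: (4, 3) arrays of homogeneous pixel coordinates [u, v, 1]
        in the current frame F and the reference frame F*, respectively.
        """
        rows = []
        for (u, v, _), (us, vs, _) in zip(p, p_star):
            rows.append([-u, -v, -1, 0, 0, 0, u * us, v * us, us])
            rows.append([0, 0, 0, -u, -v, -1, u * vs, v * vs, vs])
        # the solution is the right singular vector of the smallest singular value
        _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
        G = Vt[-1].reshape(3, 3)
        return G / np.linalg.norm(G)      # overall scale of G is arbitrary

    # With the calibration matrix A, a scaled Euclidean homography follows from
    # H = inv(A) @ G @ A; decomposing H (e.g., via [12]) yields candidate
    # {R_bar, x_h, n}, and the depth ratios alpha_i follow from the third row
    # of (7) once the scale of H is fixed.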

In the absence of a reference image, the rotation and scaled translation R̄(t) and xh(t) obtained from (10) do not provide meaningful information to regulate the camera to the desired pose. However, the by-products of the homography decomposition, αi(t) and n(t), can provide the relative position and orientation of the target frame Ft with respect to F. Sections IV and V discuss how the following control objective can be achieved using these quantities.

III. CONTROL OBJECTIVE

Definition 1: The angle of incidence ψ ∈ R of a camera coordinate frame is defined as the angle made by the optical axis with the feature point plane, e.g., the angle between the plane π and the z-axis of frame F as shown in Fig. 1.

Definition 2: The point of arrival η ∈ R3 is defined as the location of a camera frame such that a Euclidean feature of interest Oi, measured in the camera coordinate frame, is at the desired Euclidean coordinates m̄d ∈ R3.

The objective of the presented visual servo controller is to ensure that the position and orientation of the camera coordinate frame F are regulated to the desired pose such that


F intersects2 the feature plane π (i.e., η ∈ π) at the desired angle of incidence ψd. The objective is achieved if

    ψ(t) → ψd    (11)

and a feature point of interest Oj is regulated to the principal point (i.e., m̄d = 0) in the sense that

    mj(t) → (md = 0) and zj(t) → (zd = 0)    (12)

where md ∈ R3 and zd ∈ R denote the normalized desired Euclidean coordinates and the desired depth of Oj for j = 1, 2, 3, or 4, respectively. Dividing the second condition in (12) by z*j > 0, the generalized control objective can be written as3

    mj(t) → md and αj(t) → αd = zd / z*j    (13)

where αd ∈ R denotes the desired depth ratio. The expression in (12) is merely a special case of (13); although the controller presented in this paper is motivated by (11) and (12), it assumes a generic framework that satisfies (11)–(13).

IV. ROTATION CONTROLLER

Based on (11), the objective is to intersect the feature plane π at the desired angle of incidence ψd; however, the orientation of F corresponding to ψd is not defined uniquely. At the desired angle of incidence, the optical axis (i.e., z-axis) of the camera can be represented by a pencil of lines intersecting the feature plane π at an angle ψd, describing the generatrix of a right circular cone Sc as shown in Fig. 2. The apex of Sc lies at the origin of frame F, and the unit normal n(t), defined in (5), coincides with the height of the cone, such that any orientation of frame F with the z-axis directed along the generatrix of the lateral surface satisfies the desired angle of incidence condition. In this paper, the desired orientation of F is identified with the objective of minimizing the rotation control effort by finding the minimum rotation between the z-axis and the generatrix of Sc; in other words, by determining the minimum distance between a unit vector ẑ(t) along the optical axis and the directrix of the cone Sc. The problem can be simplified by constructing a plane Pc parallel to the base of Sc at a unit height from the apex. The intersection of the generatrix with Pc describes a circle C centered at n(t) ∈ R3 with radius r = cot ψd ∈ R. Therefore, the objective is to determine a point z̄d ∈ R3 on C that minimizes the distance between C and the unit vector ẑ(t), such that an optical axis pointed along z̄d satisfies the required angle of incidence criterion.

Consider unit vectors l and m such that l, m, and n form the basis of a right-handed orthogonal coordinate frame. The circle C can then be parameterized as

    ζ = n + cot ψd ( cos φ l + sin φ m )    (14)
      = n + cot ψd w(φ)    (15)

2 As stated earlier, the control development is motivated by the missile-target pursuit guidance problem, where the target m̄d is regulated to the principal point [13]. In practice, due to camera field-of-view restrictions, zd = ε ∈ R, where ε > 0.
3 The control objective αj(t) → αd resembles the teach by zooming controller [7], [8], where αd is obtained by changing the camera focal length.

Fig. 2. Geometrical construction to identify the desired orientation of the optical axis that intersects feature plane π at the desired angle ψd .

such that the squared distance δ(φ) ∈ R from ẑ to each point on C corresponding to φ ∈ [0, 2π) is given by4

    δ(φ) = ‖n − ẑ‖2 + cot2 ψd + 2 cot ψd (n − ẑ) · w(φ).    (16)

The desired point z̄d on C can be obtained by minimizing the positive-valued, differentiable function δ(φ) in (16). Taking the derivative of (16) with respect to φ yields

    δ′(φ) = 2 cot ψd (n − ẑ) · w′(φ)    (17)



where w′(φ) denotes the derivative of w(φ), such that w · w′ = 0. Since n · w = 0, it can be shown that, at the minimizer, w is parallel to the projection of (ẑ − n) onto the plane Pc (i.e., the plane n · (ζ − n) = 0). The point z̄d ∈ C at a minimum distance5 from ẑ can be obtained as [14]

    z̄d = n + cot ψd (z̄p − n) / |z̄p − n|    (18)

where z̄p ∈ R3 is the projection of ẑ onto the plane Pc as shown in Fig. 2, such that

    z̄p − n = ẑ − n − ( n · (ẑ − n) ) n.    (19)

4 From (16), the singularity condition can be derived as n(t) = ẑ, such that each point on C is equidistant from ẑ. Hence, any point on C, each requiring an identical control effort, can be chosen as z̄d.
5 Additional constraints on the desired position and orientation of F, e.g., φ ∈ {[0, a], [b, 2π)} such that [0, a] ∪ [b, 2π) ≠ [0, 2π), can be accounted for when solving for the optimal solution of (16) and (17).
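As a numerical illustration of (14)–(19), the desired optical-axis direction can be computed directly from the decomposition product n(t) and the current axis ẑ(t). The sketch below assumes numpy and uses illustrative names that are not from the paper; it also covers the singular case n = ẑ noted in Footnote 4:

    import numpy as np

    def desired_axis(n, z_hat, psi_d, tol=1e-9):
        """Return the unit vector z_hat_d along z_bar_d of (18)."""
        n = n / np.linalg.norm(n)
        z_hat = z_hat / np.linalg.norm(z_hat)
        r = 1.0 / np.tan(psi_d)                     # radius cot(psi_d) of circle C
        # projection of z_hat onto the plane P_c, cf. (19)
        zp_minus_n = (z_hat - n) - np.dot(n, z_hat - n) * n
        dist = np.linalg.norm(zp_minus_n)
        if dist < tol:
            # singular case n = z_hat (Footnote 4): any point of C may be chosen,
            # so pick an arbitrary direction w orthogonal to n
            w = np.cross(n, [1.0, 0.0, 0.0])
            if np.linalg.norm(w) < tol:
                w = np.cross(n, [0.0, 1.0, 0.0])
            w /= np.linalg.norm(w)
        else:
            w = zp_minus_n / dist
        z_bar_d = n + r * w                         # point on C, cf. (18)
        return z_bar_d / np.linalg.norm(z_bar_d)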

As stated previously, the rotation control objective is to regulate the orientation of the camera coordinate frame F such that the optical axis (i.e., the z-axis) is along z̄d. The control objective can be achieved by expressing the rotation error eω(t) ∈ R3 as the angular mismatch between ẑ and a unit vector ẑd along z̄d obtained in (18) as

    eω ≜ uθ = [ eωx  eωy  eωz ]T    (20)

where eωx(t), eωy(t), eωz(t) ∈ R are the components of eω(t) about the x, y, and z-axis, respectively. In (20), u(t) ∈ R3 represents a unit rotation axis such that u = ẑ ∧ ẑd, and θ(t) = cos−1⟨ẑ, ẑd⟩ ∈ R denotes the rotation angle about u(t) such that 0 ≤ θ(t) ≤ π. The open-loop error dynamics for eω(t) can be expressed as

    ėω = −Lω ωc    (21)

where Lω(t) ∈ R3×3 is defined as

    Lω = I3 − (θ/2) [u]× + ( 1 − sinc(θ) / sinc2(θ/2) ) [u]×2.    (22)

In (22), the sinc(θ) term denotes the unnormalized sinc function. Given the open-loop rotation error dynamics in (21), the control input ωc(t) is designed as

    ωc = Λω eω    (23)

where Λω ∈ R3×3 denotes a positive diagonal gain matrix. Substituting (23) into (21) gives the following expression for the closed-loop error dynamics [15]:

    ėω = −Lω Λω eω.    (24)
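A compact sketch of the rotation error (20), the matrix Lω in (22), and the control law (23) is given below (numpy assumed; names are illustrative; the antipodal case ẑ = −ẑd would need separate handling in practice):

    import numpy as np

    def skew(u):
        return np.array([[0.0, -u[2], u[1]],
                         [u[2], 0.0, -u[0]],
                         [-u[1], u[0], 0.0]])

    def rotation_error(z_hat, z_hat_d, tol=1e-9):
        """Return e_omega = u*theta of (20) and L_omega of (22)."""
        axis = np.cross(z_hat, z_hat_d)
        s = np.linalg.norm(axis)
        theta = np.arccos(np.clip(np.dot(z_hat, z_hat_d), -1.0, 1.0))
        if s < tol:                                  # axes already aligned
            return np.zeros(3), np.eye(3)
        u = axis / s
        sinc = lambda x: np.sinc(x / np.pi)          # unnormalized sinc
        L_w = (np.eye(3) - 0.5 * theta * skew(u)
               + (1.0 - sinc(theta) / sinc(0.5 * theta) ** 2) * skew(u) @ skew(u))
        return theta * u, L_w

    Lambda_w = np.diag([0.04, 0.08, 0.01])           # example gains, cf. (44)
    # control law (23): omega_c = Lambda_w @ e_omega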

V. TRANSLATION CONTROLLER

The translation error signal ev(t) ∈ R3 is defined as the difference between the actual and desired Euclidean camera positions as

    ev ≜ me − mde = [ evx  evy  evz ]T    (25)

where evx(t), evy(t), evz(t) ∈ R are the components of ev(t) along the x, y, and z-axis, respectively. In (25), me(t) ∈ R3 denotes the extended normalized coordinates of an image point expressed in terms of F and is defined as6

    me ≜ [ me1(t)  me2(t)  me3(t) ]T = [ x1/z1  y1/z1  α1 ]T    (26)

and mde ∈ R3 denotes the extended normalized coordinates of the corresponding desired image point as

    mde = [ xd1/zd1  yd1/zd1  αd ]T.    (27)

Substituting (26) and (27) into (25) yields7

    ev = [ x1/z1 − xd1/zd1   y1/z1 − yd1/zd1   α1 − αd ]T    (28)

where α1(t), αd ∈ R are the current and desired depth ratios defined in (7) and (13), respectively. Taking the time-derivative of (28) and substituting the following expression [16]

    d/dt m̄1 = −vc + [m̄1]× ωc    (29)

(30)

In (33), Λv ∈ R is a positive diagonal control gain matrix, ωc (t) ∈ R3 is defined in (23), and the time-varying estimate zˆ1∗ (t) ∈ R of the unknown constant feature depth z1∗ is obtained using the following direct adaptive update law zˆ˙1∗ = ΓeTv Lvω ωc

where Γ ∈ R3×3 is a positive diagonal adaptation gain matrix. After substituting (33) into (30) the closed-loop system can be obtained as z˜∗ 1 (35) e˙ v = − ∗ Λv ev + 1 Lvω ωc z1 z1 ∗ where z˜1∗ (t) ∈ R is the parameter estimation error defined as z˜1∗ = z1∗ − zˆ1∗ .

(36)

VI. STABILITY ANALYSIS

Theorem 1: The controller presented in (23), (33), and (34) ensures that the camera coordinate frame F is asymptotically regulated to the desired pose in the sense that

    lim t→∞ eω(t), ev(t) = 0.    (37)

Proof: Consider a non-negative function V(t) defined as

    V = (1/2) eωT eω + (1/2) evT ev + (1/(2 z1*)) Γ−1 z̃1*2.    (38)

Taking the time-derivative of (38) and substituting (24), (35), and (34) for ėω(t), ėv(t), and the time derivative of ẑ1*(t), respectively, yields

    V̇ = −eωT Λω eω + evT ( −(1/z1*) Λv ev + (z̃1*/z1*) Lvω ωc ) − (1/z1*) Γ−1 z̃1* ( Γ evT Lvω ωc )    (39)

where the expression given in (36), along with the fact that eωT Lω = eωT, has been utilized. Upon further simplification and cancellation of the common terms, the following expression is obtained:

    V̇ = −eωT Λω eω − (1/z1*) evT Λv ev ≤ 0.    (40)

Therefore, from (38) and (40), it can be shown that eω(t), ev(t), z̃1*(t) ∈ L∞ and eω(t), ev(t) ∈ L2. Using (36) and the fact that z1*, z̃1*(t) ∈ L∞, it can be proved that ẑ1*(t) ∈ L∞. Since eω(t), ev(t) ∈ L∞, the expressions in (7), (20), (22), (23), (26), (28), and (31)–(33) can be used to prove that me(t), Lω(t), Lv(t), Lvω(t), ωc(t), vc(t) ∈ L∞. Also, using (24), (34), (35), and the previous facts, it can be shown that the closed-loop system and the parameter update law are bounded, i.e., ėω(t), ėv(t), and the time derivative of ẑ1*(t) are in L∞ (hence, eω(t) and ev(t) are uniformly continuous). Since ėω(t), ėv(t) ∈ L∞ and eω(t), ev(t) ∈ L2, Barbalat's Lemma can be invoked to conclude that limt→∞ eω(t), ev(t) = 0; hence, the result in (37) follows directly.

VII. SIMULATION RESULTS


Fig. 3. Rotation error between the current and desired orientation of an optical axis, where the dotted lines indicate rotation error in the presence of Gaussian noise.


Fig. 4. Translation error between the current and desired position of the camera coordinate frame F , where the dotted lines indicate translation error in the presence of Gaussian noise.


A numerical simulation was performed to demonstrate the performance of the proposed visual servo controller. A stationary target coordinate frame Ft is assumed to be located, with respect to the origin of a local inertial frame I, at a position xIft ∈ R3 given by

    xIft = [ 6  4  −5 ]T m.    (41)


The position and orientation of the reference camera coordinate frame Fr (i.e., F|t=0) with respect to I are given by xIfr ∈ R3 and RIfr ∈ R3×3, respectively, as

    xIfr = [ 10  0  −20 ]T m    (42)

    RIfr = [  0.2401  −0.9464  −0.2160
              0.5962  −0.0318   0.8022
             −0.7660  −0.3214   0.5567 ].    (43)


It is assumed that the desired angle of incidence of the camera coordinate frame F with respect to the feature point plane π is ψd = 70° and that the point of arrival is such that the camera frame F intersects the feature plane at Oj for j = 1, i.e., the depth ratio α1(t) → 0. The control gain matrices Λω, Λv ∈ R3×3 and the parameter adaptation gain matrix Γ ∈ R3×3 are selected as follows:

    Λω = ∆[ 0.04  0.08  0.01 ],  Λv = ∆[ 24  24  6 ],  Γ = ∆[ 20  20  20 ]    (44)

where ∆[·] represents a diagonal matrix.
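To give a feel for the closed-loop rotation behavior under these gains, the minimal, self-contained sketch below integrates (24) with Λω from (44) using forward Euler (numpy assumed; this is not the high-fidelity simulation used to generate Figs. 3–8):

    import numpy as np

    def L_omega(e_w, tol=1e-9):
        """Matrix (22) evaluated from e_omega = u*theta."""
        theta = np.linalg.norm(e_w)
        if theta < tol:
            return np.eye(3)
        u = e_w / theta
        ux = np.array([[0.0, -u[2], u[1]],
                       [u[2], 0.0, -u[0]],
                       [-u[1], u[0], 0.0]])
        sinc = lambda x: np.sinc(x / np.pi)          # unnormalized sinc
        return np.eye(3) - 0.5 * theta * ux + (1.0 - sinc(theta) / sinc(0.5 * theta) ** 2) * ux @ ux

    Lambda_w = np.diag([0.04, 0.08, 0.01])           # rotation gains from (44)
    e_w = np.array([0.2, -0.4, 0.0])                 # example initial rotation error [rad]
    dt = 0.05
    for _ in range(int(140.0 / dt)):
        e_w = e_w + dt * (-L_omega(e_w) @ Lambda_w @ e_w)    # closed loop (24)
    print(np.round(e_w, 6))                          # decays toward zero, cf. Fig. 3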


The simulation results of the presented visual servo controller are shown in Figs. 3–7, where the simulation was performed both without and with the addition of white Gaussian image noise of 0.1 pixel standard deviation. The orientation mismatch between the camera optical axis and a unit vector along z̄d is shown in Fig. 3. It can be seen that the rotation error about the z-axis satisfies eωz(t) = 0 ∀t; this is intuitive from the fact that the axis of rotation u(t) is orthogonal to the optical axis. Fig. 4 shows the translation error, where evx(t), evy(t), and evz(t) are defined in (25). From Figs. 3 and 4, it can be concluded that the coordinate frame F is asymptotically regulated to the desired position and orientation, such that limt→∞ eω(t), ev(t) = 0. The angular and linear velocities of the camera measured in the coordinate frame F are shown in Figs. 5 and 6, respectively. Fig. 7 shows the image space trajectory of the target Euclidean features Oi ∀i = 1, 2, 3, 4, where the coordinates p*i obtained at the reference camera pose F* are shown in '' and the final position of the target is denoted by '◦'. Fig. 8 shows such trajectories for various angle of incidence conditions (10°, 30°, 50°, 90°), where ψd = 90° corresponds to the singular case described in Footnote 4.


Fig. 5. Angular velocity of the camera ωc(t) = [ωcx ωcy ωcz]T measured in the coordinate frame F, where the dotted lines indicate the angular velocity in the presence of Gaussian noise.

Fig. 6. Linear velocity of the camera vc(t) = [vcx vcy vcz]T measured in the coordinate frame F, where the dotted lines indicate the linear velocity in the presence of Gaussian noise.

Fig. 7. Image space trajectory of the target Euclidean features Oi ∀i = 1, 2, 3, 4 in the camera frame F. The image coordinates of the target measured in F* are shown in '', while the coordinates at the end of regulation are shown as '◦'; the dotted line indicates the trajectory of the point to be regulated.

Fig. 8. Image space trajectory of the target Euclidean features Oi ∀i = 1, 2, 3, 4 in the camera frame F corresponding to different desired angles of incidence ψd: (a) 10°, (b) 30°, (c) 50°, and (d) 90° (singular case).

VIII. CONCLUSION

A visual servo control problem is formulated to regulate a camera to the desired pose in the absence of a reference image



by defining new Euclidean constraints, namely, the angle of incidence and the point of arrival. The angle of incidence criterion defines a pencil of lines forming the generatrix of a right circular cone; therefore, a new geometric approach is presented to determine the desired orientation of the camera frame such that the required control effort is minimized. A nonlinear adaptive control law is derived to regulate the camera to the desired pose while estimating the unknown target depth. Lyapunov-based stability analysis guarantees asymptotic stability of the system states, and extensive simulation results validate the feasibility of the proposed controller. Since the control objective is defined in terms of the relative pose with respect to a reference camera, the camera cannot be regulated to an absolute pose except when the control objective is to intersect the feature plane. Applications that can benefit from the proposed controller include missile/smart-munition guidance, robotic fruit harvesting, manufacturing automation, and robotic hazardous material handling.

REFERENCES

[1] P. I. Corke, "Visual control of robot manipulators – a review," in Visual Servoing. World Scientific, 1994, pp. 1–31.
[2] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, Oct. 1996.
[3] E. Malis, F. Chaumette, and S. Boudet, "2 1/2 D visual servoing," IEEE Transactions on Robotics and Automation, vol. 15, no. 2, pp. 238–250, Apr. 1999.
[4] K. Hashimoto, "A review on vision-based control of robot manipulators," Advanced Robotics, vol. 17, no. 10, pp. 969–991, 2003.
[5] D. DeMenthon and L. Davis, "Model-based object pose in 25 lines of code," International Journal of Computer Vision, vol. 15, pp. 123–141, 1995.
[6] E. Malis, "Survey of vision-based robot control," in ENSIETA European Naval Ship Design Short Course, 2002.
[7] W. Dixon, "Teach by zooming: a camera independent alternative to teach by showing visual servo control," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, Oct. 2003, pp. 749–754.
[8] S. Mehta, W. Dixon, T. Burks, and S. Gupta, "Teach by zooming visual servo control for an uncalibrated camera system," in Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, 2005, AIAA-2005-6095.
[9] M. Held and R. Fischer, "Penetration theory for inclined and moving shaped charges," Propellants, Explosives, Pyrotechnics, vol. 11, no. 4, pp. 115–122, 1986.
[10] W. P. Walters, W. Flis, and P. Chou, "A survey of shaped-charge jet penetration models," International Journal of Impact Engineering, vol. 7, no. 3, pp. 307–325, 1988.
[11] O. Faugeras, Three-Dimensional Computer Vision. Cambridge, MA: The MIT Press, 2001.
[12] O. Faugeras and F. Lustman, "Motion and structure from motion in a piecewise planar environment," International Journal of Pattern Recognition and Artificial Intelligence, vol. 2, no. 3, pp. 485–508, 1988.
[13] S. S. Mehta, W. MacKunis, and J. W. Curtis, "Adaptive vision-based missile guidance in the presence of evasive target maneuvers," in 18th IFAC World Congress, 2011, to appear.
[14] D. Eberly, "Distance between point and circle or disk in 3D," Geometric Tools, LLC, 1998, pp. 1–4.
[15] E. Malis and F. Chaumette, "Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods," IEEE Transactions on Robotics and Automation, vol. 18, no. 2, pp. 176–186, Apr. 2002.
[16] Y. Fang, A. Behal, W. Dixon, and D. Dawson, "Adaptive 2.5D visual servoing of kinematically redundant robot manipulators," in Proceedings of the 41st IEEE Conference on Decision and Control, vol. 3, Dec. 2002, pp. 2860–2865.

