EUCLIDEAN CALCULATION OF THE POSE OF AN UNMANNED GROUND VEHICLE: A DAISY CHAINING APPROACH

K. Dupree, S. Mehta, C. Crane, and W. E. Dixon
email: {kdupree, siddhart, ccrane, wdixon}@ufl.edu
Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611

While a Global Positioning System (GPS) is a widely used sensor modality for navigation, researchers have been motivated to investigate other navigational sensor modalities because of the desire to operate in GPS-denied environments. Due to advances in computer vision and control theory, monocular camera systems have received growing interest as an alternative/collaborative sensor to GPS systems. Cameras can act as navigational sensors by detecting and tracking feature points in an image. Current methods have a limited ability to relate feature points as they enter and leave the camera field of view. This paper details a vision-based position and orientation (i.e., pose) estimation method for Unmanned Ground Vehicle (UGV) navigation and control. This estimation method accounts for a limited camera field of view by releasing tracked features that are about to leave the field of view and tracking new features. At each time instant that new features are selected for tracking, the previous pose estimate is updated. The vision-based estimation scheme can provide input directly to the vehicle guidance system.

I. INTRODUCTION

While a Global Positioning System (GPS) is a widely used sensor modality for unmanned ground vehicle (UGV) navigation, the frequently cited and highly comprehensive study referred to as the Volpe Report1 indicates several vulnerabilities of GPS associated with signal disruptions. The Volpe Report delineates the sources of interference with the GPS signal into two categories: unintentional and deliberate disruptions. Some of the unintentional disruptions include ionosphere interference (also known as ionospheric scintillation) and radio frequency interference (broadcast television, VHF, cell phones, two-way pagers), whereas some of the intentional disruptions involve jamming, spoofing, and meaconing. Some of the ultimate recommendations of this report were to "create awareness among members of the domestic and global transportation community of the need for GPS backup systems. . . " and to "conduct a comprehensive analysis of GPS backup navigation. . . ", which included ILS (Instrument Landing Systems), LORAN (LOng RAnge Navigation), and INS (Inertial Navigation Systems). Given the advancements in computer vision and estimation and control theory, monocular camera systems have received growing interest as a local alternative/collaborative sensor to GPS systems. One issue that has inhibited the use of a vision system as a navigational aid is the difficulty in reconstructing inertial measurements from the projected image. Current approaches to estimating the vehicle states through a camera system utilize the motion of feature points in an image. One limiting factor in this method is the current inability

to relate feature points as they enter and leave the camera field of view (FOV). This is a significant practical problem that can lead to degraded performance or instability of visual servo control and vision-based estimation algorithms. The fundamental challenge and potential pervasive impact of developing methods to keep the feature points in the FOV have inspired a variety of research efforts. For example, several researchers2,3 exploit partitioned or switching visual servoing methods to keep the object in the FOV. Other researchers4–9 use potential fields or navigation functions to ensure the visibility of the feature points during closed-loop control. Zoom control10 can be used to adjust the focal length of the camera to keep all features in the FOV during the control task. Some researchers have also investigated methods to enlarge the FOV. For example, image mosaicing11–14 can be used to capture multiple images that are stitched together to obtain a larger image, or multiple images can be fused from multiple cameras15. In addition to research focused on keeping the feature points in the FOV, other research has focused on methods that are invariant to occlusion or to feature points leaving the FOV. For example, a linear probabilistic recursive estimation method16 can be used for reconstructing three-dimensional structure and motion from an integrated sequence of images. Smooth task functions with weighted features that allow visibility changes in the image features during the control task can also be developed17. A variety of feature point tracking algorithms have also been proposed that can predict feature point locations (at least for short-time partial occlusions) based on shape information and/or the time history of the object (i.e., the object dynamics)18–23. An image-based estimation strategy is developed in this paper (and our preliminary efforts24) to determine the position and orientation (i.e., pose) of a UGV by using multiple images from a single camera based on knowledge of initial GPS measurements. Image geometry and homographic relationships are combined with a daisy-chaining method24–26 to achieve the results. Specifically, image geometry and homography relationships are used to determine the Euclidean coordinates of a stationary object in the camera FOV, and the daisy-chaining method is used to link the successive images as new feature points (i.e., new stationary objects) come into and leave the FOV. Then the current pose of the UGV can be related to the pose of the UGV when GPS information was available. The advantages of this approach include: the pose of the UGV can be estimated even if the initial object leaves the field of view, and knowledge of the UGV dynamics is not required.



II. GEOMETRIC MODEL

Consider a single camera attached to a moving UGV observing a planar patch of feature points attached to a stationary object, which is initially identified by four coplanar and non-colinear feature points on the plane π_1, where the reference frame V_1* is attached to the camera.¹ As depicted in Figure 1, the frame I_1 is attached to the plane π_1. The feature points have coordinates s_{1i} ∈ R³ ∀i ∈ {1, 2, 3, 4} in the I_1 coordinate frame. The vector from the origin of V_1* to the origin of I_1 is x*_{f1} ∈ R³, where x*_{f1} is measured in the V_1* coordinate frame, and the rotation from I_1 to V_1* is R_1* ∈ SO(3). The geometry between the coordinate frames can be used to determine the relationship

m̄*_{1i} = x*_{f1} + R_1* s_{1i},   (1)

where m̄*_{1i} ∈ R³ denotes the Euclidean coordinates of the feature points on the plane π_1 expressed in V_1* as

m̄*_{1i} ≜ [x*_{1i}  y*_{1i}  z*_{1i}]^T.   (2)

As the UGV moves, another set of four coplanar and non-colinear feature points will become visible. The new plane is denoted by π_j ∀ j ∈ {2, 3, ..., ∞}. The frame attached to π_j is denoted by I_j. The reference frame attached to the UGV corresponding to the first view of I_j will now be denoted as V_j*. The vector from V_j* to the origin of I_j is x*_{fj} ∈ R³, where x*_{fj} is measured in the V_j* coordinate frame, and the rotation from I_j to V_j* is R_j* ∈ SO(3). As the UGV moves, its position and orientation with respect to π_j will change. The camera frame and its rotation with respect to I_j, at any instance after the initial view, are denoted by V_j(t) and R_j(t) ∈ SO(3), respectively. The Euclidean positions of the feature points of I_j, with respect to V_j* and V_j(t), can be written respectively as

m̄*_{ji} = x*_{fj} + R_j* s_{ji}
m̄_{ji} = x_{fj} + R_j s_{ji},   (3)

where m̄*_{ji}, m̄_{ji}(t) ∈ R³ denote the respective Euclidean coordinates of the feature points on the plane π_j expressed in V_j* and V_j(t) as

m̄*_{ji} ≜ [x*_{ji}  y*_{ji}  z*_{ji}]^T,   m̄_{ji} ≜ [x_{ji}  y_{ji}  z_{ji}]^T.   (4)

Fig. 1. Coordinate Frame Relationships

The Euclidean coordinates of the feature points of I_1 measured in the V_1(t) coordinate frame can be written as

m̄_{1i} = x_{f1} + R_1 s_{1i}.   (5)

After some algebraic manipulation of (1) and (5), the expression for m̄_{1i} in (5) can be rewritten as

m̄_{1i} = x̄_{f1} + R̄_1 m̄*_{1i},   (6)

where R̄_1(t) ∈ SO(3) and x̄_{f1}(t) ∈ R³ are new rotation and translation variables, respectively, defined as

R̄_1 = R_1 R_1*^T   (7)
x̄_{f1} = x_{f1} − R̄_1 x*_{f1}.   (8)

By using the projective relationship depicted in Figure 1,

d*_1 = n*_1^T m̄*_{1i},   (9)

the expression in (6) can be rewritten as

m̄_{1i} = ( R̄_1 + (x̄_{f1}/d*_1) n*_1^T ) m̄*_{1i}.   (10)

In (10), n*_1 ∈ R³ denotes the constant unit normal to the plane π_1 expressed in V_1*, and d*_1 > 0 ∈ R is the distance to the plane π_1 along n*_1. A general relationship can be written for any set of feature points as

m̄_{ji} = ( R̄_j + (x̄_{fj}/d*_j) n*_j^T ) m̄*_{ji}.   (11)

¹ Image processing techniques can often be used to select coplanar and non-colinear feature points within an image. However, if four coplanar target points are not available, then the subsequent development can also exploit the virtual parallax method27, where the non-coplanar points are projected onto a virtual plane.

III. EUCLIDEAN RECONSTRUCTION

III.A. Homography Decomposition

The relationship given by (11) provides a means to quantify the translation and rotation between a coordinate system's location at two points in time and space. Since the pose of V_j(t) and V_j* cannot be directly measured, a Euclidean reconstruction is developed in this section to obtain their position and orientation by comparing multiple images acquired from the camera. Specifically, comparisons are made between m_{ji}(t) and m*_{ji}. To facilitate the subsequent development, the normalized Euclidean coordinates



of the feature points of the planes for V_j(t) and V_j* can be expressed as m_{ji}(t), m*_{ji} ∈ R³, respectively, as

m_{ji} ≜ m̄_{ji}/z_{ji},   m*_{ji} ≜ m̄*_{ji}/z*_{ji}.   (12)

From the expressions given in (11) and (12), the rotation and translation between the coordinate systems V_j(t) and V_j* can now be related in terms of the normalized Euclidean coordinates as

m_{ji} = (z*_{ji}/z_{ji}) ( R̄_j + (x̄_{fj}/d*_j) n*_j^T ) m*_{ji} = α_{ji} H_j m*_{ji}.   (13)

In (13), α_{ji}(t) ∈ R denote depth ratios, and H_j(t) ∈ R^{3×3} denote Euclidean homographies28. Each Euclidean feature point will have a projected pixel coordinate expressed as

p*_{ji} ≜ [u*_{ji}  v*_{ji}  1]^T,   p_{ji} ≜ [u_{ji}  v_{ji}  1]^T,   (14)

where p_{ji}(t) represent the time-varying image-space coordinates of the feature points of the plane π_j in the V_j(t) frame, and p*_{ji} ∈ R³ represent the image-space coordinates of the feature points of the plane π_j in the V_j* frame. To calculate the Euclidean homography given in (13) from pixel information, the projected pixel coordinates are related to m_{ji}(t) and m*_{ji} by the pin-hole camera model as

p_{ji} = A m_{ji},   p*_{ji} = A m*_{ji},   (15)

where A ∈ R^{3×3} is a known, constant, invertible intrinsic camera calibration matrix. By using (13) and (15), the following relationship can be developed:

p_{ji} = α_{ji} ( A H_j A^{-1} ) p*_{ji} = α_{ji} G_j p*_{ji},   (16)

where G_j(t) ∈ R^{3×3} denotes a projective homography. Using four feature points, sets of linear equations can be developed from (16) to determine the projective homographies up to a scalar multiple. Techniques8,29 can be used to decompose the Euclidean homographies to obtain α_{ji}(t), x̄_{fj}(t)/d*_j, R̄_j(t), and n*_j^T for i = 1, 2, 3, 4 and j = 1, 2, ..., ∞.
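To make the pixel-level relationship in (15)–(16) concrete, the following Python/NumPy sketch generates synthetic data for a single plane, projects the feature points through an assumed calibration matrix A, and estimates the projective homography G_1 up to scale from the four correspondences with a standard direct linear transform. All numeric values and helper names (e.g., estimate_projective_homography) are illustrative assumptions, not the authors' implementation; recovering R̄_j, x̄_{fj}/d*_j, and n*_j from H_j = A^{-1} G_j A would then use a standard homography decomposition such as those cited as [8] and [29] (or, for instance, OpenCV's cv2.decomposeHomographyMat).

```python
import numpy as np

def estimate_projective_homography(p_star, p):
    """Estimate G up to scale from pixel correspondences p ~ G p* (four or more points).

    p_star, p: (N, 3) arrays of homogeneous pixel coordinates. Standard DLT:
    each correspondence contributes two rows of a homogeneous linear system.
    """
    rows = []
    for (us, vs, ws), (u, v, w) in zip(p_star, p):
        rows.append([0, 0, 0, -w * us, -w * vs, -w * ws, v * us, v * vs, v * ws])
        rows.append([w * us, w * vs, w * ws, 0, 0, 0, -u * us, -u * vs, -u * ws])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)  # right null-space vector reshaped into G

# --- synthetic scene, illustrative values only ---
A = np.array([[800.0, 0.0, 320.0],   # assumed intrinsic calibration matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n1_star = np.array([0.0, 0.0, 1.0])  # unit normal to pi_1 expressed in V1*
d1_star = 5.0                        # distance to pi_1 along n1*
# four coplanar, non-colinear feature points expressed in V1* (satisfy n1*^T m = d1*)
m_bar_star = np.array([[-1.0, -1.0, 5.0], [1.0, -1.0, 5.0],
                       [1.0, 1.0, 5.0], [-1.0, 1.0, 5.0]])

# assumed camera motion between V1* and V1(t)
ang = np.deg2rad(10.0)
R1_bar = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
x1_bar = np.array([0.4, -0.2, 0.3])

H1 = R1_bar + np.outer(x1_bar, n1_star) / d1_star     # Euclidean homography, cf. (10)/(13)
m_bar = (H1 @ m_bar_star.T).T                         # points expressed in V1(t)
p_star = (A @ (m_bar_star / m_bar_star[:, 2:3]).T).T  # pixels via (12) and (15)
p = (A @ (m_bar / m_bar[:, 2:3]).T).T

G1 = estimate_projective_homography(p_star, p)        # projective homography of (16), up to scale
H1_est = np.linalg.inv(A) @ G1 @ A
H1_est *= H1[2, 2] / H1_est[2, 2]                     # fix the arbitrary scale for comparison
print(np.allclose(H1_est, H1, atol=1e-6))             # expect: True
```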

III.B. Euclidean Reconstruction of the π_1 Plane

Consider a UGV equipped with GPS and a camera capable of viewing a landscape. A technique is developed in this section to estimate the position and orientation using camera data when the GPS signal is not available. A camera has a limited field of view, and motion of a vehicle can cause observed feature points to leave the image. The method presented here chains together pose estimates from sequential sets of tracked points. This approach allows the system to stop tracking a set of image features when it is about to leave the image and to begin tracking a new set of features while maintaining the pose estimate. Thus, the estimation can continue indefinitely and is not limited by the camera's field of view.

The subsequent development assumes that the UGV begins operating with functional GPS and that at least two images can be taken before the GPS goes offline. Without loss of generality, the GPS unit is assumed to be fixed to the origin of V_1*. The homography of those images is decomposed, and α_{1i}(t), x̄_{f1}(t)/d*_1, R̄_1(t), and n*_1^T are obtained. Due to the fact that GPS information from that movement is available, x̄_{f1}(t) is known, and x̄_{f1}(t)/d*_1 can be used to solve for the unknown constant depth d*_1. Since d*_1 is now known, and m*_{1i} is measurable, (9) and (12) can be used to determine z*_{1i} as

z*_{1i} = d*_1 / ( n*_1^T m*_{1i} ).   (17)

The Euclidean coordinates of the feature points on π_1 expressed in V_1* can now be reconstructed by substituting (17) into (12) as

m̄*_{1i} = ( d*_1 / ( n*_1^T m*_{1i} ) ) m*_{1i}.   (18)
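As an illustration of (17)–(18), the short sketch below recovers the unknown depth d*_1 from the decomposed translation x̄_{f1}/d*_1 once x̄_{f1} is known from GPS, and then lifts the normalized coordinates back to Euclidean coordinates. The variable names and numbers are assumptions for the example, not values from the paper.

```python
import numpy as np

# quantities assumed available from the homography decomposition (illustrative values)
x_bar_f1 = np.array([0.4, -0.2, 0.3])          # known from GPS between the two images
x_bar_f1_over_d = x_bar_f1 / 5.0               # decomposition yields x̄_f1 / d1*
n_star_1 = np.array([0.0, 0.0, 1.0])           # unit normal to pi_1 expressed in V1*
m_star_1i = np.array([-0.2, -0.2, 1.0])        # normalized coordinates of one feature point

# solve for the unknown constant depth d1* (ratio of magnitudes)
d_star_1 = np.linalg.norm(x_bar_f1) / np.linalg.norm(x_bar_f1_over_d)

# (17): depth of the feature point along the optical axis of V1*
z_star_1i = d_star_1 / (n_star_1 @ m_star_1i)

# (18): Euclidean coordinates of the feature point expressed in V1*
m_bar_star_1i = z_star_1i * m_star_1i
print(d_star_1, z_star_1i, m_bar_star_1i)
```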

To facilitate the subsequent daisy chaining method, the translation from V_1* to I_1 and the orientation of I_1 with respect to V_1* can also be determined. Without loss of generality, the origin of I_1 is assumed to be m̄*_{11}. The orientation of I_1 with respect to V_1* introduced in (1) can be expressed as

R_1* = [ i_{x1}  i_{y1}  i_{z1} ].   (19)

In (19), i_{x1} ∈ R³ and i_{z1} ∈ R³ are defined as

i_{x1} = ( m̄*_{12} − m̄*_{11} ) / || m̄*_{12} − m̄*_{11} ||   (20)
i_{z1} = −n*_1,   (21)

and i_{y1} ∈ R³ is defined as the cross product of i_{x1} and i_{z1} as

i_{y1} = −n*_1 × ( m̄*_{12} − m̄*_{11} ) / || m̄*_{12} − m̄*_{11} ||.   (22)

In (21) and (22), the normal vector n*_1, and therefore i_{z1}, is known from the homography decomposition of (13) and (16). The result in (18) can be substituted into (20) and (22) to determine i_{x1} and i_{y1}. Based on (19), the rotation R_1* can now be determined. Based on the fact that R_1* can be determined and R̄_1(t) is known from the homography decomposition of (13) and (16), R_1(t) can now be determined from (7). Because the origin of I_1 is selected as m̄*_{11}, the translation x*_{f1} from V_1* to I_1 is equal to m̄*_{11} by definition. Given that d*_1, R̄_1(t), x̄_{f1}(t)/d*_1, and x*_{f1} have been determined, (8) can be used to solve for x_{f1}(t). Once these quantities have been solved for, the position and orientation of the UGV can always be estimated as long as I_1 is in the field of view.
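A compact sketch of (19)–(22), together with the recovery of R_1(t) and x_{f1}(t) from (7)–(8), is given below, assuming the reconstructed points m̄*_{1i} and the decomposition outputs are already available. The helper build_R1_star and all sample values are illustrative assumptions.

```python
import numpy as np

def build_R1_star(m_bar_star_11, m_bar_star_12, n_star_1):
    """(19)-(22): orientation of I1 with respect to V1* from two reconstructed points."""
    i_x1 = (m_bar_star_12 - m_bar_star_11) / np.linalg.norm(m_bar_star_12 - m_bar_star_11)
    i_z1 = -n_star_1                                # (21)
    i_y1 = np.cross(i_z1, i_x1)                     # (22): -n1* x (m̄*_12 - m̄*_11)/||...||
    return np.column_stack((i_x1, i_y1, i_z1))      # (19)

# illustrative inputs (reconstructed via (18) and the homography decomposition)
m_bar_star_11 = np.array([-1.0, -1.0, 5.0])
m_bar_star_12 = np.array([1.0, -1.0, 5.0])
n_star_1 = np.array([0.0, 0.0, 1.0])
R1_bar = np.eye(3)                                  # R̄1(t) from the decomposition
x1_bar = np.array([0.4, -0.2, 0.3])                 # x̄_f1(t) = d1* (x̄_f1/d1*)

R1_star = build_R1_star(m_bar_star_11, m_bar_star_12, n_star_1)
x_f1_star = m_bar_star_11                           # origin of I1 chosen as m̄*_11

R1_t = R1_bar @ R1_star                             # from (7): R1 = R̄1 R1*
x_f1_t = x1_bar + R1_bar @ x_f1_star                # from (8): x_f1 = x̄_f1 + R̄1 x*_f1
print(R1_t, x_f1_t)
```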



III.C. Euclidean Reconstruction of the π_j Plane

As the UGV moves, the π_1 plane will leave the FOV and a new plane will become visible. To exploit the daisy chaining strategy to link the pose of V_1(t) with the pose of the subsequent frames V_j(t) ∀j ∈ {2, 3, ...}, the position and orientation of V_j(t) also needs to be determined. However, the pose of V_j(t) cannot be determined using the same methods used to determine the pose of V_1(t), because GPS measurements are no longer available. To address this issue, it is assumed that at some instant in time the π_{j−1} plane and the π_j plane are both visible. The pose of V_{j−1}(t) can then be used to solve for the pose of V_j*, provided the UGV undergoes some motion while both planes are in the FOV, as depicted in Figure 2. Let that motion start at time t⁻ ∈ R and finish at time t⁺ ∈ R. Due to the fact that π_{j−1} and π_j are both in the FOV during the motion, at time t⁻ V_j* is coincident with V_{j−1}(t⁻), and at time t⁺ V_j(t⁺) is coincident with V_{j−1}(t⁺). Therefore x̄_{fj}(t⁺) can be solved for as

x̄_{fj} = R̄_{j−1}(t⁻) ( x̄_{f(j−1)}(t⁺) − x̄_{f(j−1)}(t⁻) ).

Fig. 2. Geometry used to extract Euclidean position and orientation of a plane using a known plane.

The homography decomposition of (13) for that motion yields x̄_{fj}(t⁺)/d*_j, where d*_j is not known. However, since x̄_{fj}(t⁺) can be calculated, d*_j can be determined. Since d*_j is now known, and m*_{ji} is measurable, (9) and (12) can be used to determine z*_{ji} as

z*_{ji} = d*_j / ( n*_j^T m*_{ji} ).   (23)

The Euclidean coordinates of the feature points on π_j expressed in V_j* can now be reconstructed as

m̄*_{ji} = ( d*_j / ( n*_j^T m*_{ji} ) ) m*_{ji}.   (24)

To facilitate the subsequent daisy chaining method, the translation from V_j* to I_j and the orientation of I_j with respect to V_j* can also be determined. The orientation of I_j with respect to V_j* introduced in (3) can be expressed as

R_j* = [ i_{xj}  i_{yj}  i_{zj} ].   (25)

In (25), i_{xj} ∈ R³ and i_{zj} ∈ R³ are defined as

i_{xj} = ( m̄*_{j2} − m̄*_{j1} ) / || m̄*_{j2} − m̄*_{j1} ||   (26)
i_{zj} = −n*_j,   (27)

and i_{yj} ∈ R³ is defined as the cross product of i_{xj} and i_{zj} as

i_{yj} = −n*_j × ( m̄*_{j2} − m̄*_{j1} ) / || m̄*_{j2} − m̄*_{j1} ||.   (28)

In (27) and (28), the normal vector n*_j is known from the homography decomposition of (13) and (16). Since i_{zj} is known from the homography decomposition, the result in (24) can be substituted into (26) and (28) to determine i_{xj} and i_{yj}. Based on (25), the rotation R_j* can now be determined. Based on the fact that R_j* can be determined and R̄_j(t) is known from the homography decomposition of (13) and (16), R_j(t) can now be determined. Because the origin of I_j is selected as m̄*_{j1}, the translation x*_{fj} from V_j* to I_j introduced in (3) is equal to m̄*_{j1} by definition. Given that d*_j, R̄_j(t), x̄_{fj}(t), and x*_{fj} have been determined, x_{fj}(t) can now be solved for.
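The key step in Section III.C is recovering the unknown plane depth d*_j from the translation x̄_{fj}(t⁺) obtained through the frame coincidences; a minimal sketch of that step and of (23)–(24) follows, with all numeric inputs assumed for illustration only.

```python
import numpy as np

# quantities assumed known from the previous (j-1) chain and the homography decomposition
R_bar_jm1_tminus = np.eye(3)                        # R̄_{j-1}(t-)
x_bar_jm1_tplus = np.array([0.6, 0.1, 0.2])         # x̄_{f(j-1)}(t+), known since d*_{j-1} is known
x_bar_jm1_tminus = np.array([0.2, 0.0, 0.1])        # x̄_{f(j-1)}(t-)
x_bar_fj_over_d = np.array([0.10, 0.025, 0.025])    # x̄_{fj}(t+)/d_j* from the new homography

# translation relating V_j* and V_j(t+) via the coincidence argument
x_bar_fj = R_bar_jm1_tminus @ (x_bar_jm1_tplus - x_bar_jm1_tminus)

# the unknown constant depth of the new plane (ratio of magnitudes)
d_star_j = np.linalg.norm(x_bar_fj) / np.linalg.norm(x_bar_fj_over_d)

# (23)-(24): Euclidean reconstruction of a feature point on pi_j in V_j*
n_star_j = np.array([0.0, 0.0, 1.0])
m_star_ji = np.array([0.1, -0.3, 1.0])              # normalized coordinates
z_star_ji = d_star_j / (n_star_j @ m_star_ji)
m_bar_star_ji = z_star_ji * m_star_ji
print(d_star_j, m_bar_star_ji)
```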

IV. DAISY-CHAINING

In this section, the daisy-chaining method is used to estimate the pose of V_j(t) when GPS is denied and when the original feature points leave the FOV (Fig. 3). Specifically, the previous development can be used to determine the pose of V_j(t) by chaining it to the pose of V_1*, which is known from the initial GPS data. The subsequent development is based on the assumption that at some moment in time the feature points on both π_{j−1} and π_j are visible. The first instant when the feature points on both π_{j−1} and π_j are visible is when π_j first enters the FOV; therefore, the pose of V_{j−1}(t) at that instant is the pose of V_j* (i.e., V_{j−1}(t) and V_j* are coincident, so the translation and rotation of both with respect to I_j are the same).

Fig. 3. Daisy Chaining



For example, at some instant when the feature points on π_1 are visible, a reference image is acquired. The UGV moves and another image is taken. Due to the fact that GPS measurements are available, the methods in Section III.B can be used, and the Euclidean coordinates of the feature points on π_1 can be reconstructed and expressed in V_1(t). The subsequent images of the feature points on π_1 can then be used to construct the homography relationship given in (13), which allows the relationship between V_1(t) and I_1 to be calculated. This means that the position of V_1(t) can be calculated based on I_1 after GPS measurements go offline. At some other point in time, the feature points on π_1 and π_2 are both in the FOV, and a second reference image is acquired. At this point V_1(t) and V_2* are coincident. Assuming that the next image is acquired when both π_1 and π_2 have undergone some motion and the feature points on both planes are still visible (this movement is not shown in Fig. 3), the methods in Section III.C can be used to reconstruct the Euclidean coordinates of the feature points on π_2 expressed in V_2*. The relative pose of I_2 with respect to I_1 expressed in I_1 can now be determined. Given the Euclidean coordinates of the feature points on π_1 and π_2 expressed in V_1(t), the constant relative pose of I_2 with respect to I_1 can be determined. That is, the constant vector x^{I1}_{12} ∈ R³ from the origin of I_1 to the origin of I_2 expressed in I_1, determined when V_1(t) and V_2* are coincident, is

x^{I1}_{12} = R_1^T ( x*_{f2} − x_{f1} ).   (29)

In (29), R_1^T is used to express the difference x*_{f2} − x_{f1} in I_1 (x^{I1}_{12} is constant in I_1 but is time varying when expressed in V_j(t)). The constant rotation R^{I2}_{I1} ∈ SO(3) from I_1 to I_2, determined when V_1(t) and V_2* are coincident, is

R^{I2}_{I1} = R_2*^T R_1.   (30)
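A short sketch of (29)–(30), computing the constant pose of I_2 relative to I_1 at the instant V_1(t) and V_2* coincide; the rotation and translation values below are placeholders standing in for quantities obtained in Sections III.B and III.C, not measured data.

```python
import numpy as np

def rot_z(theta):
    """Simple rotation about the z-axis used to fabricate example data."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# pose of I1 with respect to the camera at the coincidence instant (Section III.B chain)
R1_t = rot_z(np.deg2rad(15.0))          # R1(t): rotation from I1 to V1(t)
x_f1_t = np.array([0.5, -0.3, 4.0])     # x_f1(t): I1 origin expressed in V1(t)

# pose of I2 with respect to V2* (Section III.C), with V2* coincident with V1(t)
R2_star = rot_z(np.deg2rad(-5.0))       # R2*: rotation from I2 to V2*
x_f2_star = np.array([2.0, 0.4, 6.0])   # x*_f2: I2 origin expressed in V2*

# (29): constant vector from the I1 origin to the I2 origin expressed in I1
x_12_I1 = R1_t.T @ (x_f2_star - x_f1_t)

# (30): constant rotation from I1 to I2
R_I1_to_I2 = R2_star.T @ R1_t

print(x_12_I1, R_I1_to_I2)
```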

As the UGV continues to move, the feature points on π_1 will leave the FOV, and the feature points on π_2 and π_3 will eventually be in the FOV at the same time; a third reference image is acquired. In order to estimate the pose of V_j(t), the vector from the origin of V_1* to the origin of V_j(t), as well as the rotation from V_1* to V_j(t), are required. As stated earlier, the pose of I_2 with respect to I_1 expressed in I_1 has been determined, so the vector from the origin of V_1* to the origin of V_2(t) can be expressed in V_1* as

x^{V1}_{V2} = x*_{f1} + R_1* x^{I1}_{12} − R_1* R^{I2 T}_{I1} R_2^T x_{f2},   (31)

even though π_1 is no longer in the FOV. The rotation from V_1* to V_2(t) is equivalent to the rotation from V_1* to I_1 multiplied by the rotation from I_1 to I_2, multiplied by the rotation from I_2 to V_2(t). All of these rotations are known, so the rotation from V_1* to V_2(t) can be determined as

R^{V2}_{V1} = R_2 R^{I2}_{I1} R_1*^T.

Assuming that the next image is acquired when both π_2 and π_3 have undergone some motion and the feature points on both planes are still visible, the methods in Section III.C can be used to reconstruct the Euclidean coordinates of the feature points on π_3 expressed in V_3*. The relative pose of I_3 with respect to I_2 expressed in I_2, as well as the pose of I_3 with respect to I_1 expressed in I_1, can then be determined. Given the Euclidean coordinates of the feature points on π_2 and π_3 expressed in V_2(t), the constant relative pose of I_3 with respect to I_2 can be determined. That is, the constant vector x^{I2}_{23} ∈ R³ from the origin of I_2 to the origin of I_3 expressed in I_2, determined when V_2(t) and V_3* are coincident, is

x^{I2}_{23} = R_2^T ( x*_{f3} − x_{f2} ).   (32)

The constant rotation R^{I3}_{I2} ∈ SO(3) from I_2 to I_3, determined when V_2(t) and V_3* are coincident, is

R^{I3}_{I2} = R_3*^T R_2.   (33)

The constant relative pose of I_1 with respect to I_3 can also be determined. The constant vector x^{I1}_{13} ∈ R³ from the origin of I_1 to the origin of I_3 expressed in I_1 is equivalent to the vector from the origin of I_1 to the origin of I_2 plus the vector from the origin of I_2 to the origin of I_3, both expressed in I_1. Both of these vectors are known, so the vector from the origin of I_1 to the origin of I_3 expressed in I_1 can be determined as

x^{I1}_{13} = x^{I1}_{12} + x^{I1}_{23} = x^{I1}_{12} + R^{I2 T}_{I1} x^{I2}_{23}.

The constant rotation R^{I3}_{I1} ∈ SO(3) from I_1 to I_3 is equivalent to the rotation from I_2 to I_3 multiplied by the rotation from I_1 to I_2. Both of these rotations are known, so the rotation from I_1 to I_3 can be determined as

R^{I3}_{I1} = R^{I3}_{I2} R^{I2}_{I1}.   (34)

As the UGV continues to move, the feature points on π_2 will leave the FOV, and the feature points on π_3 and π_4 will eventually be in the FOV at the same time. As stated earlier, the pose of I_3 with respect to I_1 expressed in I_1 has been determined, so the vector from the origin of V_1* to the origin of V_3(t) can be expressed in V_1* as

x^{V1}_{V3} = x*_{f1} + R_1* x^{I1}_{13} − R_1* R^{I3 T}_{I1} R_3^T x_{f3},

even though π_2 and π_1 are no longer in the FOV. This vector is equivalent to the vector from the origin of V_1* to the origin of I_1, plus the vector from the origin of I_1 to the origin of I_3, plus the vector from the origin of I_3 to the origin of V_3(t), all expressed in V_1*. The rotation from V_1* to V_3(t) is equivalent to the rotation from V_1* to I_1 multiplied by the rotation from I_1 to I_3, multiplied by the rotation from I_3 to V_3(t). All of these rotations are known, so the rotation from V_1* to V_3(t) can be determined as

R^{V3}_{V1} = R_3 R^{I3}_{I1} R_1*^T.

This procedure can be continued indefinitely to calculate the vector from the origin of V_1* to the origin of the V_j(t) frame expressed in the V_1* frame. The general equation for the vector from V_1* to V_j(t) is

x^{V1}_{Vj} = x*_{f1} + R_1* x^{I1}_{1j} − R_1* R^{Ij T}_{I1} R_j^T x_{fj}.   (35)



Likewise, the general equation for the rotation from V_1* to V_j(t) is

R^{Vj}_{V1} = R_j R^{Ij}_{I1} R_1*^T.   (36)
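The full chain in (34)–(36) can be summarized in a few lines of code: starting from the GPS-anchored quantities for I_1, each newly acquired plane contributes a constant relative pose (the link vector expressed in the previous I frame and the constant inter-frame rotation), and the current camera pose follows by composition. The sketch below is a schematic of that bookkeeping with placeholder inputs, not the authors' implementation.

```python
import numpy as np

def chain_pose(x_f1_star, R1_star, links, R_j_t, x_fj_t):
    """Daisy-chain the pose of V_j(t) back to V_1* per (34)-(36).

    links: list of (x_link, R_link) pairs, where the k-th entry is the constant
           vector from I_k to I_{k+1} expressed in I_k and the constant rotation
           from I_k to I_{k+1}.
    R_j_t, x_fj_t: current rotation from I_j to V_j(t) and I_j origin expressed in V_j(t).
    """
    x_1j_I1 = np.zeros(3)          # vector from I_1 to I_j expressed in I_1
    R_I1_to_Ij = np.eye(3)         # rotation from I_1 to I_j
    for x_link, R_link in links:
        # express the new link in I_1 before accumulating, cf. (34)
        x_1j_I1 = x_1j_I1 + R_I1_to_Ij.T @ x_link
        R_I1_to_Ij = R_link @ R_I1_to_Ij
    # (35): origin of V_j(t) expressed in V_1*
    x_V1_Vj = x_f1_star + R1_star @ x_1j_I1 - R1_star @ R_I1_to_Ij.T @ R_j_t.T @ x_fj_t
    # (36): rotation from V_1* to V_j(t)
    R_V1_to_Vj = R_j_t @ R_I1_to_Ij @ R1_star.T
    return x_V1_Vj, R_V1_to_Vj

# placeholder example with two links (I1 -> I2, I2 -> I3)
x_f1_star = np.array([0.0, 0.0, 5.0])
R1_star = np.eye(3)
links = [(np.array([2.0, 0.0, 0.0]), np.eye(3)),
         (np.array([1.5, 0.5, 0.0]), np.eye(3))]
x, R = chain_pose(x_f1_star, R1_star, links, np.eye(3), np.array([0.3, 0.0, 4.0]))
print(x, R)
```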

V. CONCLUSION

A daisy-chaining method is combined with image geometry techniques to estimate the pose of a UGV in a GPS-denied environment. Homography methods are used to extract the pose of a set of feature points, assuming that at least two images can be taken before the GPS goes offline. Image geometry is used to continue to extract information about feature points on another plane using the calculated information of a previous plane. These methods allow relationships between frames to be determined as the UGV moves. By chaining the developed relationships back to the position where GPS data was known, the Euclidean position of the UGV can be estimated.

ACKNOWLEDGEMENTS

This research is supported in part by the NSF CAREER award CMS-0547448, AFOSR contract numbers F49620-03-1-0381 and F49620-03-1-0170, AFRL contract number FA4819-05-D-0011, and by research grant No. US-371505 from BARD, the United States - Israel Binational Agricultural Research and Development Fund at the University of Florida. This work was also accomplished as part of the Department of Energy University Research Program in Robotics (URPR), grant number DE-FG04-86NE37967.

REFERENCES

[1] J. A. Volpe, "Vulnerability Assessment of the Transportation Infrastructure Relying on the Global Positioning System," U. S. Department of Transportation, (2001).
[2] G. Chesi, K. Hashimoto, D. Prattichizzo, and A. Vicino, "Keeping features in the field of view in eye-in-hand visual servoing: a switching approach," IEEE Trans. Robot., Vol. 20, No. 5, pp.908–914 (2004).
[3] N. R. Gans, and S. A. Hutchinson, "A switching approach to visual servo control," Proc. of the IEEE Int. Symp. on Intelligent Control, Canada, Oct. 27–30, pp.770–776 (2002).
[4] J. Chen, D. M. Dawson, W. E. Dixon, and V. K. Chitrakaran, "Navigation function-based visual servo control," Automatica, Vol. 43, No. 7, pp.1165–1177 (2007).
[5] N. Cowan, J. Weingarten, and D. Koditschek, "Visual servoing via navigation functions," IEEE Trans. Robot. Autom., Vol. 18, No. 4, pp.521–533 (2002).
[6] Y. Mezouar, and F. Chaumette, "Path planning for robust image based control," IEEE Trans. Robot. Autom., Vol. 18, No. 4, pp.534–549 (2002).
[7] Y. Mezouar, and F. Chaumette, "Optimal Camera Trajectory with Image-Based Control," The International Journal of Robotics Research, Vol. 22, No. 10, pp.781–803 (2003).
[8] Z. Zhang, and A. Hanson, "Scaled Euclidean 3D reconstruction based on externally uncalibrated cameras," Proc. of the IEEE Int. Symp. on Computer Vision, Florida, Nov. 21–23, pp.37–42 (1995).
[9] P. I. Corke, and S. A. Hutchinson, "A new partitioned approach to image-based visual servo control," IEEE Trans. Robot. Autom., Vol. 17, No. 4, pp.507–515 (2001).
[10] S. Benhimane, and E. Malis, "Vision-based control with respect to planar and nonplanar objects using a zooming camera," Proc. of the IEEE Int. Conf. on Advanced Robotics, Portugal, Jun. 30–Jul. 3, pp.991–996 (2003).
[11] S. Hsu, H. S. Harpreet, and R. Kumar, "Automated mosaics via topology inference," IEEE Comput. Graph. Appl., Vol. 22, No. 2, pp.44–54 (2002).

[12] Y. Schechner, and S. Nayar, "Generalized Mosaicing: High Dynamic Range in a Wide Field of View," Int. J. of Comput. Vision, Vol. 53, No. 3, pp.245–267 (2003).
[13] A. Smolic, and T. Wiegand, "High-resolution video mosaicing," Proc. of the IEEE Int. Conf. on Image Processing, Greece, Oct. 7–10, pp.872–875 (2001).
[14] M. Irani, P. Anandan, J. Bergen, R. Kumar, and S. Hsu, "Efficient representations of video sequences and their applications," Signal Processing: Image Communication, Vol. 8, pp.327–351 (1996).
[15] R. Swaminathan, and S. Nayar, "Non-metric calibration of wide-angle lenses and polycameras," IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, Colorado, Jun. 23–25, pp.413–419 (1999).
[16] A. Chiuso, P. Favaro, H. Jin, and S. Soatto, "Structure from motion causally integrated over time," IEEE Trans. Pattern Anal. Mach. Intell., Vol. 24, No. 4, pp.523–535 (2002).
[17] N. Garcia-Aracil, E. Malis, R. Aracil-Santonja, and C. Perez-Vidal, "Continuous visual servoing despite the changes of visibility in image features," IEEE Trans. Robot., Vol. 21, No. 6, pp.1214–1220 (2005).
[18] M. Isard, and A. Blake, "Condensation – conditional density propagation for visual tracking," Int. J. of Comput. Vision, Vol. 29, No. 1, pp.5–28 (1998).
[19] O. Camps, H. Lim, C. Mazzaro, and M. Sznaier, "A Caratheodory-Fejer approach to robust multiframe tracking," Proc. of the IEEE Int. Conf. on Computer Vision, France, Oct. 13–16, pp.1048–1055 (2003).
[20] R. Lublinerman, M. Sznaier, and O. Camps, "Dynamics based robust motion segmentation," IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, New York, Jun. 17–22, pp.1176–1184 (2006).
[21] V. Morariu, and O. Camps, "Modeling correspondences for multi-camera tracking using nonlinear manifold learning and target dynamics," IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, New York, Jun. 17–22, pp.545–552 (2006).
[22] M. Niethammer, A. Tannenbaum, and S. Angenent, "Dynamic active contours for visual tracking," IEEE Trans. Autom. Control, Vol. 51, No. 4, pp.562–579 (2006).
[23] B. North, A. Blake, M. Isard, and J. Rittscher, "Learning and classification of complex dynamics," IEEE Trans. Pattern Anal. Mach. Intell., Vol. 22, No. 9, pp.1016–1034 (2000).
[24] K. Dupree, N. R. Gans, W. MacKunis, and W. E. Dixon, "Euclidean Calculation of Feature Points of a Rotating Satellite: A Daisy Chaining Approach," Proc. of the American Controls Conf., New York, Jul. 11–13, pp.3874–3879 (2007).
[25] G. Hu, S. Mehta, N. R. Gans, and W. E. Dixon, "Daisy Chaining Based Visual Servo Control Part I: Adaptive Quaternion-Based Tracking Control," Proc. of the IEEE Multi-conference on Systems and Control, Singapore, Oct. 1–3, pp.1474–1479 (2007).
[26] G. Hu, N. R. Gans, S. Mehta, and W. E. Dixon, "Daisy Chaining Based Visual Servo Control Part II: Extensions, Applications and Open Problems," Proc. of the IEEE Multi-conference on Systems and Control, Singapore, Oct. 1–3, pp.729–734 (2007).
[27] E. Malis, and F. Chaumette, "2 1/2 D Visual Servoing with Respect to Unknown Objects Through a New Estimation Scheme of Camera Displacement," Int. J. Comput. Vision, Vol. 37, No. 1, pp.79–97 (2000).
[28] O. Faugeras, Three-Dimensional Computer Vision, The MIT Press, Cambridge, USA (2001).
[29] O. Faugeras, and F. Lustman, "Motion and Structure From Motion in a Piecewise Planar Environment," Int. J. Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3, pp.485–508 (1988).


