Proceedings of ICAR 2003 The 11th International Conference on Advanced Robotics Coimbra, Portugal, June 30 - July 3, 2003

Line Following Visual Servoing for Aerial Robots Combined with Complementary Sensors

Geraldo F. Silveira*, José R. Azinheira†, Patrick Rives‡, Samuel S. Bueno*

* Renato Archer Research Center – LRVC/CenPRA, Rod. D. Pedro I, Km 143,6, CEP 13082-120, Campinas/SP, Brazil. {First name.Last name}@cenpra.gov.br

† Instituto Superior Técnico – IDMEC/IST, Av. Rovisco Pais, 1049-001, Lisboa, Portugal. [email protected]

‡ INRIA Sophia Antipolis, 2004 Route des Lucioles, BP 93, 06902 Sophia Antipolis Cedex, France. [email protected]

Abstract: This article addresses the problem of visual line following for aerial robotic vehicles, considering an airship as platform. Previous work showed that targets characterized by three parallel lines make it possible to ensure a unique pose for the aerial robot. Here, two-line targets are considered, which leaves two d.o.f. unconstrained by the visual servo control. In this case, in addition to the visual signals provided by an onboard camera, complementary sensory information is needed to fully control the vehicle states. The sensor suite therefore also comprises an airspeed sensor and a sensor to measure the airship rotation around its longitudinal axis. A two-line following sensor-based scheme using an output error LQ regulator with PID structure is detailed. Simulation results, obtained by using a complete airship dynamic model, are presented.

1 Introduction

Image-based visual servoing techniques have been applied to control different robotic systems. Previous works in the field of visual servoing for aerial robots have addressed the stabilization problem [1, 2] and vertical landing [3] using small indoor blimps and helicopters. Concerning outdoor robotic airships, the authors have also proposed a hovering solution [4] and a strategy for line-following tasks [5].

The 2D visual servoing technique applied to the 3D line following problem presented in [5] requires a minimum of three lines in the field of view to ensure a unique pose for the robot (except for the position along the lines, for obvious reasons). This paper deals with more realistic inland scenarios where targets usually have no more than two lines available for tracking (e.g., the borders of a river, a pipeline, etc.). In this case, it is shown that, besides the translation collinear to the 3D lines, another degree of freedom is left unconstrained by the visual servoing.

The strategy proposed in this work is validated in simulation using a realistic dynamic model of the Project AURORA [6] outdoor airship (see Fig. 1). This project focuses on the establishment of the technologies required for substantially autonomous operation of unmanned robotic airships for environmental monitoring and aerial inspection missions. Airships are non-holonomic, underactuated, dynamic systems, with further difficulties arising from their intrinsic high sensitivity to disturbances such as wind and gusts.

Figure 1: Airship in autonomous flight.

This article proposes a solution for controlling the motions of aerial robots in 3D line following tasks using visual signals, provided by an onboard camera, with the aid of complementary sensors to cope with the two unconstrained degrees of freedom that appear when tracking only two lines in the image. One sensor is used to measure the airspeed and another to measure the airship rotation around its longitudinal axis (roll angle). An output error LQ regulator with a PID structure is implemented to satisfy the performance requirements, namely low oscillatory behavior with zero steady-state error for ramp altitude reference profiles and for constant disturbances. Concerning the image error in windy cases, a simple solution is also proposed for completeness, which requires a sensor on the ground to measure wind speed and direction. The remainder of the article is organized as follows. Section 2 covers the aspects related to visual servoing. Section 3 briefly presents the dynamics of the aerial vehicle used to validate the strategy, along with references for further details, while Section 4 presents the design of the vision-based controller. The results are illustrated in Section 5, and final conclusions are summarized in Section 6.

2 Visual Servoing Aspects

The mapping between the motion of the image feature parameters and the motion of a frame attached to the robot is modeled as [7, 8]

ṡ(ξ, t) = J(ξ) ξ̇(t) + ∂s/∂t    (1)

where J = (∂s/∂ξ_c)(∂ξ_c/∂ξ) = LR is the image Jacobian, ξ ∈ ℝ³ × SO(3) is the pose of the robot frame w.r.t. an inertial frame, L is the interaction matrix, R is the transformation matrix from the camera frame to the robot frame, and ∂s/∂t represents possible target motion. Using the polar representation for the scene right and left lines as in [9], the image feature parameter vector can be constructed as s(ξ, t) = [ϑ₁, ρ₁, ϑ₂, ρ₂]ᵀ. An analytical expression for the interaction matrix when the image features are general algebraic curves can be derived (for more details, see [10]). In the case of straight lines, a general form of the interaction matrix may then be obtained for each line:

L_ϑ = [ λ_ϑ cos ϑ   λ_ϑ sin ϑ   −λ_ϑ ρ   −ρ cos ϑ   −ρ sin ϑ   −1 ]
L_ρ = [ λ_ρ cos ϑ   λ_ρ sin ϑ   −λ_ρ ρ   (1 + ρ²) sin ϑ   −(1 + ρ²) cos ϑ   0 ]    (2)

where λ_ϑ and λ_ρ can be computed from the equation of the ground plane in the camera frame. In the present case, the interaction matrix L = [L_ϑ1ᵀ, L_ρ1ᵀ, L_ϑ2ᵀ, L_ρ2ᵀ]ᵀ associated with the visual signals s is a 4 × 6 matrix, which allows controlling only four degrees of freedom of the airship configuration space. The two remaining degrees of freedom not constrained by the vision task, which are the forward position and a combination of the roll angle and the lateral position, belong to the null space of L, N(L), and have to be controlled by other airship sensors.

3 Dynamics of the Testbed

The development and validation of the visual servoing strategy proposed in this work uses a simulation environment based on a realistic 6 d.o.f. dynamic model of the AURORA airship. Its non-linear dynamics is based on experimental data acquired from wind tunnel tests of an airship with the same diameter/length ratio (1:4) and adjusted to the model (see [11] for further details), and may be written as

M_a v̇ = [f_a; t_a] + [f_g; t_g] + [f_p; t_p] + [f_k; t_k] + [f_w; t_w]    (3)

where v = [ν, ω]ᵀ = [u, v, w, p, q, r]ᵀ is the vector of inertial velocities, composed of the linear and angular velocities of the vehicle in the body frame; M_a ∈ ℝ⁶ˣ⁶ is the generalized apparent mass matrix, which includes both the airship actual inertia and the virtual inertia elements associated with the dynamics of buoyant vehicles; and the remaining terms represent the forces and torques produced by aerodynamics, gravity, propulsion, kinematic inertia, and wind. As control inputs, the lighter-than-air platform has the deflection of the tail surfaces (elevator δe, aileron δa, and rudder δr), a pair of main thrusters with |δT|max = 40 N, and a small stern thruster δt, as depicted in Fig. 2. Although the airship has '×'-shaped aerodynamic surfaces, they generate the equivalent commands of the classical '+'-shaped ones. The engines can be vectored within δv ∈ [−30°, +120°]. The primary objective of the work is to perform line following tasks with the vehicle under aerodynamic flight, in which case the control input is u = [δe, δT, δa, δr]ᵀ.
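As a numerical check of the two-line analysis of Section 2, the sketch below stacks the interaction matrices of Eq. (2) for two symmetric lines into the 4 × 6 matrix L and verifies that exactly two degrees of freedom are left in its null space. The feature values and the λ coefficients are arbitrary illustrative numbers, not values from the paper's simulations.

```python
import numpy as np

def L_line(theta, rho, lam_t, lam_r):
    """Interaction matrix rows (L_theta, L_rho) of one image line, Eq. (2)."""
    c, s = np.cos(theta), np.sin(theta)
    L_theta = [lam_t * c, lam_t * s, -lam_t * rho, -rho * c, -rho * s, -1.0]
    L_rho   = [lam_r * c, lam_r * s, -lam_r * rho,
               (1 + rho**2) * s, -(1 + rho**2) * c, 0.0]
    return np.array([L_theta, L_rho])

# Two symmetric lines (illustrative feature parameters and plane coefficients).
L = np.vstack([L_line( 0.15, 0.02, -0.5, -0.5),
               L_line(-0.15, 0.02, -0.5, -0.5)])   # stacked 4x6 matrix

rank = np.linalg.matrix_rank(L)
null_dim = L.shape[1] - rank
print(rank, null_dim)   # the vision task constrains 4 d.o.f., leaving 2 free
```

The 2-dimensional null space is exactly the subspace (forward translation plus a roll/lateral combination) that the complementary sensors must handle.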

The wind disturbance w ∈ ℝ⁶ must also be considered.

Figure 2: Airship actuators.

The non-linear dynamics of the system (3), along with its kinematics, may be written as

v̇ = g(v, p, θ, u, w)    (4)
ṗ = S(θ)⁻¹ ν    (5)
θ̇ = T(θ) ω    (6)

with initial conditions v(t₀) = v₀, p(t₀) = p₀, θ(t₀) = θ₀, where p = [N, E, D]ᵀ represents the 3D position (North, East and Down) of the robot w.r.t. the earth frame. S and T are transformation matrices that depend on the robot attitude, which is described by the roll, pitch and yaw angles θ = [φ, θ, ψ]ᵀ.

4 Sensor-based Controller Design

4.1 Construction of a linearized system

For the design of the controller, the system is linearized around the trim condition, which is a flight aligned with the ground lines at constant airspeed and altitude. This leads to the continuous linear system

δẋ = F δx + G δu,    (7)

with F ∈ ℝ¹¹ˣ¹¹, G ∈ ℝ¹¹ˣ⁴, δx = [δv, δp′, δθ]ᵀ ∈ ℝ¹¹, and the position vector without the longitudinal component δp′ = [δE, δD]ᵀ (without loss of generality, the lines are taken as oriented to North). The δ(·) corresponds to perturbations around the trim values, that is,

x = x_trim + δx  and  u = u_trim + δu.    (8)

Considering a motionless target (∂s/∂t = 0), and using an image Jacobian obtained for the desired image features s = s*(t) computed at the final airship pose δξ* = [δp′*, δθ*]ᵀ, i.e. J = Ĵ(s*, D*), one has

e_s = ∫_{t_{s*}}^{t} ṡ dt = Ĵ(s*, D*) ∫_{t_{s*}}^{t} [ S_trim⁻¹  0 ; 0  T_trim ] δv dt,    (9)

from the integration of Eq. (1) along with (5) and (6) linearized. Then, defining the vector of image feature error as e_s(δξ, t) = s(δξ) − s*(t) and considering δξ* = 0, the error in the image can be related to the airship position through

e_s = Ĵ′(s*(t), D*(t)) δξ,    (10)

where Ĵ′(s*(t), D*(t)) comes from the columns of Ĵ(s*(t), D*(t)) after discarding the one corresponding to the longitudinal position (since δp′ = [δE, δD]ᵀ). In order to control the unconstrained d.o.f. (see Section 2), the corresponding errors are also included in the final system: [e_u, e_φ]ᵀ = W(δx − δx*) = [δu − δu*, δφ − δφ*]ᵀ. Then, the task error vector may be constructed as e = [e_u, e_φ, e_s]ᵀ. A PID compensator design problem is more conveniently formulated in terms of an augmented state equation. Defining a new variable as the integral of the errors,

e_I = ∫ e dt = ∫ [e_u, e_φ, e_s]ᵀ dt    (11)

and from Eqs. (1) and (10), an augmented continuous system may be written

[ δẋ ; ė_I ] = [ F , 0 ; [ W ; [ 0  Ĵ′(s*(t), D*(t)) ] ] , 0 ] [ δx ; e_I ] + [ G ; 0 ] δu

[ e ; e_I ; ė ] = [ [ W ; [ 0  Ĵ′(s*(t), D*(t)) ] ] , 0 ; 0 , I₆ ; [ W ; [ 0  Ĵ(s*(t), D*(t)) ] ] F , 0 ] [ δx ; e_I ].    (12)

The augmented, complete system is then discretized through a zero-order hold with a sampling period of Ts = 0.1 s, specified after analyzing the system bandwidth, which results in the following open-loop system:

x(k + 1) = Φ x(k) + Γ u(k)    (13)
e(k) = Ĉ x(k)    (14)

where Φ = e^{F̄ Ts} and Γ = ∫₀^{Ts} e^{F̄ η} dη Ḡ. The (̄·) denotes the augmented matrix.
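The zero-order-hold pair (Φ, Γ) of Eqs. (13)-(14) can be obtained from a single matrix exponential of a block matrix. The sketch below uses a small illustrative double-integrator system in place of the augmented airship model; the block-matrix construction itself is standard.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(F, G, Ts):
    """Phi = exp(F*Ts) and Gamma = int_0^Ts exp(F*eta) deta * G,
    both read off from the exponential of [[F, G], [0, 0]] * Ts."""
    n, m = G.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = F
    M[:n, n:] = G
    Md = expm(M * Ts)          # [[Phi, Gamma], [0, I]]
    return Md[:n, :n], Md[:n, n:]

# Toy continuous system: a double integrator, sampled at Ts = 0.1 s.
F = np.array([[0.0, 1.0], [0.0, 0.0]])
G = np.array([[0.0], [1.0]])
Phi, Gamma = zoh_discretize(F, G, Ts=0.1)
# Phi = [[1, 0.1], [0, 1]], Gamma = [[0.005], [0.1]]
```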

4.2 The LQ regulator approach

An optimal discrete-time constant-gain Linear Quadratic regulator is used, with the control effort resulting from the minimization of the cost function

min_u J(u) = (1/2) Σ_{k=0}^{∞} [ xᵀ(k) Q x(k) + uᵀ(k) R u(k) ]    (15)

subject to the constraint expressed by Eq. (13), where the weighting matrices Q ≥ 0 and R > 0 are to be selected. For output error regulation, a common solution is to choose Q = Ĉᵀ Q Ĉ. The optimal control input is given as

δu°(k) = −K∞ x(k),    (16)

provided that the pair {Φ, Γ} is completely controllable. The steady-state gain (k → ∞) is

K∞ = [ R + Γᵀ S∞ Γ ]⁻¹ Γᵀ S∞ Φ,    (17)

derived from the solution of the discrete-time algebraic Riccati equation

S∞ = Φᵀ { S∞ − S∞ Γ [ R + Γᵀ S∞ Γ ]⁻¹ Γᵀ S∞ } Φ + Q.    (18)

However, due to the quadratic appearance of S∞, it is usually difficult to find an analytical solution, and a numerical one is required. Using the eigenvector decomposition method,

S∞ = Λ X⁻¹,    (19)

where [X, Λ]ᵀ is the matrix of eigenvectors associated with the stable eigenvalues of the control Hamiltonian matrix, the control gain in Eq. (17) can be computed (see [12] for further details on optimal control).

4.3 The output error command through Least-Squares estimation

For the purpose of sensor-based line following tasks, where only the output variables are available for feedback, some form of state estimation has to be employed. Using a Least-Squares formulation, the best estimate of x(k) can be found by minimizing the sum of the squares of the fit error in Eq. (14):

min_x L = (1/2) [ e − Ĉx ]ᵀ [ e − Ĉx ],    (20)

whose solution, from ∂L/∂x = −Ĉᵀ[ e − Ĉx ] = 0, results in x(k) = [ ĈᵀĈ ]⁻¹ Ĉᵀ e(k) or, more generally, uses the Moore-Penrose pseudo-inverse:

x(k) = Ĉ⁺ e(k).    (21)

Thus, the final control input obtained from Eqs. (8), (16) and (21) is

u°(k) = u_trim − K∞ Ĉ⁺ e(k).    (22)
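The synthesis chain of Eqs. (15)-(22) — Riccati solution, steady-state gain, and pseudo-inverse output-error command — can be sketched numerically. The matrices below are small illustrative stand-ins, not the 17-state augmented airship system, and `solve_discrete_are` replaces the eigenvector-decomposition method of Eq. (19).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

Phi = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy discretized dynamics
Gamma = np.array([[0.005], [0.1]])
C_hat = np.array([[1.0, 0.0]])             # output map: position error only
Q = C_hat.T @ np.eye(1) @ C_hat            # output-error weighting, Q = C^T Q C
R = np.eye(1)

S_inf = solve_discrete_are(Phi, Gamma, Q, R)             # Eq. (18)
K_inf = np.linalg.solve(R + Gamma.T @ S_inf @ Gamma,
                        Gamma.T @ S_inf @ Phi)           # Eq. (17)

# Output-error command: estimate the state by pseudo-inverse, then feed back.
u_trim = np.zeros(1)
e = np.array([0.5])                        # measured output error
x_hat = np.linalg.pinv(C_hat) @ e          # Eq. (21)
u = u_trim - K_inf @ x_hat                 # Eq. (22)

closed_loop = np.linalg.eigvals(Phi - Gamma @ K_inf)  # should lie in unit circle
```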

5 Results

The simulation results shown here correspond to a camera placed in the gondola, 1.5 m below and 1.25 m ahead of the airship center of volume. In order to avoid steady-state image feature errors in the parameters ϑi, i = 1, 2, which are closely related to the altitude, the camera tilt angle was set according to the pitch angle obtained in the linearization step, θ_tilt = −θ_trim. In all simulation cases, the initial Euler angles were set to θ₀ = θ_trim = [0, 0, 0]ᵀ, the initial velocity vector to v₀ = [10 m/s, 0, w_trim, 0, 0, 0]ᵀ, and the complementary signals provided by the onboard airspeed and roll angle sensors are controlled taking φ* = 0 and u* = V_t* = 10 m/s, where V_t is the true airspeed. Modeling a line as an intersection of two planes, the corresponding desired visual signals were obtained for an aligned flight with both lines lying symmetrically in the image plane, that is at E* = 0, D* = −h*(t), θ*(t) = [0, θ*(t), 0]ᵀ, through

ϑ*_i(t) = arctan( ε_i l cos θ*(t) / (2 h*(t)) )
ρ*_i(t) = −ε_i l sin θ*(t) / √( 4 h*²(t) + l² cos² θ*(t) )    (23)

where i = 1, 2, ε_i = −1, +1, and l = 3 m is the horizontal distance between the lines. Although several other simulations were carried out, only two of them are illustrated here due to space limitations: a nominal case and a case considering the most severe disturbances that the platform is supposed to face during its normal operation. In addition, the effectiveness of using the camera pan as a further d.o.f. to zero the error in the image parameters ρi, i = 1, 2, during windy conditions is also demonstrated. In this case, the complementary sensor suite also includes the wind speed and direction w.r.t. the lines, measured on the ground.
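The desired visual signals of Eq. (23) can be evaluated directly; the sketch below (the helper name is ours) reproduces the aligned-flight configuration with l = 3 m and an illustrative altitude of h* = 25 m.

```python
import numpy as np

def desired_features(h_star, theta_star, l=3.0):
    """Desired [theta1*, rho1*, theta2*, rho2*] from Eq. (23)."""
    s = []
    for eps in (-1.0, +1.0):   # eps_i selects the left/right line
        theta_i = np.arctan(eps * l * np.cos(theta_star) / (2.0 * h_star))
        rho_i = (-eps * l * np.sin(theta_star)
                 / np.sqrt(4.0 * h_star**2 + l**2 * np.cos(theta_star)**2))
        s += [theta_i, rho_i]
    return np.array(s)

s_star = desired_features(h_star=25.0, theta_star=0.0)
# For level flight (theta* = 0) the rho_i* vanish and the two theta_i* are
# symmetric, about -/+0.06 rad here.
```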

5.1 Nominal conditions

The results for nominal conditions, depicted in Fig. 3, were obtained for the initial 3D airship position p₀ = [−300, −10, −25]ᵀ m, with the aim of aligning w.r.t. the lines at the beginning of the task and of following a ramp altitude profile (rate of −0.75 m/s) after 30 s of system evolution. The desired altitude to be reached was 5 m.

The adherence of the acquired visual signals (Subfigs. 3(a) and 3(b)) and complementary sensor signals (Subfigs. 3(c) and 3(d)) to the desired ones, illustrated by dashed lines, can be observed. The reference 3D path and the corresponding evolution of the airship position are shown in Subfigs. 3(e) and 3(f).

5.2 Severe conditions

The initial conditions for the severe case, whose results are depicted in Fig. 4, were the same as in Subsection 5.1, but with a horizontal wind speed of v_w = 3 m/s at an incidence angle of β_w = 15° w.r.t. the lines, a gust of 1 m/s, and Gaussian sensor noise in each line (i.e., independent channels) of up to 5% of the nominal value for ϑi and up to 1% for ρi, i = 1, 2. A zero-order hold discrete filter was then introduced in the derivative term of the control law.
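The severe-case measurement noise (independent Gaussian channels, scaled to the nominal feature values) can be sketched as follows; the nominal feature vector and the helper name are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_features(s, frac_theta=0.05, frac_rho=0.01):
    """Add per-channel Gaussian noise: std up to 5% of nominal for theta_i,
    1% for rho_i, with s = [theta1, rho1, theta2, rho2]."""
    sigma = np.abs(s) * np.array([frac_theta, frac_rho, frac_theta, frac_rho])
    return s + rng.normal(0.0, 1.0, size=4) * sigma

s_nominal = np.array([-0.06, 0.001, 0.06, -0.001])   # illustrative values
s_meas = noisy_features(s_nominal)
```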

Figure 3: Results for nominal conditions. Horizontal axes represent distance (m) along the lines. Panels: (a) ϑi and ϑ*i (deg); (b) normalized ρi and ρ*i; (c) φ and φ* (deg); (d) Vt and Vt* (m/s); (e) E and E* (m); (f) h and h* (m).

Figure 4: Parameters evolution for severe conditions. Horizontal axes represent distance (m) along the lines. Panels: (a) ϑi and ϑ*i (deg); (b) normalized ρi and ρ*i; (c) φ and φ* (deg); (d) Vt and Vt* (m/s); (e) E and E* (m); (f) h and h* (m).


Even though the sensor signals were kept under control, as shown in Subfigs. 4(a) to 4(d), a steady-state error in the parameters ρi, i = 1, 2, can be observed in Subfig. 4(b). The reason is the constant wind disturbance acting on the airship.
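The size of the correction required by the pan solution of Section 5.3 can be gauged for this wind case. Assuming the relation ψ_pan = −arcsin(v_w sin β_w / V_t) of Eq. (24), with v_w = 3 m/s, β_w = 15° and V_t = 10 m/s:

```python
import numpy as np

def pan_angle(v_w, beta_w_deg, V_t):
    """Camera pan angle, Eq. (24), compensating the wind-induced crab angle."""
    return -np.arcsin(v_w * np.sin(np.radians(beta_w_deg)) / V_t)

psi_pan = pan_angle(3.0, 15.0, 10.0)
print(np.degrees(psi_pan))   # about -4.45 deg
```

Only a few degrees of pan are needed, well within the range of a typical pan unit.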

5.3 Pan solution

A simple solution for zeroing the error in ρi, i = 1, 2, under windy conditions (Subfig. 4(b)) consists of using the additional degree of freedom provided by the camera pan unit, so that the airship remains aligned with the relative wind direction while the camera is oriented toward the lines, that is,

ψ_pan = − arcsin( v_w sin β_w / V_t ).    (24)

As a remark, a simple increase of the integral gains of the control law, while keeping the pan aligned with the airship, would induce an oscillatory behavior due to the conflicting objectives: orientation of the vehicle w.r.t. the lines, against the inherent alignment of aerial vehicles with the relative wind. The validation of this strategy is demonstrated in Fig. 5, obtained with the following initial conditions: p₀ = [−300, 0, −10]ᵀ m, and wind conditions of v_w = 3 m/s with β_w = 15° starting after 5 s of system evolution (≈ −250 m along the lines).

Figure 5: Zeroing the error in ρi, i = 1, 2, for windy nominal conditions. Horizontal axes represent distance (m) along the lines. Panels: (a) without pan usage; (b) with the solution.

6 Conclusions

This work presented a strategy for line following tasks by aerial robots using visual targets characterized by two lines. To control the two unconstrained d.o.f. resulting from such image features, a suitable complementary sensor set is used. Considering a robotic airship as platform, an LQ-optimal control law is derived for the augmented system, for which a stability proof is demonstrated. Simulation results are shown at the end of the article to validate the proposed methodology. Real flight experiments using this scheme will be conducted in the near future.

Acknowledgments

This work is partially funded by the Brazilian agency CNPq under grants 680260/01-3 (CNPq – ProTeM-CC / INRIA) and 910094/99-3 (CNPq / ICCTI). Dr. José R. Azinheira was also supported by the Portuguese Operational Science Program (POCTI), co-financed by the European FEDER Program.

References

[1] H. Zhang and J. P. Ostrowski, “Visual servoing with dynamics: Control of an unmanned blimp,” in Proc. of the IEEE International Conf. on Robotics and Automation, Michigan, USA, May 1999, pp. 618–623.
[2] T. Hamel and R. Mahony, “Visual servoing of an underactuated dynamic rigid-body system: An image-based approach,” IEEE Trans. on Robotics and Automation, vol. 18, no. 2, pp. 187–198, April 2002.
[3] O. Shakernia et al., “Vision guided landing of an unmanned aerial vehicle,” in Proc. of the IEEE 38th Conf. on Decision and Control, Arizona, USA, December 1999, pp. 4143–4148.
[4] J. R. Azinheira et al., “Visual servo control for the hovering of an outdoor robotic airship,” in Proc. of the IEEE International Conf. on Robotics and Automation, Washington, USA, 2002, pp. 2787–2792.
[5] G. F. Silveira et al., “Optimal visual servoed guidance of outdoor autonomous robotic airships,” in Proc. of the American Control Conf., USA, 2002, pp. 779–784.
[6] S. S. Bueno et al., “Project AURORA: Towards an autonomous robotic airship,” in Workshop on Aerial Robotics, IEEE Int. Conf. on Intelligent Robots and Systems, Lausanne, Switzerland, October 2002.
[7] B. Espiau, F. Chaumette, and P. Rives, “A new approach to visual servoing in robotics,” IEEE Trans. on Robotics and Automation, vol. 8, no. 3, pp. 313–326, 1992.
[8] S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Trans. on Robotics and Automation, vol. 12, no. 5, pp. 651–670, October 1996.


[9] P. Rives and J.-J. Borrelly, “Visual servoing techniques applied to an underwater vehicle,” in Proc. of the IEEE Int. Conf. on Robotics and Automation, vol. 3, Albuquerque, USA, April 1997, pp. 1851–1856.
[10] P. Rives, R. Pissard-Gibollet, and L. Pelletier, “Sensor-based tasks: From the specification to the control aspects,” in 6th Int. Symposium on Robotics and Manufacturing, Montpellier, France, May 1996.
[11] S. B. V. Gomes and J. J. G. Ramos, “Airship dynamic modeling for autonomous operation,” in Proc. of the IEEE International Conf. on Robotics and Automation, Leuven, Belgium, May 1998, pp. 3462–3467.
[12] A. E. Bryson and Y. C. Ho, Applied Optimal Control – Optimization, Estimation and Control. Washington D.C., USA: Halsted Press, 1975.
