XVI CONGRESSO BRASILEIRO DE ENGENHARIA MECÂNICA 16th BRAZILIAN CONGRESS OF MECHANICAL ENGINEERING

VISUAL SERVO CONTROL OF NONHOLONOMIC MOBILE ROBOTS Geraldo F. Silveira Robotics and Computer Vision Laboratory/ITI PO Box 6162, 13081-970, Campinas-SP Brazil [email protected]

J. Reginaldo H. Carvalho Robotics and Computer Vision Laboratory/ITI PO Box 6162, 13081-970, Campinas-SP Brazil [email protected]

Pedro Mitsuo Shiroma Robotics and Computer Vision Laboratory/ITI PO Box 6162, 13081-970, Campinas-SP Brazil [email protected]

Patrick Rives INRIA Sophia-Antipolis, 2004 Route des Lucioles, 06565 Valbonne Cedex-France [email protected]

Samuel Siqueira Bueno Robotics and Computer Vision Laboratory/ITI PO Box 6162, 13081-970, Campinas-SP Brazil [email protected]

Abstract. This work describes the application of visual servoing to a mobile robot, having as input the image from a camera mounted on a pan-tilt unit. The solution is based on the so-called Interaction Matrix (IM). The velocities of the image feature parameters are used to compute control signals for the robot actuators without any state reconstruction or camera calibration scheme. The paper also presents a study of the influence of the DOFs added by the pan-tilt on the maneuverability of the robotic ensemble, in two case studies: i) the pan and robot steering motions share the same rotation axis; ii) the rotation axes of the pan motion and the robot are different. Experiments were done with a Nomad 200 mobile robot equipped with a B&W CCD camera mounted on a pan-tilt unit. This methodology finds application in several real scenarios, like trajectory tracking, parking and docking of AGVs. We are also investigating its application to the automatic landing of an unmanned aerial vehicle.

Keywords. Mobile robots, visual servoing, robotic vision.

1. Introduction

How to use vision has always been a major research area in robotics. However, the computational demand of image processing and parameter extraction makes computer vision solutions very difficult to deploy in real problems. This happens mainly due to the combination of the high data density of images, the complete lack of depth information and the lack of a rigid linkage between the mobile robot and the environment. Most applications work in the robot's configuration space and, thus, state reconstruction demands a huge computational effort, making real-time implementation prohibitive. Moreover, camera parameters are usually very sensitive to lighting conditions, and automatic calibration procedures are still an open field of research. In the case of vision-based control of mobile robots, most of the platforms in use are subject to nonholonomic constraints, bringing us to an area of research with open problems, especially related to dynamics modeling and controller design.

In this paper, the methodology proposed in (Espiau et al., 1992) is applied to the visual servoing of a class of wheeled nonholonomic mobile robots. The solution is based on the so-called Interaction Matrix (IM), whose purpose is to extend the robotics Jacobian to the image plane, utilizing certain geometric primitives as reference images. The velocities of the image feature parameters are then used to compute control signals for driving the robot actuators without any state reconstruction or camera calibration scheme. The paper also presents a study of the influence of the DOFs (degrees of freedom) added by the pan-tilt on the maneuverability of the robotic ensemble in two case studies: i) the pan and robot steering motions share the same rotation axis; ii) the rotation axes of the pan motion and the robot are different. It is shown that, although in (i) the Jacobian is singular, the positioning is still possible by subdividing the main problem into two. However, as the two subproblems are not decoupled, the solution is not ensured for all initial conditions. On the other hand, for (ii) the non-singularity of the Jacobian ensures the solution of the problem when utilizing the IM methodology.

The setup used in the experiments is based on a Nomad 200 mobile robot with a monocular non-calibrated black and white CCD camera mounted on a pan-tilt unit and a Data Translation DT-3155 frame grabber. In terms of software, all


algorithms were implemented using Tcl/Tk and run on a 200 MHz Intel Pentium PC embedded in the robot, under Linux. This methodology finds application in several real scenarios, like trajectory tracking, parking and docking of AGVs. We are also investigating its application to the automatic landing of an unmanned aerial vehicle, as well as road following; see (Silveira et al., 2001b) and (Silveira et al., 2001a).

2. Problem Formulation

The main objective of this paper is to develop a full vision-based control application which is both useful and feasible using off-the-shelf components. The problem considered here consists of positioning a camera, mounted on a pan-tilt unit carried by a Nomad 200 mobile robot (see Fig. (1)), with respect to a reference image formed by four points located at the vertices of a square on a wall (see Fig. (2)). This image is related to the methodology used to control the robot and is explained in the next sections.

Figure 1. Nomad 200 mobile robot at LRV/ITI.

Figure 2. The reference image at the initial condition.

Real-world applications of visual servoing techniques can be divided into vision aspects and control aspects. Concerning vision, the tasks include image acquisition and segmentation, feature tracking and computation of feature parameters. Concerning control, one has the configuration and speed transformations among all frames assigned to the system, kinematic and/or dynamic modeling, Jacobian computation, and controller design. This work focuses on the control aspects; refer to (Carvalho et al., 2000) and (Donnouti et al., 2001) for a detailed description of the vision aspects. The vision problem consists of extracting from the image, during robot motion, the pixel coordinates of the centroid of each circle, using a sequence of morphological filters. Therefore, our vector of visual signals is given by $s = [\,x_1\;\;y_1\;\;\cdots\;\;x_4\;\;y_4\,]^T$.

3. Implementation of the control tasks

The approach proposed by Espiau et al. (Espiau et al., 1992) basically consists of deriving a suitable mathematical relationship (the interaction matrix $L$) between the motion in the image plane of the parameter vector $s$ of certain geometric primitives (like points, lines, or circles) and the motion of the 3D frame attached to the camera, $\mathcal{F}_c$:

$$\dot{s} = L\,T_c \qquad (1)$$

where $T_c$ is the velocity screw of the camera frame.
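For illustration, a minimal sketch of this relationship for a single point feature, written in Python rather than the paper's Tcl/Tk; the point coordinates, depth and example screw are assumed values, not taken from the experiments:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classical interaction matrix of a normalized image point (x, y) at
    depth Z, mapping the camera velocity screw [vx vy vz wx wy wz] to the
    image-plane velocity [x_dot y_dot], as in Eq. (1)."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,        -(1.0 + x**2), y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y**2,   -x * y,        -x],
    ])

# Example: image velocity produced by a hypothetical camera screw T_c
L = point_interaction_matrix(x=0.1, y=-0.2, Z=0.75)
T_c = np.array([0.05, 0.0, 0.0, 0.0, 0.02, 0.0])
s_dot = L @ T_c  # Eq. (1) for one point
```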

In this paper, we are interested in an interaction matrix that permits no motion for an unchanged image, which can be achieved by utilizing four points as the reference image. Given the velocities of the 3D camera frame, one has to derive a non-singular Jacobian to express these velocities in terms of robot actuator speeds, and then design a suitable controller. Both the Jacobian and the controller must consider the spatial and/or mechanical constraints of the robotic platform.

Consider the system of Fig. (3) as a representation of the Nomad 200, where $\mathcal{F}_b$ is the frame attached to the center of rotation of the robot base; $\mathcal{F}_t$ is the frame attached to the center of rotation of the robot turret; $\mathcal{F}_p$ is the frame attached to the pan-tilt rotation axes; and $\mathcal{F}_c$ is the frame attached to the camera, whose X-Y projection is coincident with the image plane. Both $\mathcal{F}_b$ and $\mathcal{F}_t$ share the same axis of rotation, which is assumed to be vertical, and their origins are located at the center of the robot's circular section; $\mathcal{F}_p$ is displaced by $d$ along the y-axis of $\mathcal{F}_t$, and $\mathcal{F}_c$ by $d_c$ away from $\mathcal{F}_p$. The usual assumption of movement without slipping on a horizontal plane holds.

Figure 3. 3D and 2D frames attached to the Nomad 200 mobile robot with camera and pan-tilt.

The Nomad 200 mobile robot has some particular characteristics that are worth pointing out:

- A fixed reference frame is set when the robot receives its reset command. This command sets all encoder outputs to zero and brings the robot actuators to a pre-defined configuration, in which the turret and base frames are aligned (their 2D Z-X projections are coincident);

- The robot configuration given by odometry, $(x, y, \theta_b, \theta_t)$, is expressed with respect to this fixed reference frame (see Fig. (3)). The orientation of the turret frame $\mathcal{F}_t$ with respect to the base frame $\mathcal{F}_b$ is in fact the difference between the turret angle $\theta_t$ and the base angle $\theta_b$ given by the Nomad 200 odometry.

The task function formalism, introduced in (Samson et al., 1990), is used:

$$e = \hat{L}^{+}\Big|_{s=s^d}\,\big(s(r,t) - s^d\big) \qquad (2)$$

where $s(r,t)$ are the current visual feature parameters, related to the robot configuration $r$; $s^d$ are the desired values of $s$; and $\hat{L}^{+}$ is the pseudo-inverse of the interaction matrix, whose value computed at $s = s^d$ is given by (Espiau et al., 1992)

$$\hat{L}\Big|_{s=s^d} = \begin{bmatrix} L_1 \\ L_2 \\ L_3 \\ L_4 \end{bmatrix}, \qquad L_i = \begin{bmatrix} -\frac{1}{Z^d} & 0 & \frac{x_i}{Z^d} & x_i\,y_i & -(1+x_i^2) & y_i \\ 0 & -\frac{1}{Z^d} & \frac{y_i}{Z^d} & 1+y_i^2 & -x_i\,y_i & -x_i \end{bmatrix} \qquad (3)$$

with each $L_i$ evaluated at the desired point positions $(x_i, y_i) = (\pm\rho, \pm\rho)$,


where $\rho = l/Z^d$, $l$ is the vertex length and $Z^d$ the desired depth. The control law, computed in terms of speed, which will bring $e$ to zero, is selected as (Espiau et al., 1992)

$$T_c = -\lambda\,e, \qquad \lambda > 0 \qquad (4)$$

Given the velocities of the frame attached to the camera ($T_c$), it is now necessary to derive a Jacobian to obtain the robot control inputs from them. In this paper we consider two cases: using the forward speed $v$, the steering speed $\dot\theta_b$ and the turret rotation speed $\dot\theta_t$ (next subsection); and replacing $\dot\theta_t$ by the pan speed $\dot\theta_p$ (in Subsection 3.2). In practical terms, the difference between them is that in the first case the camera and the robot base have the same axis of rotation, while in the second case they do not. In either case, it is still possible to solve the problem of positioning the camera in front of the reference image by dividing it into two subtasks according to the control inputs.
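Putting Eqs. (2)-(4) together, one closed-loop step can be sketched as follows. The values of $\rho$, $Z^d$ and $\lambda$, the helper names, and the stand-in for the tracked centroids are illustrative assumptions; the interaction matrix is frozen at the desired configuration, as in the paper:

```python
import numpy as np

def stacked_interaction_matrix(points, Z):
    """Stack the 2x6 point interaction matrices of the four reference
    points into the 8x6 matrix of Eq. (3)."""
    rows = []
    for x, y in points:
        rows.append([-1/Z, 0, x/Z, x*y, -(1 + x**2), y])
        rows.append([0, -1/Z, y/Z, 1 + y**2, -x*y, -x])
    return np.array(rows)

def camera_screw(s, s_des, L_pinv, lam=0.5):
    """Task function e = L^+ (s - s^d), Eq. (2), and control law
    T_c = -lambda * e, Eq. (4)."""
    e = L_pinv @ (s - s_des)
    return -lam * e

# Interaction matrix computed once, at the desired configuration
rho, Zd = 0.2, 0.75                    # assumed vertex coordinate and depth
pts_des = [(rho, rho), (rho, -rho), (-rho, rho), (-rho, -rho)]
L_pinv = np.linalg.pinv(stacked_interaction_matrix(pts_des, Zd))
s_des = np.array(pts_des).reshape(-1)  # desired signal [x1 y1 ... x4 y4]

s = s_des + 0.05 * np.random.randn(8)  # stand-in for tracked centroids
T_c = camera_screw(s, s_des, L_pinv)   # 6-vector camera velocity screw
```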

3.1. Visual servo control without pan motion

In this case, the visual servoing technique is applied to the Nomad 200 mobile robot using as control input $u = [\,v\;\;\dot\theta_b\;\;\dot\theta_t\,]^T$. The pan angle and the pan speed are fixed to zero, so the orientation of $\mathcal{F}_c$ with respect to $\mathcal{F}_t$ is constant. As the turret speed (used to rotate the camera) and the base speed (used to steer the robot) share the same rotation axis, the Jacobian from the planar velocities of $\mathcal{F}_c$ to the control vector is singular. We solve this singularity problem by separating the variables which compose the task $e$ into two sub-tasks and solving them separately in the frames where the Nomad 200 control inputs are defined.

The first sub-task is composed by the rotational component of the task and is solved using the $\dot\theta_t$ control input. This part is trivial: as $\mathcal{F}_c$ is rigidly attached to $\mathcal{F}_t$ and, due to the property of the Nomad 200 of not transferring to the upper frames the rotation of the base, one has

$$\omega_c = \dot\theta_t \qquad (5)$$

The remaining sub-task, composed by the translational components, is solved in the base frame $\mathcal{F}_b$ using $(v, \dot\theta_b)$. In this case, one has $\dot{q}_b = B\,u_b$, where

$$u_b = [\,v\;\;\dot\theta_b\,]^T \qquad (6)$$

and, with the forward direction taken along the y-axis of $\mathcal{F}_b$,

$$B = \begin{bmatrix} -\sin\theta_b & 0 \\ \cos\theta_b & 0 \\ 0 & 1 \end{bmatrix} \qquad (7)$$

Using a previous result from Samson and Ait-Abderrahim (Samson and Ait-Abderrahim, 1990) and after some computations, one obtains the relationship between the robot control signals and the translational error $e_t$:

$$\dot{e}_t = C\,u_b \qquad (8)$$

with

$$C = \begin{bmatrix} -\sin\theta_b & -(x^d\sin\theta_b + y^d\cos\theta_b) \\ \cos\theta_b & x^d\cos\theta_b - y^d\sin\theta_b \end{bmatrix} \qquad (9)$$

where $x^d$, $y^d$ are the coordinates, at $\mathcal{F}_b$, of the distance from the origin of $\mathcal{F}_b$ to the desired position. In order to avoid 3D reconstruction, and considering that the task is specified in terms of a final desired configuration of the camera in front of the target, the strategy employed is to use $\hat{x}^d = 0$ for all $t$ and a very simple approximation for $\hat{y}^d$ based directly on the sensor information:

$$\hat{y}^d = f\,\frac{l}{\max_k x_k - \min_k x_k} \qquad (10)$$

where $f$ is the focal length, which if not available may be set equal to one, and $x_k$, $k = 1,\dots,4$, are the X-coordinates at the image plane of the four points. Then, one has

$$(x^d,\;y^d) \approx (\hat{x}^d,\;\hat{y}^d) = (0,\;\hat{y}^d) \qquad (11)$$

And then,

$$C = \begin{bmatrix} -\sin\theta_b & -\hat{y}^d\cos\theta_b \\ \cos\theta_b & -\hat{y}^d\sin\theta_b \end{bmatrix} \qquad (12)$$

where $\theta_b$ is as in Fig. (3). Note that $\det C = \hat{y}^d > 0$, so $u_b$ can be recovered from the desired $\dot{e}_t$.
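The approximation of Eq. (10) needs only the image X-coordinates of the four dots. A minimal sketch in Python (the paper's code was written in Tcl/Tk; the names and sample values here are hypothetical):

```python
def depth_estimate(x_coords, vertex_length, focal=1.0):
    """Approximate the desired distance y^d from the image spread of the
    four points, Eq. (10): y^d = f * l / (max_k x_k - min_k x_k)."""
    spread = max(x_coords) - min(x_coords)
    return focal * vertex_length / spread

# Example with hypothetical normalized X-coordinates of the four centroids
y_d_hat = depth_estimate([-0.21, -0.19, 0.20, 0.22], vertex_length=0.30)
```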


To find the relation between the error measured at $\mathcal{F}_c$ and its expression at $\mathcal{F}_b$, one has to compute the rotation by the turret angle $\theta_t$:

$$e_x^b = e_x^c\cos\theta_t - e_y^c\sin\theta_t \qquad (13)$$

$$e_y^b = e_x^c\sin\theta_t + e_y^c\cos\theta_t \qquad (14)$$

which is used to control the second sub-task. These subtasks are not decoupled and, to guarantee their convergence from any initial condition, a control scheme for the whole base configuration must be used. An alternative is to use different rotation axes for the base and the camera, which is done by considering the pan speed instead of the robot turret speed, as shown in the sequel.

3.2. Visual servo control using pan motion

In this section we make the turret frame $\mathcal{F}_t$ always aligned with the base frame $\mathcal{F}_b$, and assume that the orientation of the turret with respect to the base is zero, which means that $\theta_t - \theta_b = 0$. In other words, both frames are rigidly connected. The relationship between the velocity of the 3D frame attached to the camera, $\mathcal{F}_c$, and that of the 3D frame attached to the base, $\mathcal{F}_b$, is now non-singular and can be easily computed using the classical velocity transfer relationship along a sequence of revolute joints:

$$\omega_{i+1} = R_i^{i+1}\,\omega_i + \dot\theta_{i+1}\,\hat{z} \qquad (15)$$

$$v_{i+1} = R_i^{i+1}\,\big(v_i + \omega_i \times p_{i+1}\big) \qquad (16)$$

where $v_i$ is the translational speed of frame $i$ given in frame $i$, $\omega_i$ is the rotational speed of frame $i$ given in frame $i$, $p_{i+1}$ are the coordinates of the origin of frame $i+1$ in frame $i$, and

$$R_i^{i+1} = \begin{bmatrix} \cos\theta_{i+1} & \sin\theta_{i+1} & 0 \\ -\sin\theta_{i+1} & \cos\theta_{i+1} & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (17)$$

is the rotation matrix from frame $i$ to frame $i+1$, where $\theta_{i+1}$ is the angle formed by the x-axes of the two frames. For the frames shown in Fig. (3), one has

$$\omega_b = \dot\theta_b\,\hat{z} \qquad (18)$$

$$v_b = [\,0\;\;v\;\;0\,]^T \qquad (19)$$
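Equations (15)-(17) can be applied link by link along the base-turret-pan chain. A sketch under the stated planar assumptions (all rotation axes vertical); the speed values and the offset $d$ are placeholders:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix between consecutive frames, Eq. (17)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def transfer(v, w, theta, theta_dot, p):
    """Propagate translational (v) and rotational (w) velocity across one
    revolute joint: Eqs. (15) and (16)."""
    R = rot_z(theta)
    w_next = R @ w + np.array([0.0, 0.0, theta_dot])
    v_next = R @ (v + np.cross(w, p))
    return v_next, w_next

# Base -> turret -> pan-tilt chain of Section 3.2 (offset d along y)
v_b = np.array([0.0, 0.30, 0.0])   # forward speed v, Eq. (19)
w_b = np.array([0.0, 0.0, 0.10])   # steering speed, Eq. (18)
v_t, w_t = transfer(v_b, w_b, 0.0, 0.0, np.zeros(3))          # turret locked
v_p, w_p = transfer(v_t, w_t, 0.2, 0.05, np.array([0.0, 0.15, 0.0]))
```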

Using the characteristics of the Nomad 200, if base and turret rotate together, one has at the turret frame

$$\omega_t = R_b^t\,\omega_b = \dot\theta_b\,\hat{z} \qquad (20)$$

$$v_t = R_b^t\,\big(v_b + \omega_b \times p_t\big) = [\,0\;\;v\;\;0\,]^T \qquad (21)$$

Now, computing the speeds at the pan-tilt frame, one has

$$\omega_p = R_t^p\,\omega_t + \dot\theta_p\,\hat{z} = (\dot\theta_b + \dot\theta_p)\,\hat{z} \qquad (22)$$

$$v_p = R_t^p\,\big(v_t + \omega_t \times p_p\big) \qquad (23)$$

$$v_p = \begin{bmatrix} \cos\theta_p & \sin\theta_p & 0 \\ -\sin\theta_p & \cos\theta_p & 0 \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} 0 \\ v \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \dot\theta_b \end{bmatrix} \times \begin{bmatrix} 0 \\ d \\ 0 \end{bmatrix} \right) \qquad (24)$$

$$v_p = \begin{bmatrix} v\sin\theta_p - d\,\dot\theta_b\cos\theta_p \\ v\cos\theta_p + d\,\dot\theta_b\sin\theta_p \\ 0 \end{bmatrix} \qquad (25)$$

Finally, in the camera frame, one has

$$\omega_c = \omega_p = (\dot\theta_b + \dot\theta_p)\,\hat{z} \qquad (26)$$

$$v_c = R_p^c\,\big(v_p + \omega_p \times p_c\big) \qquad (27)$$

$$v_c = v_p - d_c\,(\dot\theta_b + \dot\theta_p)\,\hat{x} \qquad (28)$$

$$v_c = \begin{bmatrix} v\sin\theta_p - (d\cos\theta_p + d_c)\,\dot\theta_b - d_c\,\dot\theta_p \\ v\cos\theta_p + d\,\dot\theta_b\sin\theta_p \\ 0 \end{bmatrix} \qquad (29)$$

where $d$ is the displacement between the center of rotation and the pan frame, and $d_c$ corresponds to the distance between the pan frame and the camera frame. Re-writing Eq. (29) in the form

$$T_c = J\,u \qquad (30)$$

where $T_c = [\,v_x^c\;\;v_y^c\;\;\omega^c\,]^T$ is the vector composed of the translational and rotational speeds of the camera frame in the horizontal plane, and $u = [\,v\;\;\dot\theta_b\;\;\dot\theta_p\,]^T$ is the vector of control signals for driving the Nomad 200, one has

$$J = \begin{bmatrix} \sin\theta_p & -(d\cos\theta_p + d_c) & -d_c \\ \cos\theta_p & d\sin\theta_p & 0 \\ 0 & 1 & 1 \end{bmatrix} \qquad (31)$$

whose inverse, with the substitution of $\cos\theta_p$ by $c$ and $\sin\theta_p$ by $s$, is given by

$$J^{-1} = \frac{1}{d}\begin{bmatrix} d\,s & d\,c & d\,d_c\,s \\ -c & s & -d_c\,c \\ c & -s & d + d_c\,c \end{bmatrix} \qquad (32)$$

This is the relation used to compute the control signals based on the output of the Interaction Matrix control law, Eq. (4).
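The last step, from the camera screw to the actuator commands, can be sketched as follows, assuming the Jacobian of Eq. (31); the geometric parameters and the input screw are illustrative values:

```python
import numpy as np

def nomad_jacobian(theta_p, d=0.15, d_c=0.10):
    """Planar Jacobian of Eq. (31), from u = [v, theta_dot_b, theta_dot_p]
    to T_c = [vx_c, vy_c, w_c]; det(J) = d, so it is always invertible."""
    c, s = np.cos(theta_p), np.sin(theta_p)
    return np.array([
        [s,   -(d * c + d_c), -d_c],
        [c,    d * s,          0.0],
        [0.0,  1.0,            1.0],
    ])

# Actuator commands from the planar camera screw produced by Eq. (4)
T_c_planar = np.array([0.02, 0.10, 0.05])   # [vx_c, vy_c, w_c], assumed
u = np.linalg.solve(nomad_jacobian(theta_p=0.2), T_c_planar)
v, theta_dot_b, theta_dot_p = u
```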
4. Experimental Results

In this section the results of the two case studies are presented. The hardware setup used in the experiments was basically a Nomad 200 mobile robot and a 300 MHz Intel PC with 64 MB RAM, used as a robot terminal to edit files and to remotely control the robot. All algorithms were implemented using Tcl/Tk in order to facilitate the design and debugging process. The goal consists in positioning the camera frame parallel to the vertical plane of the reference frame at a distance of 75 cm, as seen in Fig. (2). The experiment was performed first using only the robot actuators and then using pan motion with the turret rotation disabled, from the same initial condition, according to Sections 3.1 and 3.2, respectively. Figure (4) shows the evolution of the four dots at each iteration. The reference image was located on a half-wood, half-glass wall, with the two upper dots on the glass part. Although this setup is strongly non-recommended for a vision task, it is useful to validate the robustness of the vision algorithm, fully described in (Carvalho et al., 2000) and refined in (Donnouti et al., 2001). Figures (5)-a to -c show the evolution of the control signals for the visual servo control without pan motion, while Fig. (5)-d shows the evolution of the Euclidean norm of the error of the image-plane coordinates for each dot. The same set of results for the second case study is depicted in Figs. (6)-a to -d.


Figure 4. The evolution of the four dots during robot motion: without pan motion (left) and using pan motion (right).

[Four panels versus iteration number: forward speed, steering speed and turret rotation speed control signals, and the error norm of each dot.]
Figure 5. (a)-(c) Control signals. (d) Evolution of the norm of the errors (without pan motion).

[Four panels versus iteration number: forward speed, steering speed and pan speed control signals, and the error norm of each dot.]
Figure 6. (a)-(c) Control signals. (d) Evolution of the norm of the errors (using pan motion).

5. Discussions and Conclusion

In this section we present an analysis of the results obtained in this work. First and foremost, the reader will recall that the interaction matrix was kept constant, computed for the desired final pose of the camera. A case study on the influence of adding more information to the computation of the interaction matrix can be found in (Silveira et al., 2001b). However, this does not represent a severe limitation of the method, since positioning tasks are usually performed close to the goal position. Additionally, experiments were performed from a starting distance about 10 times larger than the final desired depth of the image, and the performance obtained was satisfactory.

Both robot and pan-tilt actuators saturate for large input signals. To avoid nonlinearities due to saturation, an upper bound for the control was used, selected as half of the manufacturer's stated saturation value. Robot and pan-tilt actuators also operate with finite resolution. The resolution of the robot actuators did not present any problem, but the resolution of the pan motion degraded the results. To overcome this problem we adopted the following strategy: whenever the pan input signal was computed as a value between 30 and 100% of the smallest non-zero input signal, we set it equal to the smallest input signal (a code sketch of this input conditioning is given at the end of this section).

The best execution time achieved was about 200 ms per iteration. Since this cannot be considered real-time, the forward speed had to be limited to 20% of the maximum speed and all rotation speeds to 50% of the maximum. Execution time can be reduced significantly by using a compiled programming language and hardware based on a state-of-the-art microprocessor. Tcl/Tk was chosen here because of its data logging and plotting capabilities, reduced implementation time, platform independence, broad software and hardware compatibility, and debugging support.

Most industrial mobile robots are subject to nonholonomic constraints. In the present case, despite the controllability of the system, the stabilization of the robot base around a given configuration by pure state feedback is not possible (Samson and Ait-Abderrahim, 1990). In this paper only the camera pose is controlled, and the final robot base configuration depends on the initial condition. For the first case study (without pan motion), due to the characteristics of the Nomad 200 mobile robot, the execution of the task implies that the robot always stops at the same position (although with different orientations), but this may not be sufficient in many real problems, for instance when parking a robot. One approach to solve this problem is to utilize the pan motion along with a time-varying control strategy to stabilize the base configuration, or to add more DOFs to the camera and control the whole mobile-manipulator platform. A complete discussion on this subject can be found in (Tsakiris et al., 1998) and references therein. In view of the above discussion, the obtained results constitute a first step towards the conception of an accessible real-world implementation of visual servoing of mobile robots.
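The input conditioning described above can be summarized as the following sketch; the saturation and resolution values are placeholders, not the Nomad 200's actual specifications:

```python
def condition_input(cmd, saturation, resolution):
    """Clip the command to half the rated saturation, then snap commands
    between 30% and 100% of the smallest non-zero step up to that step."""
    bound = 0.5 * saturation
    cmd = max(-bound, min(bound, cmd))
    if 0.3 * resolution <= abs(cmd) < resolution:
        cmd = resolution if cmd > 0 else -resolution
    return cmd

# Example: 0.004 rad/s lies between 30% and 100% of a 0.01 rad/s step,
# so it is promoted to 0.01 rad/s instead of being lost to quantization.
pan_cmd = condition_input(0.004, saturation=0.8, resolution=0.01)
```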


6. Acknowledgments

This project is partially funded by FAPESP under grants 97/13384-7 and 98/13562-5, and by ProTeM-CC CNPq/INRIA under grant 68.0140/98-0.

7. References

Carvalho, José R. H., Patrick Rives, Ailton Santa-Bárbara and Samuel S. Bueno (2000). Visual servo control of a class of mobile robot. In: 2000 IEEE International Conference on Control Applications. Alaska, USA.

Donnouti, Leonardo S., Geraldo F. Silveira, Pedro M. Shiroma, J. Reginaldo H. Carvalho, Patrick Rives and Samuel S. Bueno (2001). Mobile robot positioning via visual servoing techniques. In: Proceedings of the IV Brazilian Symposium on Intelligent Automation. Gramado/RS, Brazil. Submitted.

Espiau, Bernard, François Chaumette and Patrick Rives (1992). A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation 8, 313–326.

Samson, Claude and Karim Ait-Abderrahim (1990). Mobile robot control, part 1: Feedback control of a nonholonomic wheeled cart in Cartesian space. Technical report. INRIA, Centre de Sophia-Antipolis.

Samson, Claude, Bernard Espiau and M. le Borges (1990). Robot Control: the Task Function Approach. Oxford University Press. USA.

Silveira, Geraldo F., J. Reginaldo H. Carvalho, Marconi K. Madrid, Patrick Rives and Samuel S. Bueno (2001a). A fast vision-based road following strategy applied to the control of aerial robots. In: XIV Brazilian Symposium on Computer Graphics and Image Processing. Florianópolis, Brazil. Accepted.

Silveira, Geraldo F., J. Reginaldo H. Carvalho, Marconi K. Madrid, Samuel S. Bueno and Patrick Rives (2001b). Towards vision guided navigation of autonomous aerial robots. In: Proceedings of the IV Brazilian Symposium on Intelligent Automation. Gramado/RS, Brazil. Submitted.

Tsakiris, D., P. Rives and C. Samson (1998). Extending visual servoing techniques to nonholonomic mobile robots. In: D. Kriegman (Ed.), The Confluence of Vision and Control. Lecture Notes in Control and Information Sciences. Springer-Verlag.
