New Scheme for Image Space Path Planning Incorporating CAD-Based Recognition Methods for Visual Servoing

Zahra Ziaei, Reza Oftadeh, Jouni Mattila*

*Authors are with the Department of Intelligent Hydraulics and Automation (IHA), Tampere University of Technology, Tampere, Finland. [email protected], [email protected], [email protected]

Abstract— Visual servoing techniques are widely used in the design of autonomous controllers for robotic arms performing object manipulation in 3D space. Path planning in image space has been studied widely in order to generate feasible trajectories for visual servoing controllers. The main idea of image space path planning is to derive practical paths for the image features while taking into account a variety of constraints imposed by the image and the physical environment. However, in many practical cases, especially in complex and cluttered scenes, extracting the image features is not robust enough, and it requires that the target features not be occluded by obstacles, the robot's body, or the target object itself. In this paper, we present a new image space path planning scheme for a markerless 3D target that bypasses the feature extraction requirement. Utilizing the artificial potential field method, the planner generates the desired end-effector path such that a given target remains inside the image boundaries during the robot manipulation task. Experimental results are presented to demonstrate the 2D and 3D generated paths with reasonable accuracy for robotic manipulation scenarios.

I. INTRODUCTION

Over the past decades, vision-guided robotics has emerged as a major approach to increasing the flexibility and accuracy of robot control systems [1]. Many developments in this area have evolved into Visual Servoing (VS) controllers for robotic manipulation tasks [2]. One of the main challenges of VS is maintaining continuous visibility of the tracked target inside the camera field of view (FOV) [1], [2]. Image space path planning for robust VS was introduced by Mezouar et al. and later improved to account for physical robot constraints [3], [4]. Path planning in image space has recently been optimized, globalized, and extended to physical constraints [4], [5], [6], [7] and to different vision-aided scenarios such as mobile robot manipulation tasks [8]. One of the key requirements of image path planning for VS is that the initial and desired 3D target positions in 3D Cartesian space must be known [3]. There are several approaches in the 2D-3D pose estimation category [9]. An example of such a framework is the markerless augmented reality method, which was introduced by Sundareswaran [10] and improved and made more robust by Marchand and Comport [11], [12]. This method extracts image features and matches them with corresponding points of the model to circumvent the geometric search space and estimate the object pose. The method is computationally efficient and easy to implement. However, since the tracking process is based on matching the edges in the image space



by local search algorithms, it can fail rapidly in the face of large initial pose differences. Furthermore, the pose estimation loses its robustness in the presence of highly textured objects and cluttered backgrounds, which often occur in practical applications. The POSIT algorithm is another well-known method that has frequently been used for path planning in image space [3], [4], [8]. It can recover the position of the object directly from a corresponding 2D image and a 3D CAD model using a local matching algorithm [13]. However, it relies on known corresponding points, which are difficult to detect in real-world scenarios. Moreover, this method suffers from common issues such as getting stuck in local minima. Ulrich et al. [14] presented a hierarchical image-view-based approach to search for the maximum similarity between the real scene and its 3D CAD model. This CAD-based recognition method using a monocular image is widely used in industry and academia to recognize objects with known geometry and estimate their pose [15], [9], [16]. The method is also robust to noise, partial occlusion, clutter, and lighting changes, and its robustness can be further improved by combining the idea of scale-space aspect graphs with similarity-based aspect graphs [17]. In our previous work, we developed this method as a markerless augmented reality system for a remote handling task [18]; see Figure (1). In that scenario, the lack of aided trajectories as feedback motivated us to address this challenge via path planning in image space for a markerless robot manipulation task.

Fig. 1. Markerless target tracking developed for remote handling in the ITER mockup [18].

In this paper, we integrate a 3D CAD-based recognition method into an image-based planner to compose a novel planning scheme that enhances the performance of the planner. Furthermore, as we will show, the proposed scheme does not lose its robustness in the presence of unobservable or occluded target points. This paper is organized as follows. Section II reviews the 3D CAD-based recognition of a 3D target using a monocular camera. Section III describes the system configuration and presents


a new hand-eye calibration method for a robot-mounted camera. The path planner method is detailed in Section IV. Finally, Section V presents our trajectory workspace and Section VI appraises the experimental results that demonstrate the efficacy of our proposed framework.

II. 3D CAD-BASED RECOGNITION METHOD

In this paper, CAD-based recognition of a 3D object is used to estimate the six-DOF pose of a markerless 3D target directly from the scene [17], which is especially desirable for vision-guided robotics [19], [11]. Monocular recognition of the 3D target over time is based on comparing the image content with a sample 3D CAD model of the object, with or without a textured surface. In this method, geometric camera calibration and the dimensions of the 3D target are necessary to extract precise 3D pose information from imagery. First, in an off-line step, a model is automatically trained from the 3D CAD model by generating 2D views over the range of poses in which the object may appear in front of the camera; the resulting 2D model, matched with a similarity measure, is robust to occlusion, clutter, and lighting changes, and the corresponding 3D pose is stored for each view. Next, in the on-line recognition step, a hierarchical view-based approach is used to find the object in the image. For each found 2D view, the corresponding 3D object pose is then refined by minimizing a geometric distance measure in the image. The achievable accuracy is reasonable but limited by the sampling of the 2D views and of the 2D poses during the 2D matching [14], [17]. The major problem in this regard is that each 3D object has several 2D views that must be compared to the image, which makes the algorithm time consuming [14], [17]. To deal with this problem, we reduce the complexity of the CAD model through a manual simplification process. Combining object recognition and tracking for the simplified object model has shown promising results in near real time [18]. Remark that we extract the image points of the augmented wire-frame, instead of extracting features as in feature-based recognition methods.

III. CAMERA MODEL AND SYSTEM CONFIGURATION

In this section, we give an overview of the camera model and the system configuration. A camera is mounted on top of the end-effector. For the pinhole camera model, the perspective projection is given by

[u, v]^T = (f / z_c) [x_c, y_c]^T,   (1)

where (u, v) are the image plane coordinates, (x_c, y_c, z_c) are the coordinates in the orthogonal camera center frame, and f is the focal length. The relationship between an object point in the world coordinates, ^wP, and the same point in the camera coordinates, ^cP, can be written as ^wP = ^wH_c ^cP, where ^wH_c is the homogeneous transformation matrix from the world coordinate system to the camera coordinate system. The lens distortion converts the image point projection (u, v) into the camera image plane coordinates (\tilde{u}, \tilde{v}) as follows [16]:

\tilde{u} = 2u / (1 + \sqrt{1 - 4 K_1 a^2}), \quad \tilde{v} = 2v / (1 + \sqrt{1 - 4 K_1 a^2}),   (2)

The transformation from the image plane coordinates into the image coordinate system is

c = \tilde{u} / S_x + C_x, \quad r = \tilde{v} / S_y + C_y,   (3)

where a^2 = u^2 + v^2, C_x and C_y respectively denote the column and row coordinates of the image center point (the center of the radial distortion), K_1 is the distortion coefficient, and S_x and S_y are the camera scale factors [16]. Computing these intrinsic camera parameters is a necessary prerequisite for extracting precise 3D information, such as 3D position and orientation, from imagery.

Whenever a camera is mounted on top of a robot end-effector, it is important to know the relationship between the camera and the robot's hand. The robot kinematic model expresses the relationship between the end-effector center coordinates and the camera center coordinates if the camera location on the end-effector is known. The problem of determining the relationship between the end-effector and the camera is referred to as hand-eye calibration.
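To make (1)-(3) concrete, the following is a minimal Python sketch (using numpy) that projects a 3D point given in the camera frame to pixel coordinates; the intrinsic parameter values in the example are placeholders, not the calibration used in our experiments.

```python
import numpy as np

def project_point(P_c, f, K1, Sx, Sy, Cx, Cy):
    """Project a 3D point P_c = (xc, yc, zc), expressed in the camera frame,
    to pixel coordinates (row r, column c) following (1)-(3)."""
    xc, yc, zc = P_c
    # (1) pinhole perspective projection onto the image plane
    u = f * xc / zc
    v = f * yc / zc
    # (2) division-model lens distortion with coefficient K1 and a^2 = u^2 + v^2
    a2 = u**2 + v**2
    denom = 1.0 + np.sqrt(1.0 - 4.0 * K1 * a2)
    u_t, v_t = 2.0 * u / denom, 2.0 * v / denom
    # (3) scale and shift into pixel (column, row) coordinates
    c = u_t / Sx + Cx
    r = v_t / Sy + Cy
    return r, c

# Example with placeholder intrinsics (not the calibrated values of the paper).
r, c = project_point(np.array([0.02, 0.01, 0.35]), f=0.008,
                     K1=-350.0, Sx=8.3e-6, Sy=8.3e-6, Cx=320.0, Cy=240.0)
print(r, c)
```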

Fig. 2. Hand-eye calibration configuration.

A. Hand-Eye Calibration

In order for a robot to use a mounted camera to estimate the 3D position and orientation of an object relative to its own base, the relative poses between the camera and the robot base, between the camera and the end-effector, and between the object and the camera must be known. In this workspace we have four frames: the camera frame {C}, the end-effector frame {E}, the robot base frame {B}, and the object frame {O}; see Figure (2). To obtain the camera position and orientation with respect to the robot base, we move the end-effector such that the entire robot base shape is in the camera field of view, as illustrated in Figure (2). Then, using the CAD-based recognition method [14], we estimate the position and orientation of the robot base coordinates with respect to the camera coordinates; see Figure (3).



Fig. 3. CAD model projection of the robot base center coordinates (right) and its Denavit-Hartenberg coordinates (left).

Then we can obtain the rotation and translation of the base with respect to the camera frame, i.e., ^CH_B = [^CR_B, ^Ct_B], where t ∈ R^3 and R ∈ SO(3), with R^3 and SO(3) denoting, respectively, three-dimensional Euclidean space and the group of three-dimensional rotations. The robot forward kinematics gives the rotation and translation of the end-effector frame with respect to the robot base frame as ^BH_E = [^BR_E, ^Bt_E]. Therefore, the hand-eye calibration equation can be written as

^EH_C = (^BH_E)^{-1} (^CH_B)^{-1}.   (4)
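The following is a minimal numpy sketch of (4); the multi-pose averaging anticipates the refinement described just below, and the SVD-based rotation averaging is our own simplification rather than necessarily the procedure used in the experiments.

```python
import numpy as np

def hand_eye_from_pose(B_H_E, C_H_B):
    """Hand-eye transform E_H_C = (B_H_E)^-1 (C_H_B)^-1, Eq. (4).
    B_H_E: 4x4 end-effector pose in the base frame (forward kinematics).
    C_H_B: 4x4 base pose in the camera frame (CAD-based recognition)."""
    return np.linalg.inv(B_H_E) @ np.linalg.inv(C_H_B)

def average_hand_eye(poses):
    """Average E_H_C over several robot poses to reduce noise.
    Translations are averaged directly; the mean rotation matrix is projected
    back onto SO(3) with an SVD (a simple approximation)."""
    Hs = [hand_eye_from_pose(B_H_E, C_H_B) for B_H_E, C_H_B in poses]
    t_mean = np.mean([H[:3, 3] for H in Hs], axis=0)
    U, _, Vt = np.linalg.svd(np.mean([H[:3, :3] for H in Hs], axis=0))
    R_mean = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R_mean, t_mean
    return H
```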

To increase the precision of this method, we calculate the mean value of ^EH_C over all poses. Remark that for each camera pose the entire shape of the base must be in the camera field of view.

IV. IMAGE PATH PLANNING BASED ON ARTIFICIAL POTENTIAL FIELD

Fig. 4. Extracted 3D CAD points and scaled camera trajectory.


Path planning in image space based on an artificial potential field was introduced by Mezouar et al. [3]. Each robot motion iteration is influenced by an artificial potential field V, which is the sum of two terms: 1) an attractive potential field V_a, whose role is to pull the robot toward the goal trajectory Υ, and 2) a repulsive potential field V_r, whose role is to push the robot away from constraints such as obstacles and the camera field-of-view limits. The artificial force F(Υ) = -∇V is the transpose of the gradient vector of V at Υ, with F(Υ) = αF_a(Υ) + βF_r(Υ), where F_a(Υ) denotes the attractive force, F_r(Υ) denotes the repulsive force, and Υ is a (6 × 1) vector representing the pose of the end-effector. The scale factors α and β are used to adjust the influence of the attractive and repulsive forces. The discrete-time trajectory along the direction of F(Υ) is obtained via the following equation [20], [3]:

\Upsilon_{k+1} = \Upsilon_k + \epsilon_k F(\Upsilon_k) / \| F(\Upsilon_k) \|,   (5)

where ε_k is a positive scaling factor and k is an index that increases during the generation of the path. Path planning is a method to find a sequence of state transitions that leads the end-effector from a given initial state to a desired goal state. To generate an off-line 3D path incorporating the CAD-based methods via the artificial potential field, we assume that the mounted camera is under the influence of an artificial potential field V.

Let F_i, F_c, F_d, and F_o be, respectively, the initial camera frame, the current camera frame, the desired camera frame, and the object frame. To recognize the target, the initial and desired camera frames must be directed toward the object center coordinates such that the 2D projection of the 3D object is in the camera FOV. We do not rely on texture or reflectance information of the object's surface; the augmented model alone is enough to generate a 3D trajectory. This 3D pose estimation approach is associated with 3D matching [17], and no image features are extracted to create a vector of image features. Therefore, we extract a vector of image points s rather than a vector of image features; see Figure (4).
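A minimal sketch of the discrete path generation of (5) is given below; the attractive and repulsive force functions are passed in as placeholders for the terms derived in the next subsections, and the gains and step size are arbitrary.

```python
import numpy as np

def generate_path(Y0, attractive_force, repulsive_force,
                  alpha=1.0, beta=1.0, eps=0.01, max_iter=2000, tol=1e-4):
    """Follow F(Y) = alpha*Fa(Y) + beta*Fr(Y) from the initial pose Y0,
    using the normalized update of Eq. (5)."""
    path = [np.asarray(Y0, dtype=float)]
    for _ in range(max_iter):
        Y = path[-1]
        F = alpha * attractive_force(Y) + beta * repulsive_force(Y)
        norm = np.linalg.norm(F)
        if norm < tol:                       # force vanishes near the goal
            break
        path.append(Y + eps * F / norm)      # Eq. (5)
    return np.array(path)

# Example with a pure attractive force Fa = -Y (goal at the origin, cf. (7) below)
# and no repulsive term, just to exercise the loop.
path = generate_path(np.array([0.1, 0.2, -0.3, 0.05, 0.0, 0.1]),
                     attractive_force=lambda Y: -Y,
                     repulsive_force=lambda Y: np.zeros(6))
print(path.shape, path[-1])
```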


The point P_0 is the projection of the object center frame onto the 2D image plane, with coordinates p_c = [\tilde{u}_c, \tilde{v}_c] and camera 3D center coordinates (x, y, z). The homogeneous transformation matrix ^CH_{P_0} gives the rotation and translation of the object center coordinates with respect to the camera center frame. The homogeneous transformation matrix ^{P_0}H_{P_j} gives the rotation and translation of the wire-frame points with respect to the object center coordinates. To create the vector of image points, we need the following chain, which provides the rotation and translation of each wire-frame point with respect to the camera frame, ^CH_{P_j}:

^CR_{P_j} = ^CR_{P_0} \, ^{P_0}R_{P_j}, \quad ^Ct_{P_j} = ^CR_{P_0} \, ^{P_0}t_{P_j} + ^Ct_{P_0}.   (6)

The rotation and translation matrix ^CH_{P_j} gives the 3D position and orientation of the object [16]. Now we have the poses of all extracted wire-frame vertices with respect to the camera frame. Consequently, the corresponding points in the image plane, (\tilde{u}_i, \tilde{v}_i)_{i=0}^{n}, are obtained via (1) and (2), and the corresponding points in the image coordinates, (r_i, c_i)_{i=0}^{n}, are obtained via (3). We define the vector s = [P_0, P_1, ..., P_{n-1}]^T, which includes the n extracted wire-frame vertices with respect to the camera coordinate system, instead of the vector of extracted features used in feature-based recognition approaches; see Figure (4).
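The following sketch combines (6) with the projection of (1)-(3) to build the image-point vector s from an estimated object pose; the projection routine is passed in (for example the project_point sketch given earlier), and the CAD vertex list is a placeholder.

```python
import numpy as np

def vertex_image_points(C_H_P0, model_vertices, project, cam):
    """Build the image-point vector s from the estimated object pose.
    C_H_P0        : 4x4 pose of the object center frame in the camera frame.
    model_vertices: list of 3D wire-frame vertices in the object frame (P0_t_Pj).
    project       : function mapping a camera-frame 3D point to (row, col).
    cam           : dict of intrinsic parameters passed on to `project`."""
    s = []
    for P0_t_Pj in model_vertices:
        # Eq. (6): C_t_Pj = C_R_P0 * P0_t_Pj + C_t_P0 (vertex position in camera frame)
        C_t_Pj = C_H_P0[:3, :3] @ np.asarray(P0_t_Pj) + C_H_P0[:3, 3]
        s.append(project(C_t_Pj, **cam))     # (1)-(3): project to pixel coordinates
    return np.array(s)                        # shape (n, 2): rows and columns
```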


A. Attractive Potential Field

The attractive potential field V_a is a parabolic function that pulls the robot toward the goal configuration by minimizing the distance between the current and desired poses in the 3D Cartesian configuration space [20]. The desired pose, or destination, is defined as the workspace vector Υ_d = 0_{6×1}. Thus we have V_a = (1/2) α ‖Υ - Υ_d‖^2 = (1/2) α ‖Υ‖^2 with a positive scaling factor α, where Υ = [t^T, (uθ)^T]^T is a parametrization of the end-effector workspace including the translation vector t and the rotation between F_c and F_d. The vector (uθ) is created using the axis and the angle obtained from the rotation matrix between F_c and F_d. The attractive force hence takes the form

F_a(Υ) = -∇V_a = -αΥ.   (7)

B. Repulsive Potential Field

The role of the repulsive potential field V_r is to push the robot away from the constraints [20]. For example, the visibility constraint implies a potential barrier function that pushes the arm toward the goal state while keeping all object projection points in the camera FOV. We do not rely on texture or reflectance information of the object surface; the object image points alone are enough to generate a trajectory.

The 2n × 1 image vector s = [P_0, P_1, ..., P_{n-1}]^T, with P_i = [\tilde{u}_i, \tilde{v}_i]^T, represents the projections of the object points in the image plane, obtained via (1), (2), and (6), where [u_m, u_M] and [v_m, v_M] are the defined limits of the image and n is the number of extracted wire-frame vertices; see Figures (4) and (5). When the extracted points come near the defined limits, a repulsive force is generated to push them away from the image limits [3]. The repulsive potential function V_r(s) for the vector of extracted wire-frame vertices s is

V_r(s) = -\ln\Big(\prod_i \big(1 - \tfrac{\tilde{u}_i}{u_m}\big)\big(1 - \tfrac{\tilde{u}_i}{u_M}\big)\big(1 - \tfrac{\tilde{v}_i}{v_m}\big)\big(1 - \tfrac{\tilde{v}_i}{v_M}\big)\Big) if s ∈ C, and V_r(s) = 0 otherwise,   (8)

where C is the set of visible configurations, defined here with the effect of lens distortion as {s | ∃j : \tilde{u}_j ∈ (u_m - α_d, u_M - α_d), \tilde{v}_j ∈ (v_m - α_d, v_M - α_d)}, and α_d is a positive constant representing the distance of influence of the image edges. The function V_r(s) tends to infinity when at least one extracted image point of the wire-frame gets close to the image limits and is null if all extracted wire-frame points are sufficiently far away from the image limits.

The artificial repulsive force is the transpose of the gradient vector of V_r(s) at Υ:

F_r(Υ) = -(\partial V_r(s)/\partial Υ)^T = -(\partial V_r(s)/\partial s \cdot \partial s/\partial r \cdot \partial r/\partial Υ)^T,   (9)

or

F_r(Υ) = -(\partial V_r(s)/\partial s \cdot L_s \cdot M_r)^T.

The term \partial V_r(s)/\partial s can be calculated according to (8). The interaction matrix L_s = \partial s/\partial r is a linear transformation that describes how the image points of the wire-frame change with respect to the camera frame [1], [3]. Taking the effect of lens distortion into account, the image Jacobian, which is a function of the depth z, is written as

[\dot{\tilde{u}}_i; \dot{\tilde{v}}_i] = [ -f/z, 0, \tilde{u}_i/z, \tilde{u}_i\tilde{v}_i/f, -(f^2 + \tilde{u}_i^2)/f, \tilde{v}_i ; 0, -f/z, \tilde{v}_i/z, (f^2 + \tilde{v}_i^2)/f, -\tilde{u}_i\tilde{v}_i/f, -\tilde{u}_i ] [T_x; T_y; T_z; ω_x; ω_y; ω_z].   (10)

The end-effector, equipped with the camera, moves with angular velocity [ω_x, ω_y, ω_z]^T and translational velocity [T_x, T_y, T_z]^T with respect to the camera frame. As already defined, s is composed of the image vector of n points, and therefore the interaction matrix is stacked as

L(s, z) = [L^T(P_0, z_0), ..., L^T(P_{n-1}, z_{n-1})]^T.   (11)
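A minimal numpy sketch of (10) and (11): the per-point image Jacobian and the stacked 2n × 6 interaction matrix; the image coordinates, depths, and focal length in the example are placeholders.

```python
import numpy as np

def interaction_matrix_point(u, v, z, f):
    """2x6 image Jacobian of Eq. (10) for one image point (u, v) at depth z."""
    return np.array([
        [-f / z, 0.0, u / z,  u * v / f, -(f**2 + u**2) / f,  v],
        [0.0, -f / z, v / z, (f**2 + v**2) / f, -u * v / f,  -u],
    ])

def interaction_matrix(points, depths, f):
    """Stack the per-point Jacobians into the 2n x 6 matrix L(s, z) of Eq. (11)."""
    return np.vstack([interaction_matrix_point(u, v, z, f)
                      for (u, v), z in zip(points, depths)])

# Example with placeholder values for eight wire-frame vertices.
pts = np.random.uniform(-0.002, 0.002, size=(8, 2))   # image-plane coordinates
L = interaction_matrix(pts, depths=np.full(8, 0.35), f=0.008)
print(L.shape)   # (16, 6)
```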

Fig. 5. Repulsive barrier function for the FOV constraints (right) and the defined limits in the desired image state, even when occluded (left).


To calculate M_r = \partial r/\partial Υ, i.e., the variation of the camera frame r with respect to the variation of the trajectory Υ, we need to obtain L_w^{-1}, which was already described in [3]:

M_r = [ R_c^T, 0_{3×3} ; 0_{3×3}, L_w^{-1} ].   (12)

Finally, we obtain F_r(Υ) = -(\partial V_r(s)/\partial Υ)^T for all extracted wire-frame points, taking the lens distortion into account.
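A minimal sketch of the barrier potential (8), together with a finite-difference approximation of ∂V_r/∂s that feeds the repulsive force of (9); the image limits and test points are placeholders, and in practice an analytic gradient would be used.

```python
import numpy as np

def barrier_potential(s, um, uM, vm, vM):
    """Repulsive potential of Eq. (8) for the 2n-vector s = [u0, v0, u1, v1, ...].
    Returns +inf once any point leaves the allowed region."""
    u, v = s[0::2], s[1::2]
    terms = (1 - u / um) * (1 - u / uM) * (1 - v / vm) * (1 - v / vM)
    if np.any(terms <= 0.0):
        return np.inf
    return -np.log(np.prod(terms))

def barrier_gradient(s, um, uM, vm, vM, h=1e-6):
    """Central finite-difference approximation of dVr/ds (analytic form omitted)."""
    g = np.zeros_like(s)
    for i in range(len(s)):
        e = np.zeros_like(s); e[i] = h
        g[i] = (barrier_potential(s + e, um, uM, vm, vM)
                - barrier_potential(s - e, um, uM, vm, vM)) / (2 * h)
    return g

# Placeholder limits expressed relative to the image center, and two test points.
s = np.array([100.0, -50.0, -210.0, 80.0])
print(barrier_potential(s, um=-320.0, uM=320.0, vm=-240.0, vM=240.0))
```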

V. TRAJECTORY WORKSPACE

The discrete-time trajectory at iteration k + 1, denoted by Υ_{k+1}, is obtained via (5) and (9), where the 6 × 1 vector Υ_j = [t_j, (uθ)_j], j = 0, ..., k, is a parametrization of the robot workspace that provides the 3D trajectory {x_j, y_j, z_j, α_j, β_j, γ_j}_{j=0}^{k} in 3D Cartesian space and the 2D trajectory, a set of k points in image space {r_j, c_j}_{j=0}^{k}, between the initial and desired camera frame states. Using the hand-eye calibration method explained above, the relative pose between the camera frame and the end-effector frame is obtained. Therefore, we can transfer the discrete generated 3D trajectory to the end-effector frame.
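As an illustration of the Υ = [t, uθ] parametrization used above, the following sketch converts between the 6-vector and a homogeneous pose using the axis-angle (Rodrigues) map; the use of SciPy's Rotation class is our choice for the sketch, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_upsilon(H):
    """Map a 4x4 homogeneous pose to the 6-vector Y = [t, u*theta]."""
    t = H[:3, 3]
    utheta = Rotation.from_matrix(H[:3, :3]).as_rotvec()   # axis * angle
    return np.concatenate([t, utheta])

def upsilon_to_pose(Y):
    """Inverse map: 6-vector [t, u*theta] back to a homogeneous pose."""
    H = np.eye(4)
    H[:3, :3] = Rotation.from_rotvec(Y[3:]).as_matrix()
    H[:3, 3] = Y[:3]
    return H
```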



In classical VS approaches, the image features are extracted to calculate the correct pose of the target. Position-based visual servoing (PBVS) and image-based visual servoing (IBVS) are the two main categories of VS [1]. PBVS uses image data to extract a series of 3D features and estimate the target position in order to control the robot motion in 3D Cartesian space. In IBVS, on the other hand, the extracted image features are used directly to control the robot. Our visual servoing system is similar to PBVS and employs the image data to estimate the position and orientation of the target to control the manipulator [9].

Creation of the error vector Δ is the basic idea behind our proposed VS; see Figure (6):

\Delta = \sum_{i=1}^{K} |p_i(t) - p_i^*(t)|,   (13)

where p_i(t) is the current pose estimated via the CAD-based recognition method and p_i^*(t) is the pose calculated from the feasible generated trajectory at the i-th trajectory point.

VI. EXPERIMENTS

The proposed scheme has been applied to a six-DOF Comau robot equipped with a monocular camera. The CAD models of the desired target and of the robot base are available. The goal is to generate a trajectory in image space between the initial and desired camera states such that the 3D markerless target remains in the FOV during the manipulation task. Our proposed hand-eye calibration approach determines the camera frame with respect to the robot base frame via (4) with reasonable accuracy. The error distribution of the measured values x, y, and z, for distances of 0.45 m to 0.75 m between the camera frame and the base frame, is illustrated in Figure (7).

Using the CAD-based recognition method, we estimate the position and orientation of the object center coordinates for the initial and desired camera states; see Table (I), [16].

TABLE I
INITIAL AND DESIRED POSES OF THE OBJECT CENTER COORDINATES

Pose      X(m)     Y(m)     Z(m)     α(deg)    β(deg)    γ(deg)
Desired   0.0120   0.0150   0.0799   130.241   -3.8164   62.824
Initial   -0.061   -0.054   0.3486   171.27    6.4141    251.28

In the desired camera state, the positions and orientations of the eight wire-frame vertices of the object with respect to the camera frame, denoted by s, are calculated via (1), (2), and (6). These positions and orientations form our desired goal workspace, Υ_d = 0_{6×1}; see Table (II).

Fig. 6. Proposed block diagram for the presented path planning scheme.


TABLE II
THE EXTRACTED POSES OF THE EIGHT TARGET WIRE-FRAME VERTICES IN THE DESIRED CAMERA STATE

Pose   x(m)      y(m)      z(m)     α(deg)   β(deg)   γ(deg)
P0     0.0261    0.0218    0.0768   119.9    359.3    100.5
P1     0.0260    0.0140    0.0723   119.9    359.3    100.5
P2     0.023     0.0148    0.0892   119.9    359.3    100.5
P3     0.0234    0.00701   0.0847   119.9    359.3    100.5
P4     -0.0229   0.0269    0.0691   119.9    359.3    100.5
P5     -0.0231   0.00963   0.0592   119.9    359.3    100.5
P6     -0.0256   0.0198    0.0815   119.9    359.3    100.5
P7     -0.0258   0.0025    0.0715   119.9    359.3    100.5

The positions and orientations of the target wire-frame vertices when the camera is in the initial state are estimated to obtain the initial camera workspace Υ_i. The goal is to generate a trajectory in image space between the initial and desired camera workspaces. The 2D trajectories generated with the repulsive force are illustrated in Figure (8). The 2D trajectory obtained for the same scenario without any repulsive potential force is illustrated in Figure (9).

Fig. 8. The generated 2D trajectory for desired (left) and initial camera image (right), with repulsive potential force.

Fig. 7. The error distribution of measured x, y and z for distances 0.4-0.75 m between the camera frame and the robot base frame.



A 3D trajectory is generated to show the entire path from the initial camera state to the desired camera state. This 3D trajectory, with and without the effect of the repulsive potential force, is illustrated in Figures (10) and (11), respectively. The generated camera 3D trajectory is transformed to the end-effector center coordinates via the proposed hand-eye calibration approach. The end-effector can then follow the trajectory in 3D Cartesian space to reach the desired pose, while the 2D trajectory ensures that the object image remains in the camera field of view.
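A minimal sketch of this transfer step: each planned camera pose is mapped to the corresponding end-effector pose through the hand-eye transform of (4); the frame convention stated in the comment is an assumption, and upsilon_to_pose refers to the earlier parametrization sketch.

```python
import numpy as np

def camera_path_to_end_effector(path_upsilon, E_H_C, upsilon_to_pose):
    """Map each planned camera pose (6-vector) to the corresponding end-effector
    pose: X_H_E = X_H_C * (E_H_C)^-1, under the convention that E_H_C is the
    camera pose expressed in the end-effector frame (Eq. (4))."""
    C_H_E = np.linalg.inv(E_H_C)
    return [upsilon_to_pose(Y) @ C_H_E for Y in path_upsilon]
```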


Fig. 9. The generated 2D trajectory for desired (left) and initial camera image (right), without any repulsive potential force.

Fig. 10. The generated camera 3D trajectory (position and orientation), with the effect of the repulsive potential force.

Fig. 11. The generated camera 3D trajectory (position and orientation), without any repulsive potential force.

VII. CONCLUSION

In this paper we present a new path planning scheme that integrates CAD-based recognition methods into image space path planning. The proposed framework allows us to generate consistent trajectories for a predetermined set of vertices or points belonging to the wire-frame model of a given target. One of the main benefits of this approach is its ability to generate trajectories for the 3D target image points even when parts of the object are occluded by external objects or by the target itself during the operation. Finally, this approach takes both the position and the orientation of the camera and end-effector into account along the generated trajectory between the initial and desired camera states, such that the wire-frame points remain in the camera FOV. Future work will be devoted to generating a 3D trajectory in image space for a robot manipulator equipped with a camera and subjected to external obstacles.

REFERENCES

[1] S. Hutchinson, G. D. Hager, and P. I. Corke, "A tutorial on visual servo control," Robotics and Automation, IEEE Transactions on, vol. 12, no. 5, pp. 651-670, 1996.
[2] D. Kragic and H. I. Christensen, "Survey on visual servoing for manipulation," Computational Vision and Active Perception Laboratory, Fiskartorpsv, vol. 15, 2002.
[3] Y. Mezouar and F. Chaumette, "Path planning in image space for robust visual servoing," in Robotics and Automation, 2000. Proceedings. ICRA'00. IEEE International Conference on, vol. 3. IEEE, 2000, pp. 2759-2764.
[4] ——, "Optimal camera trajectory with image-based control," The International Journal of Robotics Research, vol. 22, no. 10-11, pp. 781-803, 2003.
[5] M. Kazemi, K. Gupta, and M. Mehrandezh, "Global path planning for robust visual servoing in complex environments," in Robotics and Automation, 2009. ICRA'09. IEEE International Conference on. IEEE, 2009, pp. 326-332.
[6] Z. Yao and K. Gupta, "Path planning with general end-effector constraints," Robotics and Autonomous Systems, vol. 55, no. 4, pp. 316-327, 2007.


[7] L. Haifeng, L. Jingtai, L. Yan, L. Xiang, Y. Kaiyan, and S. Lei, "Trajectory planning for visual servoing with some constraints," in Control Conference (CCC), 2010 29th Chinese. IEEE, 2010, pp. 3636-3642.
[8] M. Kazemi, K. Gupta, and M. Mehrandezh, "Path planning for image-based control of wheeled mobile manipulators," in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012, pp. 5306-5312.
[9] C. Zang and K. Hashimoto, "A flexible visual inspection system combining pose estimation and visual servo approaches," in Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012, pp. 1304-1309.
[10] V. Sundareswaran and R. Behringer, "Visual servoing-based augmented reality," in IEEE Int. Workshop on Augmented Reality, 1998.
[11] É. Marchand and F. Chaumette, "Virtual visual servoing: a framework for real-time augmented reality," in Computer Graphics Forum, vol. 21, no. 3. Wiley Online Library, 2002, pp. 289-297.
[12] A. Comport, E. Marchand, M. Pressigout, and F. Chaumette, "Real-time markerless tracking for augmented reality: the virtual visual servoing framework," Visualization and Computer Graphics, IEEE Transactions on, vol. 12, no. 4, pp. 615-628, 2006.
[13] D. F. Dementhon and L. S. Davis, "Model-based object pose in 25 lines of code," International Journal of Computer Vision, vol. 15, no. 1, pp. 123-141, 1995.
[14] M. Ulrich, C. Wiedemann, and C. Steger, "CAD-based recognition of 3D objects in monocular images," in International Conference on Robotics and Automation, vol. 1191, 2009, p. 1198.
[15] U. Klank, D. Pangercic, R. B. Rusu, and M. Beetz, "Real-time CAD model matching for mobile manipulation and grasping," in Humanoid Robots, 2009. Humanoids 2009. 9th IEEE-RAS International Conference on. IEEE, 2009, pp. 290-296.
[16] http://www.mvtec.com/halcon/.
[17] M. Ulrich, C. Wiedemann, and C. Steger, "Combining scale-space and similarity-based aspect graphs for fast 3D object recognition," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 34, no. 10, pp. 1902-1914, 2012.
[18] Z. Ziaei, A. Hahto, J. Mattila, M. Siuko, and L. Semeraro, "Real-time markerless augmented reality for remote handling system in bad viewing conditions," Fusion Engineering and Design, vol. 86, no. 9, pp. 2033-2038, 2011.
[19] B. Tamadazte, E. Marchand, S. Dembélé, and N. Le Fort-Piat, "CAD model-based tracking and 3D visual-based control for MEMS microassembly," The International Journal of Robotics Research, vol. 29, no. 11, pp. 1416-1434, 2010.
[20] O. Khatib, "Real-time obstacle avoidance for manipulators and mobile robots," The International Journal of Robotics Research, vol. 5, no. 1, pp. 90-98, 1986.


