Hand gesture recognition for surgical control based on a depth image

I. Famaey¹, K. Buys², D. Van Deun¹, T. De Laet², J. Vander Sloten¹, and J. De Schutter²

¹ Dep. of Mechanical Engineering, Biomechanics Section, KU Leuven, Belgium
² Dep. of Mechanical Engineering, Robotics Research Group, KU Leuven, Belgium

Corresponding author: Koen Buys. Email: buys dot koen (at) gmail dot com

Abstract

The introduction of hand gestures as an alternative to existing interface techniques could result in groundbreaking changes in healthcare and in everyday life. This research area is confronted with many challenges: variable illumination conditions, cluttered backgrounds, etc. Most recognition techniques for static hand gestures can be divided into three categories: methods based on low-level features, appearance-based approaches, and methods based on high-level features. This article presents three methods for hand gesture recognition, one from each category. The first method is based on information about the convexity defects of the 2D hand contour, the second compares the 2D hand contour with a database, and the third performs 3D template registration between the hand and predesigned templates. Hand detection is performed using depth information from the Microsoft Kinect. The three methods are evaluated and compared; their recognition rates are 97%, 73.9%, and 50%, respectively. The first method was chosen for the implementation of a gesture-based tool for the control of visualization displays in the operation room. The developed system can recognize eleven static gestures that are intuitive, easily distinguishable, and minimally tiring for the surgeon.

Keywords: hand gesture recognition, depth camera, computer control, surgical control

1 Introduction

Gestures are an important part of everyday communication amongst humans. They emphasize information in combination with speech or can substitute speech completely. They can signify greetings, warnings, or emotions, or they can signal an enumeration, provide spatial information, etc. But if gestures are so crucial for human interaction, why do we not use them when interacting with machines?

The aim of this article is to develop a system that can recognize a variety of hand gestures using information from a 3D camera. The camera used is the Microsoft Kinect [1, 2], which captures RGB as well as depth data. The system should operate on-line and should be person-independent and robust.

The application of interest for the gesture-based interface technique proposed in this paper is browsing through medical images in the operation room (OR). Surgeons agree that the advantages of such a system are fivefold:

• Sterility: Not all human-computer interfaces are easy to keep sterile. When using hand gestures, there is no need to touch anything.

• Autonomy: Machines in the OR are often controlled by an assisting nurse. With the proposed system the surgeon can perform the computer control without any help from a nurse.

• Ergonomy: The posture of a surgeon during surgery is often very unnatural. The proposed system eliminates the need to walk towards the computer and crouch in front of the screen. The surgeon can perform the computer control from his position at the operation table.

• Communication: If the system is able to understand the pointing gesture of the surgeon, this can be used to mimic the function of a laser pointer, such that his fellow surgeons or nurses can immediately understand what he is pointing at.

• Availability of information: The use of hand gestures makes the computer more easily accessible. As such, surgeons will be more willing to use computers during surgery.

2 Related work

The human hand is a highly deformable articulated object with a total of about 27 degrees of freedom (DOFs) [3]. As a consequence, the hand can adopt a variety of static postures that can have distinct meanings in human communication. Hand pose recognition techniques consist of two stages: hand detection and hand pose classification. First the hand is detected in the image and segmented; afterwards, information is extracted that is used to classify the hand posture. This classification allows the posture to be interpreted as a meaningful command [4].

A first group of researchers focuses on these so-called 'static' hand poses. A second research domain is the recognition of 'dynamic' hand gestures, in which not the pose but the trajectory of the hand is analyzed. This article focuses on static hand poses. For more information on dynamic gestures see Doliotis et al. [5], Shan et al. [6], etc.

Hand detection techniques can be divided into two main groups [3]: data-glove based and vision-based approaches. The former use sensors attached to a glove to detect the hand and finger positions [7]. The latter require only a camera, so they are relatively low cost and minimally obtrusive for the user [8]. Vision-based approaches can detect the hand using information about depth, color (Sanchez-Nielsen et al. [9], Tang et al. [10]), etc.

Once the hand is detected, hand posture classification methods for vision-based approaches can be divided into three categories [11]: low-level features, appearance-based approaches, and high-level features. These are explained in the following sections.

2.1 Low-level features

Many researchers have argued that full reconstruction of the hand is not necessary for gesture recognition. Therefore, these methods only use low-level image features that are fairly robust to noise and can be extracted quickly. An example of low-level features used in hand posture recognition is the radial histogram. Tang et al. [10] use the radial histogram as a reference for comparison with other techniques.

2.2 Appearance-based approaches

Appearance-based methods use a collection of 2D intensity images to model the hand [11]. Such an image collection can, for example, be compactly modeled using Principal Component Analysis [12]. Researchers who use appearance-based approaches include Murthy et al. [13], Rautaray et al. [12], Sanchez-Nielsen et al. [9], and Stenger et al. [14].

2.3 High-level features

Methods relying on high-level features use a 3D hand model. High-level features can be derived from the joint angles and the pose of the palm [11]. Most model-based approaches create a 3D model of a hand by defining kinematic parameters and project the 3D model onto a 2D space. The hand posture can be estimated by finding the kinematic parameters of the model that result in the best match between the projected edges and the edges extracted from the input image [11]. Other approaches reconstruct the hand posture as a 'voxel model', based on images obtained by a multi-viewpoint camera system. The joint angles are then estimated by directly matching the three-dimensional hand model to the measured voxel model.

One advantage of 3D hand model based approaches is that they allow a wide range of hand gestures if the model has enough DOFs. However, these methods also have disadvantages: the database required to cover different poses under different views is very large, complicated invariant representations have to be used, the initial parameters have to be close to the solution at each frame, and the fitting process is very sensitive to noise. Due to the complexity of the 3D structures used, these methods are typically slower than the other approaches, a problem that must be mitigated to assure on-line performance. Researchers who use model-based approaches include Bretzner et al. [15], Breuer et al. [16], Guomundsson et al. [17], and Yang et al. [18].

3 Materials and Methods

3.1 Design requirements

Many factors have to be considered for sterile browsing of images in the operation room using hand gesture recognition. Above all, the system should be as user-friendly as possible. The surgeon should not have to wear a specific type of clothing or gloves other than the ones he is already wearing. Therefore, hand detection methods based on skin color cannot be used [19], and glove-based approaches are only possible with the white gloves he is already wearing. The gestures should be simple, feel natural, and be as little tiring as possible. The system should be able to discern whether the surgeon is gesturing towards the system or doing something else, such as talking to a nurse. This could be achieved by implementing a start and/or stop sign. It is also important that the surgeon gets feedback on whether the input was acquired correctly. The system should work on-line, and the duration for inputting one command should be as short as possible. Since lighting conditions during surgery can vary strongly, gesture recognition must be insensitive to illumination changes (e.g. by using only depth information). Finally, the system should perform well for a user standing at a distance of 1.5 meters from the camera: this is the distance from the surgeon to the screen in front of him, on which the Kinect will be placed.

Figure 1: The eleven selected gestures

3.2 Selected gestures

For safety reasons, the most important condition for the surgeon is that the gesture can be performed in a restricted amount of space and that the collision risk with colleagues is minimal. This is the reason why this article focuses on the recognition of static hand gestures. The selected gestures are shown in Fig. 1. The choice was based on the following criteria: the gestures should be intuitive, not tiring, and well distinguishable from each other.

3.3 Methods

As explained in section 2, methods for recognition of static hand gestures can be divided into three major groups. This paper presents one method from each group. For the low-level features, the use of convexity defects is discussed in section 3.4.1; for the appearance-based approach, a database method is discussed in section 3.4.2; and for the high-level features, a model-based implementation is discussed in section 3.4.3. For the implementation of the three methods, the PCL [20, 21, 22], OpenCV [23], MakeHuman [24], Blender [25], and Blensor [26] open source software frameworks were used.

3.4 Segmentation

Before each group of static hand gesture recognition algorithms can be evaluated, the hand needs to be extracted from the image. For this we either use a heuristic rule based on the closest point to the camera combined with a region of interest, or an approach based on random decision forest labeling. For the first segmentation method, the assumption is made that the user's hand is the object closest to the Kinect camera. The hand can then be filtered out with a passthrough filter on the depth image. Segmentation is, however, not the focus of this paper; it is discussed more elaborately in [27], with more surgery specifics in [28].
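As an illustration, the closest-point heuristic can be expressed in a few lines. The following Python snippet is a minimal sketch, not the authors' PCL implementation; the 0.15 m band around the closest point is an assumed parameter.

```python
import numpy as np

def segment_hand(depth_m, band_m=0.15):
    """Passthrough filter on a Kinect depth image (in meters).

    Keeps only pixels within band_m of the closest measured point,
    assuming the user's hand is the object closest to the camera.
    """
    valid = depth_m > 0                      # zero encodes "no depth measurement"
    if not valid.any():
        return valid                         # nothing measured in this frame
    z_min = depth_m[valid].min()             # distance of the closest point
    return valid & (depth_m <= z_min + band_m)
```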

3.4.1 Convexity defects

In the low-level feature-based method, the convexity defects of the segmented hand are evaluated. The segmented 3D pointcloud of the hand is projected onto a 2D binary image, from which the hand contour is derived after arm correction. In a next step several features are calculated: the concave points of this contour, their neighbouring convex points, and the depth and the angle of each convexity defect (i.e. the depth of the concave point with respect to its neighbouring convex points, and the angle formed by these three points). These features provide information on the number of extended fingers and on the smoothness of the hand contour, which makes it possible to distinguish the different gestures. To classify a gesture, a set of rules is defined for each possible gesture that compares the extracted features to the ones expected for that gesture.
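A sketch of this feature extraction using OpenCV's convexity-defect API is given below (the paper's implementation also builds on OpenCV [23]). The depth and angle thresholds (20 px, 80°) are illustrative assumptions, not the authors' values.

```python
import cv2
import numpy as np

def finger_gap_count(hand_mask, depth_min_px=20.0, angle_max_deg=80.0):
    """Count deep, sharp convexity defects (valleys between extended fingers)."""
    # Largest external contour of the binary hand mask (OpenCV 4.x signature).
    contours, _ = cv2.findContours(hand_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    cnt = max(contours, key=cv2.contourArea)       # assume hand = largest blob
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)
    gaps = 0
    if defects is not None:
        for start_i, end_i, far_i, depth_fp in defects[:, 0]:
            a = cnt[start_i][0] - cnt[far_i][0]    # vectors from the concave point
            b = cnt[end_i][0] - cnt[far_i][0]      # to its neighbouring convex points
            cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
            # The defect depth is stored as a fixed-point value (units of 1/256 px).
            if depth_fp / 256.0 > depth_min_px and angle < angle_max_deg:
                gaps += 1
    return gaps   # roughly (extended fingers - 1) when the fingers are spread
```

A classification rule then tests such features: an open hand, for instance, would be expected to produce four deep defects, whereas a fist produces none.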

3.4.2 Appearance-based method

This method is largely analogous to the first one. It also uses a segmentation step to detect the hand, transforms the pointcloud into a 2D binary image, and computes the hand contour. The difference is that, instead of using information about the convexity defects, a database is used for hand gesture classification. This database consists of several sub-databases, each of which contains the hand contours of ten persons performing one of the hand gestures. Using Hu invariant moments, which are seven weighted averages (moments) of the intensities of the image pixels [29], the hand contour of the input image is compared to each hand contour in the database. For every sub-database, the result of this comparison is averaged over its members. The gesture type represented by the sub-database with the best average result is then chosen as the best match, which completes the classification.
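This comparison can be sketched with OpenCV's matchShapes, which internally compares Hu-moment invariants [29]. The dictionary layout of the database is an assumption made for illustration.

```python
import cv2

def classify_contour(cnt, database):
    """database: {gesture_name: [stored contours, one per subject]} (assumed layout).

    Returns the gesture whose sub-database has the lowest average
    Hu-moment dissimilarity to the input contour.
    """
    best_name, best_avg = None, float("inf")
    for name, samples in database.items():
        # matchShapes compares Hu invariant moments; lower means more similar.
        scores = [cv2.matchShapes(cnt, ref, cv2.CONTOURS_MATCH_I1, 0.0)
                  for ref in samples]
        avg = sum(scores) / len(scores)
        if avg < best_avg:
            best_name, best_avg = name, avg
    return best_name
```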

3.4.3 MakeHuman model

The third method provides a first step towards a method based on a hand model, as explained in section 2.3. It uses the hand model provided by the program MakeHuman. This model is manually forced to adopt the selected hand gestures of interest in Blender. In a next step, a pointcloud of each hand gesture is generated. These pointclouds are used as templates. During computer control, template registration is performed between the measured pointcloud of the user and the model pointclouds using the iterative closest point (ICP) algorithm. The measured pointcloud is classified as the gesture with the best match, provided the match passes a minimal threshold on the ICP fit.
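The registration step could look as follows. This is a sketch using the Open3D library (version 0.10 or later) rather than the PCL-based implementation used by the authors; the correspondence threshold and the minimal fitness are assumed values.

```python
import numpy as np
import open3d as o3d

def classify_by_icp(measured_xyz, templates, max_corr_dist=0.01, min_fitness=0.6):
    """measured_xyz: (N, 3) array; templates: {gesture_name: (M, 3) array}."""
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(measured_xyz)
    best_name, best_fitness = None, 0.0
    for name, xyz in templates.items():
        target = o3d.geometry.PointCloud()
        target.points = o3d.utility.Vector3dVector(xyz)
        result = o3d.pipelines.registration.registration_icp(
            source, target, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if result.fitness > best_fitness:      # fraction of matched points
            best_name, best_fitness = name, result.fitness
    # Reject the match if it does not pass the minimal threshold on the ICP fit.
    return best_name if best_fitness >= min_fitness else None
```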

3.5 Image Browsing System

The previous methods can be used to control a computer through hand gestures. This paper proposes to use the recognized gesture for browsing through medical images. As feedback, the user is shown a continuous stream of on-line images in which he can see himself, the detected hand contour, and the name of the gesture as which his hand pose is classified, as shown in Fig. 2. The user can switch between image types (CT/MRI/etc.) and image series, and can zoom in and out by using the selected hand gestures.

Figure 2: Computer screen with picture-in-picture feedback stream. Source CT scan: [30]
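The mapping from recognized gestures to browsing commands can be kept in a simple dispatch table; the gesture names and the viewer interface below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical gesture names and viewer API, for illustration only.
COMMANDS = {
    "one_finger":  lambda viewer: viewer.next_image(),
    "two_fingers": lambda viewer: viewer.previous_image(),
    "open_hand":   lambda viewer: viewer.zoom_in(),
    "fist":        lambda viewer: viewer.zoom_out(),
    "thumb_up":    lambda viewer: viewer.next_series(),
}

def dispatch(gesture_name, viewer):
    action = COMMANDS.get(gesture_name)
    if action is not None:      # ignore unrecognized or low-confidence gestures
        action(viewer)
```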

4 Results

In this section, the three methods are evaluated and compared. A qualitative overview of their performance for five criteria is given in Table 1.

4.1 Recognition rate

The convexity defects method has the highest recognition rate. The recognition rate of the method with the MakeHuman model is rather low at the time of writing; however, if the template registration technique is improved in future work, this rate is expected to increase.

Table 1: Qualitative comparison of the three methods (ratings from -- to ++).

| Criterion        | Convexity defects | Database | MakeHuman model |
|------------------|-------------------|----------|-----------------|
| Recognition rate | ++                | +        | --              |
| Extensibility    | -                 | +        | -               |
| Anatomy          | -                 | +        | +               |
| Frame rate       | ++                | ++       | --              |
| Both hands       | yes               | no       | no              |

Table 2: Overview of the performance of the three methods.

| Criterion                     | Convexity defects | Database | MakeHuman model |
|-------------------------------|-------------------|----------|-----------------|
| Detection method              | 3D passthrough filtering + arm correction | 3D passthrough filtering + arm correction | 3D passthrough filtering |
| Classification method         | Analysis of convexity defects of the hand contour | Shape matching of the hand contour with a database | ICP template registration |
| Recognition rate [%]          | 97.0              | 73.9     | 50.0            |
| Multiple hands                | y                 | y        | y               |
| Self-occlusions               | y                 | y        | y               |
| Scale invariant               | y                 | y        | y               |
| Rotation invariant            | y                 | y        | y               |
| Amount of postures            | 11                | 10       | 8               |
| Speed [frames/s]              | 30                | 30       | 1.88            |
| User-independent              | y                 | y        | y               |
| Clothing                      | y                 | y        | Any             |
| Different lighting conditions | y                 | y        | y               |
| Challenging backgrounds       | y*                | y*       | y*              |

* As long as the hand is the object closest to the Kinect.


4.2 Extensibility

For every new gesture in the convexity defects method, a new piece of code has to be written. The database method requires the addition of a new sub-database that contains a series of the desired hand contours. If PointCloud Data (PCD) files of these gestures already exist, this method can be adjusted quickly. For the third method, the MakeHuman hand model has to be shaped manually into the desired pose and exported as a PCD file.

4.3 Human anatomy

The convexity defects method has a low score for human anatomy, because it does not make use of high-level anatomical information. If a 3D hand model is used, this method could have a very high score, depending on the number of DOFs of the model.

4.4 Frame rate

Both the method based on convexity defects and the appearance-based method can be used on-line. The frame rate of the third method can be increased by improving the template registration technique.

4.5 Both hands

The convexity defects method works for both hands (left and right). The other methods work for one hand only.

4.6 Evaluation of performance

A more elaborate overview of the three methods for a larger number of performance criteria is provided in Table 2.

5 Discussion

At the time of writing, the method using information about the convexity defects in the hand contour is the most robust and the most user-friendly. Even though it is based on ad hoc information, which makes extending the method to a larger number of gestures somewhat difficult, its performance is very reliable. The method that achieves hand gesture recognition by use of a database of hand contours suffers from a great sensitivity to the presence of a portion of the arm in the contour. Interestingly, even with a carefully designed database, recognition rates are lower in practice than for the ad hoc method. At the moment, the third method has a rather low recognition rate. If its template registration technique can be improved in future research, this method could provide a first step towards a hand gesture recognition technique based on a 3D hand model, as explained in section 2.3. Such a method could find the pose of the hand by comparing the shape of the 3D hand model and the measured pointcloud for different model parameters. Depending on the number of DOFs implemented in the 3D hand model, a large number of gestures can be recognized.

6 Conclusion

Hand gesture recognition was achieved using three methods: one based on low-level features, one appearance-based approach, and one based on high-level features. Although the last one is the most promising, the first one currently delivers the best recognition rate.

The focus was laid on one specific application: the creation of a gesture-based tool for the control of visualization displays in the operation room. A system was developed which allows the user to switch between image types (CT or MRI) and between image series, and to zoom in and out. Surgeons agree that this tool would mean an improvement for the sterility, the autonomy and ergonomy of the surgeon, the availability of information, and the communication with colleagues and nurses. The most important design requirements for such a system are that it can function within the conditioned environment of the operation room, and that it is robust and not tiring for the surgeon.

Interesting future work would be further research on the method that uses the MakeHuman model, as it can be used for the development of a method with a 3D hand model. In conclusion, hand gesture recognition is a very interesting and promising research area that could result in groundbreaking changes in healthcare and in everyday life.

Acknowledgements

Tinne De Laet is a Post-Doctoral Fellow of the Research Foundation – Flanders (FWO) in Belgium. Koen Buys is funded by KU Leuven's Concerted Research Action GOA/2010/011 "Global real-time optimal control of autonomous robots and mechatronic systems", a PCL-Nvidia Code Sprint grant, and an Amazon Web Services education and research grant. The work of Dorien Van Deun is carried out thanks to the financial support of the Agency for Innovation by Science and Technology (IWT) in Flanders, Belgium. The authors would like to acknowledge Dr. Van Cleynenbreugel from University Hospitals Leuven for his support.

References

[1] Microsoft XBOX Kinect. xbox.com, 2010.

[2] PrimeSense. primesense.com, 2010.

[3] P. Garg, N. Aggarwal, and S. Sofat, "Vision based hand gesture recognition," World Academy of Science, Engineering and Technology, vol. 49, pp. 972–977, 2009.

[4] G. Murthy and R. Jadon, "A review of vision based hand gestures recognition," International Journal of Information Technology, vol. 2, no. 2, pp. 405–410, 2009.

[5] P. Doliotis, A. Stefan, C. McMurrough, D. Eckhard, and V. Athitsos, "Comparing gesture recognition accuracy using color and depth information," in Conference on Pervasive Technologies Related to Assistive Environments (PETRA), 2011.

[6] C. Shan, T. Tan, and Y. Wei, "Real-time hand tracking using a mean shift embedded particle filter," Pattern Recognition, vol. 40, no. 7, pp. 1958–1970, 2007.

[7] Thalmic Labs, "Myo: the gesture control armband." https://getmyo.com/, April 2013.

[8] Leap Motion, Inc., "The Leap Motion controller." https://www.leapmotion.com/, April 2013.

[9] E. Sanchez-Nielsen, L. Antón-Canalís, and M. Hernández-Tejera, "Hand gesture recognition for human-machine interaction," Journal of WSCG, vol. 12, no. 1–3, pp. 2–6, 2003.

[10] M. Tang, "Recognizing hand gestures with Microsoft's Kinect," Department of Electrical Engineering, Stanford University, Palo Alto, 2011.

[11] R. Hassanpour, S. Wong, and A. Shahbahrami, "Vision-based hand gesture recognition for human computer interaction: A review," in IADIS International Conference on Interfaces and Human Computer Interaction, pp. 125–132, 2008.

[12] S. Rautaray and A. Agrawal, "A novel human computer interface based on hand gesture recognition using computer vision techniques," in Proceedings of the First International Conference on Intelligent Interactive Technologies and Multimedia, pp. 292–296, ACM, 2010.

[13] G. Murthy and R. Jadon, "Hand gesture recognition using neural networks," in 2010 IEEE 2nd International Advance Computing Conference (IACC), pp. 134–138, IEEE, 2010.

[14] B. Stenger, "Template-based hand pose recognition using multiple cues," in Computer Vision – ACCV 2006, pp. 551–560, 2006.

[15] L. Bretzner, I. Laptev, and T. Lindeberg, "Hand gesture recognition using multi-scale colour features, hierarchical models and particle filtering," in Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 423–428, IEEE, 2002.

[16] P. Breuer, C. Eckes, and S. Müller, "Hand gesture recognition with a novel IR time-of-flight range camera – a pilot study," in Computer Vision/Computer Graphics Collaboration Techniques, pp. 247–260, 2007.

[17] S. Guomundsson, J. Sveinsson, M. Pardàs, H. Aanæs, and R. Larsen, "Model-based hand gesture tracking in ToF image sequences," in Articulated Motion and Deformable Objects, pp. 118–127, 2010.

[18] X. Yang, J. Park, K. Jung, and H. You, "Development and evaluation of a 25-degree of freedom hand kinematic model."

[19] I. Oikonomidis, N. Kyriazis, and A. A. Argyros, "Full DOF tracking of a hand interacting with an object by modeling occlusions and physical constraints," in 2011 IEEE International Conference on Computer Vision (ICCV), pp. 2088–2095, IEEE, 2011.

[20] "PCL – Point Cloud Library." http://pointclouds.org, 2012.

[21] R. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–4, IEEE, 2011.

[22] R. B. Rusu, Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. PhD thesis, Computer Science Department, Technische Universität München, Germany, October 2009.

[23] "OpenCV Wiki." http://opencv.willowgarage.com/wiki/, June 2012.

[24] "MakeHuman — Home." http://www.makehuman.org/.

[25] "Blender.org — Home." http://www.blender.org/.

[26] "Blensor — Blender Sensor Simulation." http://www.blensor.org/.

[27] K. Buys, C. Cagniart, A. Baksheev, T. De Laet, J. De Schutter, and C. Pantofaru, "An adaptable system for RGB-D based human body detection and pose estimation," Journal of Visual Communication and Image Representation, 2013.

[28] K. Buys, D. Van Deun, B. Van Cleynenbreugel, T. Tuytelaars, and J. De Schutter, "Surgeon pose detection with a depth camera during laparoscopic surgeries," in Proceedings of the Conference on Digital Human Modeling, 2013.

[29] M. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.

[30] R. Ballinger, "MIRC document." http://www.mritutor.org/mriteach/8300/8300.html.
