Intelligent interface for robot object grasping

Christophe Leroux (1), Isabelle Laffont (2), Sophie Schmutz (3), Nicolas Biard (2), Rodolphe Gelin (1), Jean François Désert (3), Gérard Chalubert (1), Omar Tahri (1), Jean Marc Alexandre (1)

(1) CEA, List, BP 6, 92265 Fontenay-aux-Roses CEDEX, France
[email protected], [email protected], [email protected], [email protected]

(2) PFNT, Hôpital Universitaire Raymond Poincaré, 92380 Garches
[email protected], [email protected]

(3) UGECAM IDF, Centre de réadaptation de Coubert, D96, 77170 Coubert
[email protected], [email protected]

1 Abstract

In this paper, we describe the human computer interface developed in the AVISO system to assist severely handicapped (tetraplegic) people in intuitively grasping objects in their environment. The proposed solution relies on a dedicated robot arm with a gripper and a computer providing automatic vision-based control of the robot's motion. The grasping function developed requires neither marks on the objects of the environment nor any model of the objects. It is therefore easy to use, versatile, and applicable to a wide variety of objects. The AVISO system includes an intuitive human machine interface specially designed to take into account the needs and interaction constraints of physically handicapped people. The AVISO system was developed together with medical teams. It was then tested and evaluated with 15 handicapped people of different aetiologies and more than 32 able-bodied persons, under the supervision of occupational therapists of two physical medicine and rehabilitation services. The results are positive enough to consider an industrial transfer of this technology in the mid term. The paper ends with a description of future work to enhance the system and improve our man machine interface, so that it becomes even more usable by people who are not computer or robotics specialists.

2 Résumé

In this article, we describe the human machine interface developed for the AVISO system to help tetraplegic handicapped people grasp objects in their environment. The proposed solution relies on the use of a robot arm with its gripper, equipped with a controller allowing automatic control of the arm's movements. The grasping function designed requires no marking of the objects in the environment and uses no model of the objects. It is consequently easy to deploy, versatile, and applicable to a very wide variety of objects. The AVISO system includes an intuitive human machine interface specially designed to take into account the needs and interaction constraints of people with a physical handicap. The AVISO system was developed in collaboration with medical teams. It was then tested and evaluated with 15 handicapped people of different aetiologies, as well as with about fifty able-bodied persons, under the supervision of occupational therapists of two Physical Medicine and Rehabilitation services. The results are positive enough to consider an industrial transfer of this technology in the medium term. The article ends with a presentation of future work to increase the performance of the system and improve the human machine interface, so that it becomes even more usable by people who are not specialists in robots or computers.

3 Síntesis

The following article describes the human machine interface developed in the AVISO system to give severely handicapped (tetraplegic) people the ability to grasp the objects around them. The proposed solution is based on a robot arm fitted with a gripper and a controller, allowing automatic control of the robot's movement. The procedure for reaching an object requires neither marking the objects nor using a model of them. It is therefore simple to carry out, versatile, and applicable to a great variety of objects. The AVISO system incorporates an intuitive human machine interface that takes into account the needs and constraints of people with this kind of physical disability. The AVISO system was developed in collaboration with two medical teams. It was subsequently tested with 15 handicapped people of different aetiologies, as well as with about fifty able-bodied persons, under the supervision of therapists from different hospitals and rehabilitation centres. The results are positive enough to consider introducing this technology into the industrial domain in the medium term. The article ends with a presentation of future work to increase the performance of the system and improve the human machine interface, so that it is even simpler to use for people who are specialists in neither robots nor computers.

Keywords: Tetraplegia, Man Machine Interface, Grasping, Robot, Computer Vision

4 Introduction

In classical robotics applications, robots endlessly repeat pre-programmed gestures with great accuracy, in an environment entirely designed for them. Vision systems can relax some of these constraints when designing a robotic cell, since the robot becomes able, in pick and place applications for example, to adapt its gesture to the actual position of the object to grasp in its workspace. In general, thanks to knowledge of the object's shape, and in an environment with controlled lighting conditions, the robot can fit its trajectory to an optimal grasping trajectory. In this paper, we propose a vision-based method to control a robot arm that works on a priori unknown objects and in any kind of environment. An application of this method to the control of a robot arm for disabled people has allowed us to validate it.

5 Development context

Despite their potential usefulness and developments led for many years throughout the world, robots are still scarcely used in public domains, and especially in the world of handicapped people [2]. One reason is the high cost of existing systems. Another is the complexity of these machines, which makes such systems difficult to use for people unfamiliar with technical notions: the smallest problem quickly becomes a hassle that is difficult to overcome. In this article, we present an intelligent man machine interface to assist people in grasping objects of everyday life. This interface, centred on usage, aims at hiding the technology of the equipment involved. It is intelligent in that it minimizes the activity required from the end user: a single operator action triggers a function with complex objectives, while still allowing the operator to keep control over the operations. It is intelligent, in addition, in that it masks the technological aspects of the system used. As we will show in the part devoted to the validations, the function also reduces the differences between handicapped and able-bodied people, going in the direction of "design for all". The function presented in this document uses low-cost and compact equipment in order to be ready for broad use. The software function was named AVISO (Assistance by VIsion for the Seizure of Objects).

6 State of the art

Grasping an object is one of the elementary robotics tasks, and several works address this field. Object seizure by a robot can be broken up into two operations: approaching the object, then gripping it. One can consider that the approach phase corresponds to a displacement in open space, the objective being, geometrically, to bring the gripper close to the object. In the gripping phase, the problem is to ensure a stable hold on the object. For this phase, grippers of various types have been developed; see [1], [18] for examples of industrialized robot grippers, and [19] for studies on object gripping. In what follows, we focus on the computation of the approach movement for the seizure of an object.

In industrial robotics, when one wishes to seize a tool, adapted mechanical interfaces can be designed. In this case, whether the manipulator arm is precise or not, it is possible to run seizure operations in open loop by using the repeatability of the arm, i.e. its capacity to replay a previously learned displacement. This kind of principle is used in industry and in hostile-environment tele-robotics [4], [5], where the positions of the tools to be seized or deposited are known perfectly, and the type and number of objects to be seized are known before the task.

When the position of the objects to be seized is not known a priori, the procedure described above no longer works. Since manipulator arms, with some exceptions, have weak precision, using sensors to guide the displacements of the arm towards the object becomes a requirement. Most frequently, except for instance in hostile environments, video sensors are used. One must then distinguish the case where the object to be seized is known a priori from the case where one does not know which object must be seized. If the object is known, a seizure operation can be led quite efficiently using control techniques. To keep the presentation light, we restrict it to visual control, the principle being easily transposable to other sensors. In principle, a visual control consists in controlling the displacements of the arm according to the differences between a reference to reach and the current information provided by a vision system. Visual control techniques are usually classified as:

• 3D control: control is based on 3D information, generally reconstructed from a model of the observed object and its image.

• 2D control: control is based on image information only.

• 2D½ control [12]: at each iteration of the control law, one estimates the homography associated, for example, with a reference plane on the object, between the current image and a desired image. From this homography, the 3D rotation and the extended image coordinates (i.e. the coordinates in the image and the relative distance) are used to carry out a closed-loop control law for the six degrees of freedom of the camera.
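All three families instantiate the same closed-loop structure. As background from the visual servoing literature [3], [4] (the paper itself does not spell this equation out), the command typically takes the form:

```latex
% Standard visual servoing law (literature background, not taken from this paper):
%   s       current feature vector (3D features, image features, or 2D 1/2 features)
%   s*      desired value of the features at the goal position
%   L_s^+   pseudo-inverse of an estimate of the interaction matrix
%   lambda  positive scalar gain
\[
  \mathbf{v}_c = -\lambda \,\widehat{\mathbf{L}_s}^{+} \left( \mathbf{s} - \mathbf{s}^{*} \right)
\]
```

where the camera velocity v_c is sent to the arm controller; the three classes differ only in which features populate s.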

For further information and implementations, see [3], [4], [5], and [13].

For object seizure, one can distinguish solutions using a vision system that observes the object to be seized (and possibly the gripper) from those using a sensor mounted on the gripper of the robot. In the first category one finds the solution implemented by [20], seizing quasi-planar objects with a Mitsubishi manipulator arm in a behavioural approach. The cameras observe the gripper and the object to be seized; the gripper and the object are black, the background is white, and lighting is controlled. Identifying the object's type determines the behaviour to adopt and starts the movement of the arm towards the perceived object. A mark fixed on the object is used as a reference for the seizure, and a 2D visual control manages the approach towards the object. This is also the solution adopted in [9], which describes a seizure method based on a 2D control in which the objects are marked and the camera observes both the arm and the object to be seized in the same view. [15] describes a seizure experiment using the Barrett hand [1] on the MIT UMASS TORSO humanoid; the authors use vision to determine the objects' grasping position, the objects being planar and observed from the top. In [20] the authors tested various techniques to control the seizure of unknown objects using a humanoid robot. The device includes a stereoscopic camera mounted on the head of the humanoid; they observe the gripper, on which LEDs are fixed, in order to track it with a 2D visual control. In another experiment, a laser line makes it possible to identify the objects to be seized and to plan a seizure trajectory; in this case, the arm is not controlled using vision information. In Sweden, [10] uses a geometrical model of the objects to perform the seizure; the experiments are led on a PUMA manipulator arm, with the cameras observing the scene. [16] used 2D visual control to carry out the seizure of a parallelepiped from information provided by a camera mounted on the gripper of a PUMA arm. [14] used a camera mounted in the gripper of a MANUS arm to manage the seizure of objects by visual control; the unknown objects are modelled geometrically and, before triggering the visual control, the operator assigns the object a semantic label. The arm and camera unit are mounted on a mobile platform.

Whether they use cameras on the gripper or a camera assembly observing the object to be seized (and the gripper), the solutions described require marks on, or (geometrical) models of, the objects to be seized. The solution that we present requires neither marking nor model. It relies on a localization of the object to seize from the man machine interface; in this way, the designation of the object plays a fundamental part in the seizure process. The principle used is described in the following paragraph. The man machine interface designed, as well as the visual control developed, are detailed in the two parts after that. We finish the document by presenting the validations carried out with handicapped people, and evoke in the conclusions the next steps of our work.

7 General principle

The general principle of the assistance function relies on the use of a manipulator robot. A stereoscopic sensor is mounted on the robot's gripper. The image from one of the cameras is displayed on a screen presented to the user. The operator interacts with the machine through an adapted device; by default, the system works with a mouse. The operator uses the image to indicate the object of the environment that he wishes to seize. After validation, the movements of the arm towards the indicated object are controlled from vision information, so as to bring the gripper into position to catch the object. Once the gripper is in position, gripping is triggered and the object is brought back near the operator.

The robot used is the ARM (MANUS) manipulator arm from the Exact Dynamics company; the robot was lent by AFM for the validations. The stereoscopic sensor was built for the needs of this function from Philips webcams. The cameras are used at the same time to guide the movement of the robot under visual control and to provide information on the scene to the user. The gripper is equipped with an optical barrier, which makes it possible to detect precisely the moment when the object is between the bits of the gripper. The computer is a 2.8 GHz PC; it supports the man machine interface and carries out the control computations. The PC is connected to the robot controller through a CAN bus. The architecture is presented in Figure 1 below.

[Figure 1 diagram, labels: AVISO MMI, CAN, USB, Controller, RS232, MANUS, Gripper, Stereovision sensor.]

Figure 1 : System architecture

The principle of the solution that we propose relies on the approximate designation, by the operator, of the object to seize on the displayed image. The designation is kept very simple in order to minimize the activity of the operator. The difficulty then consists in locating the object and discriminating between the object that the user truly intended to indicate, the other objects, and the background of the image. After designation, the operator triggers the automatic movement of the robot. This movement is controlled using the information provided by the stereoscopic sensor. The arm moves towards the object down to ten centimetres from it, the distance below which the object is no longer perceived simultaneously in the two images. The seizure of the object is then carried out in blind mode, and the object is brought back automatically near the user. The general principle is described in Figure 2.

[Figure 2 flow chart: object designation → automatic approach until D ≤ 0.1 m → blind grasping → object in gripper → stop motion, close gripper → gripper closed → automatic return.]

Figure 2 : Flow chart of the seizure principle
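For readability, the sequence of Figure 2 can also be rendered as a small state machine. The sketch below is ours, not the AVISO code; the state names are invented, but the transitions and the 0.1 m threshold follow the paper.

```python
# Illustrative sketch (not the AVISO implementation) of the seizure
# sequence in Figure 2; transitions and thresholds follow the paper.
from enum import Enum, auto

class State(Enum):
    DESIGNATION = auto()   # operator surrounds the object on the image
    APPROACH = auto()      # visually controlled motion towards the object
    BLIND_GRASP = auto()   # last ~10 cm: object no longer seen by both cameras
    CLOSING = auto()       # optical barrier fired: stop motion, close gripper
    RETURN = auto()        # bring the object back near the user
    DONE = auto()

def next_state(state, distance_m, object_in_gripper, gripper_closed):
    """One transition of the flow chart of Figure 2."""
    if state is State.DESIGNATION:
        return State.APPROACH                      # operator validated the zone
    if state is State.APPROACH:
        return State.BLIND_GRASP if distance_m <= 0.1 else State.APPROACH
    if state is State.BLIND_GRASP:
        return State.CLOSING if object_in_gripper else State.BLIND_GRASP
    if state is State.CLOSING:
        return State.RETURN if gripper_closed else State.CLOSING
    return State.DONE
```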

8 The Man Machine Interface

8.1 The way it works

The human machine interface answers several requirements related to the assistance of handicapped people. It has a minimal number of elements; it is intuitive and easy to get to grips with; and it completely masks the technical aspects of the system. It consists of a window displaying the video image of one of the webcams and four buttons controlling the displacement of the arm. To carry out the seizure, the user must first direct the gripper so that the object is in the field of view of the cameras. This done, the operator must surround the object approximately with a rectangular zone on the displayed image (see Figure 3), then validate this selection to trigger the automatic movement of the arm. A stop key is available to stop the movement of the arm in the event of a problem; while the arm moves automatically, the mouse focus is forced onto the stop button area, so that it behaves like an emergency stop requiring a single press.

Figure 3 : Object designation Man Machine Interface

The MMI software can interface with most of the user control devices available in the world of handicap (mouse, head alignment, single contactor, wheelchair command, etc.). A scan control mechanism (moving bar) was implemented so that people whose only faculty is to validate an action (press on a button, blow, muscular contraction…) can use the system.
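As an illustration of such scan control (our sketch, not the shipped code; the command labels and dwell time are invented), the MMI highlights each command in turn and a single validation selects the highlighted one:

```python
# Sketch of single-switch scan control: the interface cycles through the
# commands, and one validation (button press, blow, muscle contraction...)
# selects the currently highlighted command. Labels are invented here.
COMMANDS = ["move left", "move right", "move up", "move down", "stop"]
SCAN_PERIOD_S = 1.0  # dwell time on each command before the bar moves on

def scan_select(validation_received):
    """Cycle through COMMANDS; validation_received(timeout) waits up to
    `timeout` seconds and returns True if the user activated the contactor."""
    index = 0
    while True:
        highlighted = COMMANDS[index % len(COMMANDS)]
        print("highlighted:", highlighted)
        if validation_received(SCAN_PERIOD_S):
            return highlighted
        index += 1

# Example with a stand-in for the real contactor: select the third command.
if __name__ == "__main__":
    hits = iter([False, False, True])
    print("selected:", scan_select(lambda timeout: next(hits)))
```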

8.2 Feature extraction

Once the zone in the image is defined, it is necessary to locate and "identify" the object truly indicated by the operator. This localization is based on the postulate that, in the area delimited by the operator, the object dominates the other objects and the background of the scene. Under this assumption, one solution for localizing the object consists in building a disparity map, but this process is expensive and time consuming. The method we propose avoids the construction of a disparity map by selecting feature points in the left and right images. For this stage, feature points are computed using the Harris and Stephens corner detector [7].
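A minimal sketch of this step using OpenCV's Harris-based corner selection; the image file, rectangle coordinates and detector parameters below are invented for the example.

```python
# Sketch of feature extraction in the operator-selected zone, using the
# Harris & Stephens corner measure [7] via OpenCV. The image file and the
# rectangle coordinates are invented for illustration.
import cv2
import numpy as np

image = cv2.imread("left_camera.png")            # one of the two webcam images
x, y, w, h = 120, 80, 160, 140                   # zone surrounded by the operator
gray = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

corners = cv2.goodFeaturesToTrack(
    gray, maxCorners=200, qualityLevel=0.01, minDistance=5,
    useHarrisDetector=True, k=0.04)              # Harris corner response
points = corners.reshape(-1, 2) + np.array([x, y])  # back to full-image coordinates
print(len(points), "feature points found in the designated zone")
```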

8.3 Matching

The technique used for matching features between the left and right images is a voting technique somewhat similar to the Hough transform. The principle is to retain the pairs of points whose distance to the camera is the most frequent. The points extracted in the left and right images are associated using the epipolar geometry constraint; from each association, a distance to a 3D point is calculated. However, along an epipolar line, a point of one image can have several correspondents in the other image, so each candidate match contributes one distance hypothesis.

[Figure 4 diagram: a characteristic point A_J in the right image and its candidate correspondents B_J1, B_J2, …, B_JK inside the search area for corresponding characteristic points along the epipolar line in the left image.]

Figure 4 : Statistical matching

[Figure 5 plot: histogram of distance hypotheses; the main peak corresponds to the soda can.]

Figure 5 : Distance peaks

8.4 Localization and selection of the object

Once all the matching hypotheses are computed, the selection of the object consists in retaining the most frequent distance. We assume that this distance corresponds to the object indicated by the operator. We then use the barycentre of the matched points to determine the direction in which the object lies.
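Under the simplifying assumption of rectified cameras (so that epipolar lines are image rows; the calibration numbers and tolerances below are invented), sections 8.3 and 8.4 can be sketched as follows:

```python
# Sketch of the statistical matching and selection of sections 8.3-8.4,
# assuming rectified stereo images (epipolar lines are rows). The focal
# length, baseline and tolerances are invented for the example.
import numpy as np

FOCAL_PX, BASELINE_M = 700.0, 0.06   # hypothetical stereo calibration
ROW_TOL_PX, DIST_BIN_M = 1.5, 0.02   # epipolar tolerance and histogram bin

def select_object(left_pts, right_pts):
    """Return (modal distance, barycentre of the left points that voted for it)."""
    votes = {}                                     # distance bin -> voting left points
    for xl, yl in left_pts:
        for xr, yr in right_pts:
            if abs(yl - yr) > ROW_TOL_PX or xl <= xr:
                continue                           # violates epipolar/disparity constraint
            z = FOCAL_PX * BASELINE_M / (xl - xr)  # one distance hypothesis per pairing
            votes.setdefault(int(z / DIST_BIN_M), []).append((xl, yl))
    bin_id, voters = max(votes.items(), key=lambda kv: len(kv[1]))
    distance = (bin_id + 0.5) * DIST_BIN_M         # most frequent distance wins
    barycentre = np.mean(voters, axis=0)           # direction towards the object
    return distance, barycentre

dist, centre = select_object([(330.0, 200.0), (350.0, 210.0)],
                             [(260.0, 200.0), (280.0, 210.0)])
print(f"object at ~{dist:.2f} m, image barycentre {centre}")
```

The appeal of this scheme is that wrong pairings scatter their votes across many distance bins, while pairings belonging to the designated object accumulate in one bin, at a fraction of the cost of a full disparity map.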

9 Vision control

Once the localization is carried out, the user triggers the automatic approach movement of the robot towards the object. The movements of the robot are then controlled using the information extracted from the images. At each step, a projection of the window defined by the operator onto each of the left and right images is calculated, taking the displacement into account. Feature points are extracted from each of these two windows and then matched, using the same principle as during designation, to compute an updated distance to the object. The barycentre of the corner points is updated, and a Cartesian speed command proportional to the distance to the object is calculated. This process is repeated during robot motion, to ensure a correct approach, until convergence to the preset approach distance (ten centimetres from the object in practice). When the approach distance is reached, the robot covers the remaining distance in blind mode. The gripper is then closed and the object is brought back towards the user automatically.
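A toy version of this proportional control loop is sketched below; the gain, control period and positions are invented, and the real loop re-estimates the target from fresh stereo matches at every step rather than assuming a fixed target.

```python
# Toy simulation of the approach loop of section 9: a Cartesian velocity
# proportional to the remaining distance, stopping at the 10 cm threshold
# where the blind phase takes over. Gain, period and positions are invented.
import numpy as np

GAIN, STOP_DIST_M, PERIOD_S = 0.8, 0.10, 0.05

gripper = np.zeros(3)                      # gripper position in a toy frame
target = np.array([0.60, 0.15, 0.30])      # object position from stereo voting

while True:
    error = target - gripper               # in AVISO, re-estimated at each step
    distance = np.linalg.norm(error)       # from freshly matched corner points
    if distance <= STOP_DIST_M:
        break                              # switch to blind mode for the rest
    gripper += GAIN * error * PERIOD_S     # proportional Cartesian speed command

print(f"approach finished at {distance:.2f} m; blind grasp of the last segment")
```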

10 Evaluations

The evaluations were carried out successively in the Physical Medicine and Rehabilitation units of two establishments: Raymond Poincaré University Hospital in Garches and the rehabilitation centre of Coubert. The interface was evaluated by 32 able-bodied "control" subjects and 10 patients whose aetiologies were as follows: muscular dystrophy, locked-in syndrome, spinal cord injury, multiple sclerosis, arthrogryposis and amyotrophic lateral sclerosis. Each person used a device adapted to his or her handicap situation (or "incapacity"). A clinical description of their characteristics is given in Figure 6.

Patient | Age | Gender | Pathology | Interface pointing | Interface validation
1 | 29 | F | Arthrogryposis | Trackball | Mouse click
2 | 54 | M | Spinal cord injury | Trackball | Mouse click
3 | 48 | F | Multiple sclerosis | Mouse | Mouse click
4 | 48 | M | Amyotrophic lateral sclerosis | Mouse | Mouse click
5 | 47 | M | Spinal cord injury | Mouse | Mouse click
6 | 50 | F | Muscular dystrophy | Mouse | Mouse click
7 | 35 | M | Spinal cord injury | Joystick | Click derived by a switch button
8 | 34 | M | Locked-in syndrome | Piloting with head tracking | Dwell click
9 | 5 | M | Muscular dystrophy | Joystick | Click derived by a switch button
10 | 29 | M | Muscular dystrophy | Joystick | Click derived by a switch button

Figure 6: Clinical characteristics and pointing/validation interfaces used by the ten tetraplegic patients.

The tasks to be performed were as follows:

• successive seizure of three objects laid out on a flat surface: a video tape, a soda can, a salt box;

• seizure of a water bottle placed at height, practically at the maximum reach of the arm;

• collecting a bottle from the floor (to test the gripping of objects located outside the subject's field of view).

It can be noted (see Figure 7) that for the able-bodied subjects, the number of validations required, i.e. the number of actions on a contactor, is relatively constant and does not vary with the object to select: around six actions on the contactor per gripping task. The tetraplegic patients needed a statistically higher number of validations (6.08 +/- 2.15 versus 8.42 +/- 3.77; T-test p < 0.0001). Object by object, this difference is significant only for two objects: the saltbox (p = 0.02) and the bottle on the ground (p = 0.003).

[Figure 7 plot: number of validations per object (video tape, soda can, saltbox, bottle, bottle on the floor) for control subjects and tetraplegic patients.]

Figure 7: Number of necessary validations

The results obtained from the control subjects show 53.1% "satisfied" and 46.9% "very satisfied" subjects. The results obtained from the tetraplegic patients are comparable, with 60% of patients "satisfied" and 40% "very satisfied" (Figure 8).

[Figure 8 plot: percentage of subjects per satisfaction level (not satisfied at all, more or less satisfied, satisfied, very satisfied) for control subjects and tetraplegic patients.]

Figure 8: Piloting mode satisfaction

All of the users consider the equipment not tiring to use: 90.6% of the control subjects and 80% of the tetraplegic patients estimate that its use is "not tiring at all" (Figure 9).

[Figure 9 plot: percentage of subjects per tiredness level (very tiring, tiring, quite tiring, not tiring at all) for control subjects and tetraplegic patients.]

Figure 9: Device usage

The learning results are also very encouraging: among the able-bodied subjects, one person considers the training "difficult", 28.1% consider it "easy" and 68.8% consider it "very easy". Among the tetraplegic patients, 40% find the training "easy" and 60% find it "very easy" (Figure 10).

[Figure 10 plot: percentage of subjects per learning difficulty level (very difficult, difficult, easy, very easy) for control subjects and tetraplegic patients.]

Figure 10: Learning interface handling

The time needed by the user to select the target and to trigger the robot command (exploration time) was measured for 15 subjects (including three tetraplegic people).

[Figure 11 plot: duration of the selection of the object in seconds, per object (video tape, soda can, saltbox, bottle, bottle on the floor), for control subjects and tetraplegic patients.]

Figure 11: Target selection duration

The tetraplegic patients have a slightly longer exploration time than the control subjects, but this difference is not significant (Figure 11).

11 Conclusion

The analysis of the results obtained with the population of able-bodied control subjects confirms the ease of use, the effectiveness and the reliability of the graphical interface for controlling the robot: few failures, and fast realization of the gripping tasks with few validations on the MMI. The qualitative results obtained in this population confirm the great satisfaction of able-bodied users with this piloting mode. The only reservation (the handling of fragile objects) seems attributable to the lack of control of the gripping force of the robot rather than to its piloting interface. The results obtained with the tetraplegic patients are very encouraging: their performance during object seizure is comparable to that of the control subjects with regard to the number of failures and the duration of the task. The graphical interface seems to meet its aims of ease of learning and use, reliability and adaptability. The verbal satisfaction scales are in agreement with these quantified results and confirm, in the two populations, the satisfaction with this piloting mode as well as the ease of learning to handle the AVISO graphical interface.

The semi-structured interviews make it possible to plan the future developments needed to best answer handicapped people's expectations. Increasing the camera's field of view could save time and allow a more powerful exploration of the person's environment. Improving the gripper could allow the seizure of complex objects. Enthusiasm is high for a mobile base, which would make it possible to explore the environment in another room, and to choose whether or not to take the system along when going out. Two patents were filed, and contacts are under way with Exact Dynamics with a view to a technology transfer.

12 Acknowledgments

This development was conducted in partnership with the APPROCHE association and with the financial support of the Fondation des Caisses d'Epargne pour la Solidarité, as well as support from OSEO ANVAR in the frame of the AVISO contract. APPROCHE is an association for the promotion of new technologies for handicapped people; it offers possibilities to validate prototypes in one of its ten re-education or functional adaptation establishments. We particularly thank Dr. Michel Busnel, president of the APPROCHE association, and Annie Cron Picard for their support throughout this project. We especially thank Philippe Vallet and Claude Dumas of the Technical Assistance Service of AFM, thanks to whom we could carry out the experiments with the MANUS arm.

13 References

[1] http://www.barrett.com/
[2] M. Busnel, R. Gelin and B. Lesigne, "Evaluation of a robotized MASTER/RAID workstation at home: Protocol and first results", Proc. ICORR 2001, Vol. 9, pp. 299-305, 2001.
[3] F. Chaumette, "Asservissement visuel", in La commande des robots manipulateurs, W. Khalil (ed.), Chap. 3, Traité IC2, Hermès, 2002.
[4] P. I. Corke, "Visual control of robot manipulators - A review", in Visual Servoing, K. Hashimoto (ed.), World Scientific, New York, 1993.
[5] P. Guermeur and E. Pissaloux, "A qualitative image reconstruction from an axial image sequence", Proc. AIPR, Washington, October 2001.
[6] G. D. Hager, G. Grunwald and G. Hirzinger, "Feature-based visual servoing and its application to telerobotics", Proc. IEEE/RSJ/GI Int. Conf. on Intelligent Robots and Systems, Sept. 1994, Vol. 1, pp. 164-171.
[7] C. Harris and M. Stephens, "A combined corner and edge detector", Proc. 4th Alvey Vision Conf., 1988, pp. 147-151.
[8] K. Hashimoto, T. Kimoto, T. Ebine and H. Kimura, "Manipulator control with image-based visual servo", Proc. 1991 IEEE Int. Conf. on Robotics and Automation, Sacramento, CA, April 1991, Vol. 3, pp. 2267-2272.
[9] R. Horaud, F. Dornaika and B. Espiau, "Visually guided object grasping", IEEE Transactions on Robotics and Automation, Vol. 14, No. 4, August 1998.
[10] D. Kragic and H. I. Christensen, "Robust visual servoing", The International Journal of Robotics Research, Vol. 22, No. 10-11, October-November 2003, pp. 923-939.
[11] C. Leroux, M. Guerrand, C. Leroy, Y. Méasson and B. Boukarri, "MAGRITTE: a graphic supervisor for remote handling interventions", 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation (ASTRA 2004), ESTEC, Noordwijk, The Netherlands, 2-4 November 2004.
[12] E. Malis, F. Chaumette and S. Boudet, "2 1/2 D visual servoing", IEEE Transactions on Robotics and Automation, 15(2):238-250, April 1999.
[13] N. Maru, H. Kase, S. Yamada, A. Nishikawa and F. Miyazaki, "Manipulator control by visual servoing with the stereo vision", Proc. 1993 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Yokohama, Japan, July 1993, Vol. 3, pp. 1866-1870.
[14] A. Matsikis, M. Schmitt, M. Rous and K.-F. Kraiss, "Ein Konzept für die mobile Manipulation von unbekannten Objekten mit Hilfe von 3D-Rekonstruktion und Visual Servoing", Informatik aktuell, 15. Fachgespräch Autonome Mobile Systeme (AMS), München, 26-27 November 1999, pp. 179-187 (in German).
[15] A. Morales, P. J. Sanz, A. P. del Pobil and A. H. Fagg, "Vision-based three-finger grasp synthesis constrained by hand geometry", unpublished.
[16] N. Papanikolopoulos and C. Smith, "Issues and experimental results in vision-guided robotic grasping of static or moving objects", Industrial Robot, 25(2):134-140, 1998.
[17] N. Rezzoug and P. Gorce, "A biocybernetic method to learn hand grasping posture", Kybernetes, Vol. 32, No. 4, pp. 478-490, 2003.
[18] http://www.robosoft.fr/
[19] Y. Rybarczyk, S. Galerne, P. Hoppenot, E. Colle and D. Mestre, "The development of robot human-like behaviour for an efficient human-machine cooperation", AAATE, pp. 274-279, 2001.
[20] G. Taylor and L. Kleeman, "Grasping unknown objects with a humanoid robot", Proc. Australasian Conf. on Robotics and Automation, Auckland, 27-29 November 2002.
[21] K. Vollmann and M. C. Nguyen, "Manipulator control by calibration-free stereo vision", SPIE Int. Conference on Intelligent Robots and Computer Vision XV, Boston, November 1996.
