Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication, Technische Universität München, Munich, Germany, August 1-3, 2008

Robot Task Control Utilizing Human-in-the-loop Perception Wonpil Yu, Jae-Yeong Lee, Heesung Chae, Kyuseo Han, Yucheol Lee, and Minsu Jang

Abstract— We propose a network robot application utilizing a human-in-the-loop structure. The proposed structure hybridizes robot perception with the perception capability of a human, making the robot robust to uncontrolled environments. Network infrastructure enables the user to intervene in the operation of the robot whenever necessary, enabling seamless transition of task control between the robot and the remote user. We have developed a prototype security robot system by implementing the proposed human-in-the-loop structure. Introduction of network infrastructure, however, also brings system instability due to network fluctuation, time delay, or jitter. We resolved this instability by imparting task intelligence to the robot and by employing a wireless distribution system (WDS) configuration for seamless data communication beyond the coverage of a single access point. Although our approach may solve only part of the problems inherent in networked robots, we found that the developed system is readily applicable to real-world robotic systems incorporating large-area robot navigation.

I. INTRODUCTION

Until recently, robots have demonstrated their usefulness in a limited number of applications such as welding, assembly, packaging, and the like. These applications typically comprise a series of predetermined operations in a precisely controlled environment. Today, there is a growing interest in robots that serve humans beyond manufacturing sites and accommodate the various demands raised as social, industrial, and personal needs change. While traditional robotics research focused on enhancing the functionalities of robots, a new computing paradigm, now called ubiquitous computing, has been proposed [1]. To date, a number of novel technologies supporting the idea of ubiquitous computing have been proposed: radio frequency identification, wireless sensor networks, mobile devices, broadband convergence networks, etc. We may consider network robotics [15] a specific embodiment of ubiquitous computing from the standpoint of robotics. A network robot, once called an online robot or Internet robot in the early days of the Internet, utilizes wireless communication to transmit sensor and robot control signals to and from remote users or collaborating robots. Wireless communication has contributed to the expansion of robotic applications but has at the same time introduced system instability originating from bandwidth fluctuation, time delay, or network jitter. As a solution to this problem, imparting as much local intelligence as possible to a remote robot (or client) has been suggested [15].

The authors are with the U-Robot Research Division at ETRI (Electronics and Telecommunications Research Institute), Daejeon, Korea. E-mail: [email protected].

978-1-4244-2213-5/08/$25.00 ©2008 IEEE

Aside from the network problem, it is well known that perception of the environment is one of the key obstacles to be overcome for commercialization of robotic systems [13]. Improving robot service quality in dynamic environments is a challenging task for building commercial robotic systems; a recent work [14] suggested a semi-autonomous, human-in-the-loop approach for manipulation in human environments. Recent experiments in adjustable autonomy also give much insight into designing a human-robot interaction mechanism, describing human meta-level control over the level of robot autonomy [12]. However, it is rare to find a robotic system that faithfully performs given tasks in uncontrolled situations. Although wireless communication introduces network-related instability, the primary goal of this research is to design a human-in-the-loop perception mechanism that copes with dynamic environments by making use of pervasive network and communication infrastructure. For a realistic demonstration of the proposed human-in-the-loop structure, we have developed a security robot system based on network and communication infrastructure. A variety of recent security robot systems make use of network or communication infrastructure [6]–[11]. An early development of a ground surveillance robot introduced a blackboard architecture for communication of commands and exchange of sensor data among the constituent elements of the robot [3], where each individual element may be considered primitive from the standpoint of current technology. A recent work described an architecture for a security robot utilizing WLAN and mobile IPv6 [4], proposing a solution for seamless network hand-off when the robot crosses from the coverage of one WLAN into another. A probabilistic approach implemented coordination of multiple robots for a surveillance task, taking the limitations and uncertainties of sensors into account for operation in large environments [5].
As described earlier, detection and matching of events originating from a dynamic environment is a formidable task for any robotic system; in fact, it is almost impossible for a robot to distinguish individual events originating from a dynamic environment, whereas the same task is simple for a human. It is therefore natural that a robotic system carrying out a specific task in an ordinary environment should be equipped with a mechanism for dealing with unexpected events in cooperation with humans [14]. The most convenient approach to this goal is to implement a telepresence system [2], whereby a human supervisor can intervene in the operation of a robotic system whenever it fails to handle unexpected events. It is, however, a paradoxical situation that while network and communication infrastructure are


a practical means for the robotic system to deal with uncontrolled situations, the same infrastructure may introduce instability into the robotic system. We have organized the paper into five sections. Section II gives an overview of the developed security robot system. Section III describes our implementation of the proposed HIL (Human-In-the-Loop) structure for the security robot system. Section IV describes a demonstrative operation of the developed security robot system, and Section V concludes the paper.


II. OVERVIEW OF THE DEVELOPED SECURITY ROBOT SYSTEM

We have implemented a prototype security robot system on the ground floor of our office building, which measures 74.8 m × 25.2 m. The respective roles of the subsystems constituting the developed security robot system are much the same as those of conventional security robot systems [6]–[11]. A key difference lies in the proposed control architecture, which allows the user to intervene in the task, and in the decision maker embedded in the robotic system. As shown in Fig. 1, the decision maker module decides who is entitled to execute the current task, i.e., the perception task, at a particular time. The decision maker module can be located inside the robot or on a remote server; we have placed it inside the robot, insulating the task intelligence from network instability, while the remote server takes care of updates or changes to the task intelligence. We have defined two fundamental system requirements for our robotic system: robustness to dynamic environments and large-area robot navigation. We have further combined the above requirements into a single phrase specifying our design goal: a dependable security robot capable of navigating at a maximum speed of 0.5 m/s in a large office environment1.

Fig. 1. (a) Conventional controller. (b) The proposed human-in-the-loop controller.
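The role of the decision maker in the feedback structure of Fig. 1-(b) can be sketched as follows. This is a minimal illustration of the control-routing idea, not the deployed implementation; the class and method names are our own.

```python
# Sketch of the human-in-the-loop decision maker of Fig. 1-(b):
# under normal conditions the robot's own perception drives control;
# when an event signals a failure, control is routed to the remote user.
# All names here are illustrative, not from the actual system.

class DecisionMaker:
    """Decides who is entitled to execute the current perception task."""

    def __init__(self):
        self.pending_events = []

    def report_event(self, event):
        # Event data: localization failure, path blockage, system failure, ...
        self.pending_events.append(event)

    def select_controller(self):
        # If the robot cannot interpret the situation, hand over to the user.
        return "remote_user" if self.pending_events else "robot"

dm = DecisionMaker()
assert dm.select_controller() == "robot"        # autonomous feedback path
dm.report_event("localization_failure")
assert dm.select_controller() == "remote_user"  # human-in-the-loop path
```

Placing this module inside the robot, as the paper does, keeps the switching logic operational even when the network link to the server degrades.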

A. Data Types

We have identified three types of data: environment data, robot data, and event data. The following are brief descriptions of the respective data types.
• Environment data: the wireless sensor network provides sensed data obtained by sensor nodes distributed across the environment. We developed a sensor network based on ZigBee [16], in which sensing data obtained from a distant sensor node are transferred to the sink node installed inside the robot by multi-hop relay.
• Robot data: sensor data gathered by sensors installed on the robot—video, sound, range data, remaining battery level, and so on.
• Event data: spontaneous data delivered to the decision maker module of the robot system when the robot fails to understand the current situation. Localization failure, path blockage, or system failure corresponds to this type and is used to trigger a different feedback path for robot control (see Fig. 1-(b)).

1 The maximum speed may be changed to a different value depending on the situation, which is not a primary concern of this paper.

B. Robot System

We have developed a network robot that carries out a security task in the office environment. Fig. 2 shows the external appearance of the developed security robot. The on-board sensors and hardware modules of the robot can be classified into two groups.
• Communication/network component: a sink node for gathering sensing data delivered from wireless sensor nodes, a WLAN module, and an HSDPA (High Speed Downlink Packet Access) module to handle 3G-based communication in case of network failure
• Control/sensor component: Navi-Guider [18], localization sensor, laser scanner, video camera, microphone, bumper, and motion sensor
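The HSDPA module is described above as a backup for 3G-based communication in case of network failure. A simple interface-selection rule of the kind this implies can be sketched as follows; the function and its inputs are hypothetical, not the robot's actual networking code.

```python
# An illustrative WLAN-to-HSDPA failover for the communication component:
# the 3G module serves as a backup when the WLAN link fails.
# The link-status flags are assumed to come from the network drivers.

def choose_link(wlan_up, hsdpa_up):
    """Return which interface should carry robot data right now."""
    if wlan_up:
        return "wlan"        # preferred: higher bandwidth, e.g. for video
    if hsdpa_up:
        return "hsdpa"       # fallback: 3G keeps control traffic alive
    return "none"            # both down: robot must rely on local intelligence

assert choose_link(True, True) == "wlan"
assert choose_link(False, True) == "hsdpa"
assert choose_link(False, False) == "none"
```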

Control requests and responses are delivered between the remote user and the robot through the WLAN module installed in the robot. Video signals are encoded according to the H.263 standard for transmission, while other sensor data are encoded in our own data format.

C. Wireless Localization System

We have developed an infrared-based localization sensor. The sensor comprises a detector and a set of two IR tags; the tags are attached to the ceiling and the detector is installed on top of the robot. The detector is equipped with an IR filter so that only IR light from the tags reaches the image sensor inside the detector. Localization is based on the triangulation principle; at the initialization stage, the detector controls the on-off state of each IR tag for identification using 2.4 GHz RF communication.
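The triangulation from two ceiling tags can be illustrated with a small 2D sketch. We assume, for illustration only, that the upward-looking detector reports each tag's position in the robot frame (in meters, after scaling image coordinates by the known ceiling height); the geometry and names are ours, not the sensor design of [17].

```python
# A minimal 2D sketch of ceiling-tag triangulation: an upward camera
# observes two IR tags with known world positions; the tag offsets
# measured in the robot frame give the robot pose (x, y, heading).
import math

def localize(tag1_w, tag2_w, tag1_cam, tag2_cam):
    """tag*_w: tag positions in the world frame (m); tag*_cam: the same
    tags measured in the camera/robot frame (m). Returns (x, y, theta)."""
    # Heading: compare the direction of the tag baseline in each frame.
    aw = math.atan2(tag2_w[1] - tag1_w[1], tag2_w[0] - tag1_w[0])
    ac = math.atan2(tag2_cam[1] - tag1_cam[1], tag2_cam[0] - tag1_cam[0])
    theta = aw - ac
    # Translation: rotate one robot-frame measurement into the world
    # frame and subtract it from that tag's known world position.
    c, s = math.cos(theta), math.sin(theta)
    x = tag1_w[0] - (c * tag1_cam[0] - s * tag1_cam[1])
    y = tag1_w[1] - (s * tag1_cam[0] + c * tag1_cam[1])
    return x, y, theta

# Robot at (2, 1) facing +x: both tags lie 1 m ahead in the robot frame.
x, y, th = localize((3, 1), (3, 2), (1, 0), (1, 1))
assert abs(x - 2) < 1e-9 and abs(y - 1) < 1e-9 and abs(th) < 1e-9
```

In practice such a fix would be fused with wheel-encoder odometry, which, as noted in the next subsection, reduces how many tags must be installed.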


Fig. 2. The external appearance of the developed security robot.

Fig. 3. (a) Tree routing algorithm. (b) Mobility problem.

Fig. 4. System structure of the developed security system.

A detailed description of the localization performance of the sensor can be found in [17]. To accommodate a large-area environment, additional tags are attached to the ceiling. Since the robot is equipped with wheel encoders, a proper fusion of encoder values and localization data significantly reduces the number of tags that must be attached to the ceiling.

D. Sensor Network

As described in Section II-B, the developed sensor network consists of two types of nodes: wireless sensor nodes utilizing the ZigBee network protocol and a sink node for collecting the sensing data measured by the wireless sensor nodes. Unlike existing sensor network applications, our work takes the mobility of a navigating robot into account. Under normal operating conditions, data from sensor nodes can reach the robot using the tree topology without any routing table, as shown in Fig. 3-(a) ("R" represents the navigating robot). If the robot moves out of the RF transmission range of its current parent node, it cannot communicate with the entire sensor network, as shown in Fig. 3-(b). Currently, ZigBee specification version 1.0 [16] does not support node mobility. To support the mobility of the robot, we have developed a mobility-supporting procedure consisting of four phases of operation: a link failure detection phase, a network discovery phase, a rejoin phase, and an announcement phase. In the proposed procedure, the sink node installed in the mobile robot (hereinafter called the mobile node) behaves as an end device of the ZigBee network; in other words, the mobile node must not have any child nodes in the sensor network, because if the communication link between the mobile node and its parent node breaks as the robot moves, all data transmission to or from the mobile node is disabled. Out of transmission range, the mobile node tries to re-establish its communication link to a nearby node in the rejoin phase of the proposed procedure.
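The four phases above can be sketched as a simple procedure run by the mobile node after its parent link breaks. The helper callbacks (scanning, joining, broadcasting) are hypothetical stand-ins for the ZigBee stack operations, not the actual firmware interface.

```python
# The four-phase mobility-supporting procedure, sketched as one pass of
# the mobile (sink) node's recovery logic. Phase names follow the text;
# the callback functions are illustrative assumptions.

PHASES = ["link_failure_detection", "network_discovery", "rejoin", "announcement"]

def handle_link_loss(scan_for_parents, join, broadcast_address):
    """scan_for_parents() -> list of candidate parent nodes,
    join(node) -> new network address or None on failure,
    broadcast_address(addr) announces the new address network-wide."""
    trace = ["link_failure_detection"]          # phase 1: parent link lost
    candidates = scan_for_parents()             # phase 2: discover nearby nodes
    trace.append("network_discovery")
    for node in candidates:                     # phase 3: rejoin via a new parent
        new_addr = join(node)
        if new_addr is not None:
            trace.append("rejoin")
            broadcast_address(new_addr)         # phase 4: announce the new
            trace.append("announcement")        # address, since rejoining
            return new_addr, trace              # changes it
    return None, trace                          # no parent found: retry later

addr, trace = handle_link_loss(lambda: ["n7"], lambda n: 0x3F2A, lambda a: None)
assert trace == PHASES and addr == 0x3F2A
```

The announcement phase matters because, as the text explains next, rejoining assigns the mobile node a new network address that the rest of the network would otherwise not know.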
If the mobile node rejoins the sensor network successfully, it obtains a new network address according to the ZigBee specification. Since sensor nodes retaining the previous network address of the mobile node can then no longer communicate with it, the new network address of the mobile node is announced to the entire sensor network. We consider

the proposed mobility-supporting sensor network essential to robot navigation applications utilizing network infrastructure, and we found it useful for the security robot system, where reliable data transmission is crucial.

III. HUMAN-IN-THE-LOOP ARCHITECTURE

Fig. 4 illustrates the basic structure of the developed security robot system. Although we present a security robot system in this paper, various other applications can utilize the proposed structure by implementing the relevant task intelligence, which corresponds to the semantic module of Fig. 4. The semantic module corresponds to the decision maker module of Fig. 1-(b). The three functions below are the primary tasks that the semantic module carries out.
• Exception handling: We have defined four exception types: system error, navigation error, localization error, and miscellaneous error. We also defined an error detection and identification method for each individual error. Some errors are difficult to define, and sometimes a system reboot is the only way to deal with such an error. As shown in Table I, the errors defined in the proposed robot system include the typical ones observed in an ordinary robot navigation system.
• Robot mode control: We have defined three operating modes: normal, emergency, and manual. Each mode has its own control scenario, and the semantic module determines transitions between the three modes.


TABLE I
ERROR DEFINITIONS OF THE PROPOSED ROBOT SYSTEM.

Error types           Description
System error          Robot OS suspension; Server OS suspension; Internal robot sensor failure; Internal robot controller failure
Navigation error      Start position error; Destination position error; Dynamic obstacles; Mobile platform error; Other unknown error
Localization error    Connection failure; Communication failure; Ambient light interference; Initialization failure
Miscellaneous error   Collision with passers-by; Bumper malfunction; Breakaway from normal paths or kidnapping; Charging failure; Abnormal low power; Physical damage of sensor nodes
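The error classification of Table I lends itself to a simple dispatch, sketched below with a few representative entries. The mapping and the handler actions are illustrative placeholders, not the deployed exception-handling logic.

```python
# An illustrative dispatch over the four exception types of Table I.
# Only a few table entries are shown; the handler actions are assumptions.

ERROR_TYPES = {
    "robot_os_suspension": "system",
    "dynamic_obstacles": "navigation",
    "ambient_light_interference": "localization",
    "charging_failure": "miscellaneous",
}

def handle_error(error):
    etype = ERROR_TYPES.get(error)
    if etype is None:
        return "reboot"           # some errors are hard to define: reboot
    if etype == "system":
        return "notify_user"      # system errors call for human intervention
    return f"recover_{etype}"     # others get a type-specific recovery routine

assert handle_error("dynamic_obstacles") == "recover_navigation"
assert handle_error("unknown_glitch") == "reboot"
```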


The user can override the current operating mode at any time, which triggers a transition to manual mode.
• Event handling: Events detected at the semantic module also trigger transitions among the three operating modes. In our implementation, detection of an event from a magnetic door sensor, a PIR (Passive InfraRed) sensor, or a combination of both triggers a transition from normal mode to emergency mode.
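The mode-control and event-handling behavior described above can be written as a toy rule set: a door or PIR event in normal mode fires a transition to emergency mode, and a user override always wins. This imitates the role the rule engine plays; the encoding itself is our illustrative simplification, not the system's actual rules.

```python
# A toy forward-chaining version of the mode-control rules.
# Rules are checked in priority order; the first whose condition
# holds fires and sets the next operating mode.

RULES = [
    # (condition(mode, facts) -> bool, next mode)
    (lambda mode, facts: "user_override" in facts, "manual"),
    (lambda mode, facts: mode == "normal" and
        ("door_sensor" in facts or "pir_sensor" in facts), "emergency"),
]

def step(mode, facts):
    for condition, next_mode in RULES:   # first matching rule fires
        if condition(mode, facts):
            return next_mode
    return mode                          # no rule fired: keep current mode

assert step("normal", {"pir_sensor"}) == "emergency"
assert step("emergency", {"user_override"}) == "manual"
assert step("normal", set()) == "normal"
```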

Fig. 5. A screen shot of the developed web-service client program.

The semantic module of Fig. 4 monitors various status data gathered by the robot system and determines whether user intervention is needed according to the decision made inside the module. By making use of network and communication infrastructure, the user can take care of imminent system errors, whereas traditionally the robot itself had to carry out error handling, leading to unsatisfactory task execution. More specifically, the semantic module provides functions for processing the world model, representing robot service knowledge, and performing service execution. These functions are implemented by an embedded rule engine called eBossam [19]. eBossam is a forward-chaining production rule engine built on the RETE algorithm [20] and contains a set of inference rules for performing simple reasoning over OWL ontologies. Reference [21] provides more details about the semantic module. The performance of the semantic module has a direct influence on the apparent behavior of the robot, and one may measure task intelligence by the number of user interventions during task execution—the fewer the interventions, the higher the task intelligence.

IV. EXPERIMENTAL RESULTS

The prototype security system is directed to an office monitoring application, where a robot carries out a routine patrol task under normal conditions. We have implemented the prototype security system on the ground floor of our

building, the area of which measures 74.8 m × 25.2 m (see Fig. 8 for the layout of the workplace). The localization network periodically provides location data to the robot and the server, with which the robot carries out its navigation task; at the same time, the user can check the current location of the robot by issuing an inquiry to the server. We have developed a client program that displays a 2D map, status data of the environment fed from the wireless sensor network2, and the location of the robot as it moves (Fig. 5 shows a screen capture of the developed web-service client program). The client program also provides a control interface for manual operation of the robot. When the semantic module detects an irregular situation through contextual information processing, it issues a new navigation task directing the robot to visit the spot in question. Also, when the semantic module decides to hand over task control, a short message, for example, is delivered to the user, after which the user may monitor the current scene via a video stream encoded according to the H.263 protocol (see Fig. 6-(a)) and handle the situation using the user interface provided by the remote terminal (in our case a mobile phone, an exemplary user interface of which is shown in Fig. 6-(b)).

As we clarified in Section II, another goal of this research is large-area robot navigation. To that end, the foremost requirement is a reliable wireless network covering a large area. A conventional choice for a network robot is to employ a wireless access point (AP) for data communication. A typical AP covers a radius of about 10–30 m, which implies a communication cutoff when a mobile robot moves out of the coverage of its current AP. The proposed human-in-the-loop perception also requires a reliable network over which various types of control and sensing data are delivered between the remote user and the robot. Fortunately, we could implement a large-area networking solution using commercial off-the-shelf wireless access points. The employed AP provides a WDS (Wireless Distribution System) function, which enables wireless connection of APs and is useful for building a wireless mesh network over a large area, so that the user enjoys seamless data service while moving around. Fig. 7 illustrates a basic configuration of APs utilizing the WDS function, and Fig. 8 illustrates our setup of APs for the large-area wireless network. From the viewpoint of mobile robot navigation, the robot keeps the same IP address as it transfers sensor data to a nearby AP, which removes the unnecessary reconnection sequence when the robot crosses AP boundaries and thus removes the associated network delay. Due to this property, we could implement the proposed human-in-the-loop perception seamlessly across the operating area.

2 u-clips in Fig. 4 represents a sensor node with the proposed mobility-supporting feature.

Fig. 6. (a) A snapshot of video streaming provided by the video camera installed in the robot. (b) The mobile user interface developed for remote manipulation of the security task.

Fig. 7. Wireless network setup using the WDS function.

Fig. 8. Setup of the large-area wireless network.

V. CONCLUSION

We have proposed a robot system utilizing a human-in-the-loop structure. Pervasive network and communication infrastructure makes the proposed structure available in ordinary environments. The proposed human-in-the-loop structure can turn a robot application that is vulnerable to environmental changes into a reliable solution. The design of the semantic module is key to making the proposed mechanism practical: domain and task knowledge must be properly encoded into its rule engine so that the remote user can handle unknown events originating from dynamic environments based on alarms raised by the semantic module. We also implemented a large-area navigation solution by setting up a wireless mesh network across a large area. The proposed robot security application is a generic framework that can be adapted to different robot applications by introducing the relevant task intelligence (a set of control and inference rules utilizing domain and task knowledge). Our future work includes the design of user interfaces that facilitate the use of the human-in-the-loop structure.

VI. ACKNOWLEDGMENTS

This work was partly supported by the IT R&D program of Korea MIC (Ministry of Information and Communication) and IITA (Institute for Information Technology Advancement) [2005-S-092-02, USN-based Ubiquitous Robotic Space Technology Development].

REFERENCES
[1] M. Weiser, "The computer for the 21st century," Scientific American, vol. 265, no. 3, pp. 66–75, 1991.
[2] E. Paulos and J. Canny, "PRoP: Personal roving presence," in Proc. ACM SIGCHI, vol. 1, 1998, pp. 296–303.
[3] S. Y. Harmon, "The ground surveillance robot (GSR): An autonomous vehicle designed to transit unknown terrain," IEEE J. Robotics and Automation, vol. 3, no. 3, pp. 266–279, Jun. 1987.
[4] C. Ku and Y. Cheng, "Remote surveillance by network robot using WLAN and mobile IPv6 techniques," in Proc. TENCON, 2007, pp. 1–4.
[5] M. Moors, T. Röhling, and D. Schulz, "A probabilistic approach to coordinated multi-robot indoor surveillance," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Aug. 2005, pp. 3447–3452.
[6] J. N. K. Liu, M. Wang, and B. Feng, "iBotGuard: An Internet-based intelligent robot security system using invariant face recognition against intruder," IEEE Trans. Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 35, no. 1, pp. 97–105, Feb. 2005.
[7] R. C. Luo and K. L. Su, "Autonomous fire-detection system using adaptive sensory fusion for intelligent security robot," IEEE/ASME Trans. Mechatronics, vol. 12, no. 3, pp. 274–281, Jun. 2007.
[8] Z. Dehuai, X. Gang, Z. Jinming, and L. Li, "Development of a mobile platform for security robot," in Proc. IEEE Int. Conf. Automation and Logistics, Aug. 2007, pp. 18–21.
[9] C. W. C. et al., "Development of a patrol robot for home security with network assisted interactions," in Proc. SICE Annual Conference, Sep. 2007, pp. 924–928.
[10] H. Andreasson, M. Magnusson, and A. Lilienthal, "Has something changed here? Autonomous difference detection for security patrol robots," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Oct. 2007, pp. 3429–3435.
[11] R. C. Luo, P. K. Wang, Y. F. Tseng, and T. Y. Lin, "Navigation and mobile security system of home security robot," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, Oct. 2006, pp. 169–174.


[12] J. W. Crandall and M. A. Goodrich, "Experiments in adjustable autonomy," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, Oct. 2001, pp. 1624–1629.
[13] M. Hägele, World Robotics 2007. Frankfurt, Germany: International Federation of Robotics, Oct. 2007, ch. 6, pp. 379–459.
[14] C. C. Kemp, A. Edsinger, and E. Torres-Jara, "Challenges for robot manipulation in human environments," IEEE Robotics & Automation Magazine, vol. 14, no. 1, pp. 20–29, Mar. 2007.
[15] G. McKee, "What is networked robotics?" in Proc. ICRA (Int. Conf. Robotics and Automation) Workshop on Omniscient Space, Apr. 2007.
[16] ZigBee Network Layer Specification 1.0, ZigBee Alliance, Dec. 2004.
[17] H. Chae, W. Yu, J. Lee, and Y. Cho, "Robot localization sensor for development of wireless location sensing network," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Oct. 2006, pp. 37–42.
[18] S. Na, H. Ahn, Y. Lee, and W. Yu, "Navi-Guider: An intuitive guiding system for mobile robot," in Proc. IEEE Int. Symp. Robot and Human Interactive Communication (RO-MAN), Aug. 2007, pp. 228–233.
[19] M. Jang and J. Sohn, "Bossam: An extended rule engine for OWL inferencing," in Proc. RuleML (LNCS vol. 3323), Nov. 2004.
[20] C. Forgy, "Rete: A fast algorithm for the many pattern/many object pattern match problem," Artificial Intelligence, vol. 19, pp. 17–37, 1982.
[21] W. Yu, J. Lee, H. Ahn, Y. Ha, J. Sohn, and Y. Kwon, "Design and implementation of a ubiquitous robotic space," IEEE Trans. Automation Science and Engineering, submitted for review.

