

Design and Implementation of a Ubiquitous Robotic Space

Wonpil Yu, Jae-Yeong Lee, Young-Guk Ha, Minsu Jang, Joo-Chan Sohn, Yong-Moo Kwon, and Hyo-Sung Ahn

Abstract—This paper describes a concerted effort to design and implement a robotic service framework. The proposed framework comprises three conceptual spaces: physical, semantic, and virtual, collectively referred to as a ubiquitous robotic space. We implemented a prototype robotic security application in an office environment, which confirmed that the proposed framework is an efficient tool for developing robotic services on top of an IT infrastructure, particularly for integrating heterogeneous technologies and robotic platforms.

Index Terms—Localization network, mobile robot, navigation, ubiquitous robotic space, wireless sensor network.

I. INTRODUCTION

AMBIENT intelligence denotes a digital environment that proactively, but sensibly, supports people in their daily lives [1]. From a robotics viewpoint, this type of smart space further supports a robot in understanding the environment and thus improves its interaction capability. Robotic Room [2], Omniscient Space [3], and WABOT-House [4] are a few examples of smart spaces that incorporate robotics. Since the implementation of a smart space requires the integration of a large number of heterogeneous components, architectural issues have also received much attention. As examples, Luo and colleagues described a prototypical configuration for networked robot systems [5], and the Distributed Intelligent Network Device (DIND) [6] proposed by Hashimoto et al. is one of the earliest works to implement an intelligent space using networked devices (e.g., networked cameras).

Manuscript received February 10, 2009. First published July 14, 2009; current version published September 30, 2009. This paper was recommended for publication by Associate Editor P. Remagnino and Editor M. Wang upon evaluation of the reviewers' comments. This work was supported by the R&D program of the Korea Ministry of Knowledge and Economy (MKE) and the Korea Evaluation Institute of Industrial Technology (KEIT) [2005-S-092-02, USN-based Ubiquitous Robotic Space Technology Development]. W. Yu, J. Lee, M. Jang, and J. Sohn are with the Robot Research Department, Electronics and Telecommunications Research Institute (ETRI), Daejeon, Korea (e-mail: [email protected]). Y. Ha is with Konkuk University, Seoul 143-701, Korea. Y. Kwon is with the Korea Institute of Science and Technology (KIST), Seoul 130-741, Korea. H. Ahn is with the Gwangju Institute of Science and Technology (GIST), Gwangju 500-712, Korea. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TASE.2009.2024925

Fig. 1. Conceptual structure of the proposed ubiquitous robotic space.

In addition, McKee and colleagues proposed a framework for networked robotic systems and the concept of a module that represents a networked resource and also acts as an interface to other modules [7]. Saffiotti and colleagues proposed the Physically Embedded Intelligent System (PEIS) ecology, characterized by ambient intelligence and networked robot systems [8]. Kim and colleagues introduced a ubiquitous function service and described an intelligent space that incorporates networked robot systems from a control point of view [9].

In this paper, we propose a ubiquitous robotic space (URS): a special environment in which robots are supported in understanding the environment through distributed sensing and computing, and can thus respond intelligently to human needs in the current context of the space. Fig. 1 illustrates the conceptual structure of the proposed URS, which comprises three spaces: physical, semantic, and virtual. Table I describes the structural features of the works introduced above, as well as their primary differences from our current work. It should be noted that the structural and functional roles of the three proposed spaces are similar to those in the work of Saffiotti and colleagues [8]. As opposed to the aforementioned works, however, we paid particular attention to the integration of a large number of heterogeneous technologies and robotic components in the implementation of an actual robotic service. This paper describes how we developed the fundamental elements of each space and how we integrated the three independently developed spaces into an actual robotic application.

The remainder of this paper is organized as follows. Section II describes the basic elements of the physical space and the characteristics of each individual element. Section III introduces the semantic space and its elements. The virtual space is described in Section IV, and a robotic security application based on the proposed ubiquitous robotic space is described in Section V. Section VI concludes this paper and introduces future work.

II. THE PHYSICAL SPACE

Wireless sensor networks and localization networks are two fundamental components supporting the physical space, controlling the behavior of a robot inside it. Another component essential for managing the three spaces and the data flow among them is the URS server.


TABLE I
DESCRIPTION OF PREVIOUS WORKS AND DIFFERENCES FROM THE PROPOSED ARCHITECTURE

The URS server also handles database issues for multiple users, multiple robots, dynamic changes of the URS, and so forth, as well as various connectivity tasks between users and the Internet or existing communication networks. Building an actual physical space requires various components beyond these three. Since the proposed robotic space aims to support a robot in understanding and responding to the current context of the space, in what follows we confine the description of the physical space to the two kinds of networks that enhance the mobility and environment-sensing capability of a mobile robot.

A. Localization Sensor Network

The localization network attempts to mimic human behavior in localizing an object. Inaccurate but computationally simple localization with wide coverage (e.g., a ZigBee network) is carried out at the initial stage of a task; at the terminal stage, accurate but relatively complex localization is used to position the object precisely. Ahn and Yu provide a more detailed description of the wireless localization network intended for the initial stage [13]. In this section, we describe a precision localization sensor network that is used to narrow the uncertainty remaining after the initial stage.

As Casas and colleagues pointed out in [14], several constraints should be satisfied if a localization technique is to be used in real-world robotic applications. We reviewed various issues related to building a precision localization network, including position accuracy, jitter, coverage, response time, availability, deployment, and cost. Although a localization network can be realized by various means [15], [16], we decided that optical triangulation is most suitable for our needs due to its fast response under line-of-sight conditions, high accuracy, negligible jitter, and low cost. The remaining constraints could be met by making systematic use of the localization system.
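To make the optical triangulation principle concrete, the following minimal C++ sketch recovers a robot pose from two ceiling beacons observed by an upward-looking camera. It assumes a pinhole camera with a perfectly vertical optical axis and a known camera-to-ceiling height; all names and numeric values are illustrative and are not taken from the sensor described below.

// Hedged sketch (not the paper's implementation): recovering a robot's pose
// from two ceiling-mounted IR beacons at known world positions, seen by an
// upward-looking camera with a vertical optical axis.
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.141592653589793;

struct Point2 { double x, y; };

// Map a pixel offset from the principal point to a metric offset on the
// ceiling plane: offset = (height / focal_px) * pixel.
static Point2 pixelToCeiling(Point2 px, double height, double focalPx) {
    return { px.x * height / focalPx, px.y * height / focalPx };
}

// beaconW: known world positions of the two beacons; beaconI: their detected
// image positions relative to the principal point. Outputs robot (x, y) and
// heading theta in radians.
static void estimatePose(const Point2 beaconW[2], const Point2 beaconI[2],
                         double height, double focalPx,
                         double& x, double& y, double& theta) {
    Point2 c0 = pixelToCeiling(beaconI[0], height, focalPx);
    Point2 c1 = pixelToCeiling(beaconI[1], height, focalPx);
    // Heading: rotation aligning the beacon pair as seen by the camera with
    // the same pair in world coordinates.
    double aw = std::atan2(beaconW[1].y - beaconW[0].y, beaconW[1].x - beaconW[0].x);
    double ai = std::atan2(c1.y - c0.y, c1.x - c0.x);
    theta = aw - ai;
    // Position: world = R(theta) * camera + t, so t = beaconW[0] - R * c0.
    double cs = std::cos(theta), sn = std::sin(theta);
    x = beaconW[0].x - (cs * c0.x - sn * c0.y);
    y = beaconW[0].y - (sn * c0.x + cs * c0.y);
}

int main() {
    Point2 bw[2] = { {1.0, 2.0}, {2.0, 2.0} };  // beacons 1 m apart (world, m)
    Point2 bi[2] = { {-120, 40}, {80, 35} };    // detections (pixels)
    double x, y, th;
    estimatePose(bw, bi, 1.5, 400.0, x, y, th);
    std::printf("pose: x=%.2f m, y=%.2f m, theta=%.1f deg\n", x, y, th * 180.0 / kPi);
}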

The developed localization sensor constituting the precision localization network comprises two infrared beacon modules attached to the ceiling and a detector mounted on top of a mobile robot. The detector includes an image sensor, an infrared bandpass filter, and a DSP for image processing and location calculation. The detector is oriented upward, with its optical axis perpendicular to the ground; to ensure a maximal field of view, we employed a wide-angle camera lens. Each beacon module contains an infrared LED whose on-off status is wirelessly controllable via a unique identifier. Chae and colleagues provide a further description of the operational principles and systematic use of the proposed localization system for mobile robot navigation [17]. The localization sensor achieves position and orientation errors of less than ±5 cm and ±1°, respectively, with the repeatability error (jitter) confined to less than 1 mm and 0.01°. The maximum update rate of the location data is 30 Hz, and the coverage area is approximately a circle of radius 2 m at a ceiling height of 2 m.

B. Wireless Sensor Network

The sensor network platform used in the construction of the ubiquitous robotic space is called u-Clips and is composed of sensor node hardware and sensor network protocol software. We developed the u-Clips protocol stack on top of the ZigBee network protocol; under the ZigBee addressing scheme, data from sensor nodes can reach the robot over a tree topology without the need for a routing table, as illustrated in Fig. 2(a). Moreover, tree routing requires little memory and significantly reduces route discovery overhead compared to table-driven mesh routing algorithms. As shown in Fig. 2(b), however, if the robot moves out of the radio transmission range of its current parent node, it loses connectivity to the entire sensor network and can no longer gather environmental status data.
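To illustrate why tree routing needs no routing table, the following sketch computes a ZigBee-style next hop from addresses alone. The Cskip block sizes follow the standard distributed address assignment; the Cm, Rm, and Lm values and all names are illustrative assumptions rather than u-Clips parameters.

// Hedged sketch of ZigBee-style tree routing: the next hop is derived from
// the destination address and the router's own address block, with no table.
#include <cstdint>
#include <cstdio>

constexpr int64_t Cm = 4;  // max children per router   (assumed)
constexpr int64_t Rm = 4;  // max router children       (assumed)
constexpr int64_t Lm = 5;  // max tree depth            (assumed)

// Size of the address sub-block handed to each router child at depth d.
static int64_t cskip(int64_t d) {
    if (Rm == 1) return 1 + Cm * (Lm - d - 1);
    int64_t p = 1;
    for (int64_t i = 0; i < Lm - d - 1; ++i) p *= Rm;  // Rm^(Lm-d-1)
    return (1 + Cm - Rm - Cm * p) / (1 - Rm);
}

// Next hop at a router with address A and depth d >= 1 for destination D:
// if D lies inside our block it belongs to a descendant, so forward to the
// child router whose sub-block contains it; otherwise send it to the parent.
static int64_t nextHop(int64_t A, int64_t d, int64_t D, int64_t parent) {
    int64_t block = cskip(d - 1);  // size of the block this router owns
    if (D > A && D < A + block)
        return A + 1 + ((D - (A + 1)) / cskip(d)) * cskip(d);
    return parent;
}

int main() {
    // Router at address 1, depth 1, parent = coordinator (address 0).
    std::printf("next hop toward node 7: %lld\n", (long long)nextHop(1, 1, 7, 0));
}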


Fig. 2. ZigBee network: (a) Tree routing algorithm. (b) Mobility problem. (c) Network discovery. (d) Rejoining the network.

Fig. 4. The semantic space consists of two main function modules: CASE module and DR module.

Fig. 3. Proposed mobility supporting procedure.

The current ZigBee specification (version 1.0) does not support node mobility. To support robot mobility, we propose a scheme in which a robot equipped with a sensor node (hereinafter referred to as a mobile node) behaves as an end device in the ZigBee network. Fig. 3 illustrates the proposed mobility supporting procedure of a u-Clips sensor network based on standard ZigBee protocol primitives. The procedure consists of a link failure detection phase, a network discovery phase [Fig. 2(c)], a rejoin phase [Fig. 2(d)], and an announcement phase.

1) Link Failure Detection Phase: The mobile node (a child node) periodically sends a "hello" message to its parent node as a means of detecting link failure. If the mobile node receives no acknowledgement after MaxRetries attempts, the network (NWK) layer of the mobile node notifies its application (APL) layer that the link to the parent node is broken by calling the NLDE-DATA.confirm primitive with a parameter value of NO_ACK.

2) Network Discovery Phase: The application layer discovers potential parent nodes around the mobile node by calling the NLME-NETWORK-DISCOVERY.request primitive. During this phase, the application layer obtains a list of potential parent nodes together with their PAN IDs, tree levels, link quality indications (LQIs), and other attributes. Among the nodes with the same PAN ID, the application layer then chooses as its parent the node that provides a sufficient LQI value and, at the same time, the smallest tree level, which reduces the routing cost.

3) Rejoin Phase: After selecting a new parent node, the mobile node requests to rejoin the network by calling the NLME-JOIN.request primitive, and the network layer of the mobile node transmits an ASSOCIATION_REQUEST command to the new parent node. The parent node accepts the association request if its number of current child nodes is less than a predefined maximum. If the parent node accepts the request, it assigns a new 16-bit network address to the mobile node and sends back an ASSOCIATION_RESPONSE command. Otherwise, if the mobile node fails to rejoin the sensor network, it repeats the discovery and rejoin phases until a new parent node is found.

4) Announcement Phase: After rejoining, the mobile node broadcasts its new 16-bit network address to the entire sensor network using a DEVICE_ANNOUNCE command. After the announcement, sensor data can again be routed seamlessly to the mobile node.

The mobility supporting procedure may occasionally fail for unpredictable reasons, such as continuous movement of the robot, radio interference during discovery, or a low battery level at the new parent node. Currently, such failures are handled by retrying the discovery and rejoin phases. During the retry period, some periodic data sent from the sensor nodes may be lost and are not retransmitted; critical sensor data, such as data from a surveillance sensor node, are however retransmitted immediately after completion of the mobility supporting procedure using an application-level acknowledgement mechanism.
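The four phases can be summarized as a small state machine on the mobile node. The following C++ sketch mirrors the procedure above; the ZigBee primitives appear only as comments, and the thresholds, types, and stubs are our assumptions.

// Hedged sketch of the four-phase mobility procedure as a state machine.
// Parent selection follows the text: same PAN ID, sufficient LQI, and the
// smallest tree level among the candidates.
#include <cstdint>
#include <optional>
#include <vector>

struct Candidate { uint16_t panId; uint8_t treeLevel; uint8_t lqi; uint16_t addr; };

enum class Phase { Connected, Discover, Rejoin, Announce };

class MobileNode {
public:
    // Link failure detection: no ACK for the periodic "hello" after
    // MaxRetries attempts (NLDE-DATA.confirm reports NO_ACK).
    void onHelloNoAck(int retries) {
        if (retries >= kMaxRetries) phase_ = Phase::Discover;
    }

    // Drive the remaining phases; 'scan' is the neighbor list returned by
    // the discovery step (NLME-NETWORK-DISCOVERY.request).
    void step(const std::vector<Candidate>& scan) {
        switch (phase_) {
        case Phase::Discover:
            if (auto p = pickParent(scan)) { parent_ = *p; phase_ = Phase::Rejoin; }
            break;
        case Phase::Rejoin:  // NLME-JOIN.request -> ASSOCIATION_REQUEST
            phase_ = associate(parent_) ? Phase::Announce : Phase::Discover;
            break;
        case Phase::Announce:  // broadcast DEVICE_ANNOUNCE with new address
            announceNewAddress();
            phase_ = Phase::Connected;
            break;
        case Phase::Connected:
            break;
        }
    }

private:
    std::optional<Candidate> pickParent(const std::vector<Candidate>& scan) const {
        std::optional<Candidate> best;
        for (const auto& c : scan) {
            if (c.panId != myPan_ || c.lqi < kMinLqi) continue;
            if (!best || c.treeLevel < best->treeLevel) best = c;  // lowest level
        }
        return best;
    }
    bool associate(const Candidate&) { return true; }  // stub: request accepted
    void announceNewAddress() {}                       // stub
    static constexpr int kMaxRetries = 3;      // assumed value
    static constexpr uint8_t kMinLqi = 100;    // assumed threshold
    uint16_t myPan_ = 0x1234;                  // assumed PAN ID
    Candidate parent_{};
    Phase phase_ = Phase::Connected;
};

int main() {
    MobileNode node;
    node.onHelloNoAck(3);                       // link to parent lost
    node.step({ {0x1234, 2, 180, 0x0002} });    // discover and pick parent
    node.step({});                              // rejoin
    node.step({});                              // announce
}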

III. THE SEMANTIC SPACE

The semantic space consists of two main modules: context-aware service execution (CASE) and dynamic reconfiguration (DR). CASE interprets sensor data collected from the u-Clips nodes to understand the current situation and to generate situation-aware action plans. DR handles the dynamic behavior of the URS by adding or removing URS services in response to dynamic changes in the physical space, such as device deployment or removal. Fig. 4 shows the overall structure of the semantic space and its interfaces to the physical space.

A. Context-Aware Service Execution (CASE)

The CASE module builds on three symbolic data models: robot service knowledge for application-specific situation understanding and command generation, a semantic space map for spatial context description, and situation descriptions posted onto a situation board for dynamic context description.

1) Semantic Space Map: This model, specified as an OWL ontology, describes the structural and semantic configuration of the physical space. The semantic space map is a knowledge base containing symbols and formal sentences that describe a specific physical position or region, referenced when processing location-specific services and interpreting location-dependent contexts.


Fig. 5. CIN for deriving atmospheric mood from temperature and humidity. This CIN produces situation descriptions such as (⟨Warmth, LOC1⟩, HIGH), (⟨Dampness, LOC1⟩, HIGH), and (⟨AtmosphericMood, LOC1⟩, STICKY).

Fig. 6. Sample operation models in the semantic space.

The following example, specified in N3 syntax [18], states that the region bounded by the rectangle (0, 45, 30, 75) is a restricted meeting room.

region001 a meeting-room ;
    a restricted-area ;
    x "0" ;
    y "45" ;
    width "30" ;
    height "30" .

2) Situation Descriptions: This model represents the dynamic status of the physical space, for example, the robot position or the atmospheric status of a region. Situation descriptions are generated by a Context Interpretation Network (CIN). A CIN consists of context interpreters arranged in a hierarchically layered structure. Fig. 5 illustrates a simple example of a CIN that generates descriptions of the atmospheric mood of a region from temperature and humidity sensor data. In the figure, the circular nodes are called context interpretation (ci) nodes. Each ci node is implemented by context interpretation rules that symbolize sensor data or synthesize elementary contexts into a more elaborate context. In our implementation, the CIN is dynamically constructed by the DR module upon installation or removal of the sensor devices that comprise the u-Clips sensor network.

A situation description is represented as a pair (k, v), where k is a context descriptor and v its value. For example, a temperature of 26 °C at a location identified by the symbol LOC1 is represented as the pair (⟨Temperature, LOC1⟩, 26). Every situation description generated by the CIN is posted onto a store called the situation board. The situation board is a single reference point from which one can fetch all situation descriptions pertaining to the current URS status. It implements a simple update scheme that reflects the changing status of the URS by overwriting the values of context descriptors. For example, if the temperature at LOC1 changes, the value of the context descriptor ⟨Temperature, LOC1⟩ is updated to the new value. This value-updating scheme can handle conflicting contexts to some degree, since it replaces past status on the situation board with new status; in our implementation, the CIN and the robot service knowledge handle more complicated conflicts with more deliberative rules based on common sense and domain knowledge.

3) Robot Service Knowledge: This model consists of the rules and ontologies required to detect application-specific situations and trigger appropriate action commands. The following is a sample of security service knowledge.

r1: patrol(?l) ← unsafe(?l).
r2: unsafe(?l) ← intrusion(?e, ?l).
r3: intrusion(?e, ?l) ← entrance-at(?e, ?l) ∧ restricted-area(?l) ∧ (¬actor(?e, ?a) ∨ (actor(?e, ?a) ∧ ¬admitted(?l, ?a))).

Rule r3 is triggered when an unidentified or unauthorized person enters a restricted area. If r3 fires, the area ?l is declared unsafe and a command for the robot to patrol the area is generated. CASE, in turn, identifies the coordinates of the area by referring to the semantic space map and issues a command packet to the physical space to make the robot move to the area in question.
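The situation board of Section III-A.2 can be pictured as a keyed store in which posting a description overwrites any previous value of the same context descriptor. The following minimal sketch illustrates this update scheme; the encoding of a descriptor as a (context, location) pair of strings is our assumption.

// Hedged sketch of the situation board: a keyed store of situation
// descriptions (k, v) where posting with an existing context descriptor k
// simply replaces the previous value, as described in the text.
#include <iostream>
#include <map>
#include <string>
#include <utility>

using Descriptor = std::pair<std::string, std::string>;  // e.g. {"Temperature", "LOC1"}

class SituationBoard {
public:
    // Post a new description; a newer value replaces the older one.
    void post(const Descriptor& k, const std::string& v) { board_[k] = v; }

    // Fetch the current value for a descriptor, if any.
    const std::string* fetch(const Descriptor& k) const {
        auto it = board_.find(k);
        return it == board_.end() ? nullptr : &it->second;
    }

private:
    std::map<Descriptor, std::string> board_;
};

int main() {
    SituationBoard sb;
    sb.post({"Temperature", "LOC1"}, "26");  // (<Temperature, LOC1>, 26)
    sb.post({"Temperature", "LOC1"}, "27");  // the update replaces the old value
    if (auto* v = sb.fetch({"Temperature", "LOC1"}))
        std::cout << "Temperature@LOC1 = " << *v << "\n";
}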

B. Dynamic Reconfiguration (DR)

The URS is highly dynamic in that devices can be installed into or removed from the physical space at any time; such changes can create unsupported services and idle devices. Unsupported services are services that can no longer be executed because the corresponding devices have been removed; idle devices are devices not currently used by any service. Both should be prevented, since they impair the integrity of URS services. We therefore designed a profile-based service reconfiguration and service discovery mechanism to address the problem.

1) Profile-Based Service Reconfiguration: In the URS, every device in the physical space periodically sends out a device announcement packet (DAP) containing its unique device ID. The DR module collects DAPs to extract the IDs of all deployed devices and builds a device capability map by retrieving the capability profiles of these devices. Whenever a change in the device capability map is detected, it renews the service configuration by adding or removing the corresponding ci nodes, CINs, or robot service knowledge.

2) Service Discovery: Service reconfiguration requires the ability to discover service components based on specific device profiles; we adopted means-end planning for this purpose [19]. To apply means-end planning, we first modeled each sensor as an operation with only outputs, and the robot service knowledge and ci nodes as operations with both inputs and outputs. We then performed means-end planning to build a chain of CINs and robot service knowledge by matching the inputs and outputs of the modeled operations. Fig. 6 shows an example of a composed service chain. In this chain, if a newly deployed device is identified as a TemperatureSensor, its output, Temperature, is matched to the input of WarmthInterpreter, which allows the interpreter to determine whether or not the corresponding service is deployable.

C. Implementation of the Semantic Space

The functions of the semantic space (processing the world model, representing robot service knowledge, and performing service discovery by means-end planning) are all implemented using an embedded rule engine called eBossam [20]. eBossam, developed in C++, is a forward-chaining production rule engine built on the RETE algorithm [21]. We installed most of the semantic space functionality on an ARM-based board embedded inside a robot. The device profile repository and service repository, however, were deployed as Java servlet applications on the URS server to make the services publicly accessible as web applications. The semantic space uses simple TCP sockets to interface with the physical space, which includes the robot and the u-Clips sensor network.
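To make the input-output matching of the service discovery step (Section III-B.2) concrete, the following sketch treats sensors, ci nodes, and service knowledge as operations with typed inputs and outputs and checks whether a chain is deployable. The names follow the example of Fig. 6; the fixed-point matching policy is our simplification of the planner in [19].

// Hedged sketch of the means-end matching step: a chain is deployable when
// every declared input can be produced, starting from sensor outputs.
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Operation {
    std::string name;
    std::vector<std::string> inputs;   // empty for sensors
    std::vector<std::string> outputs;
};

// Repeatedly activate operations whose inputs are all available; the chain
// is deployable if every operation eventually activates.
static bool deployable(const std::vector<Operation>& ops) {
    std::set<std::string> produced;
    std::set<std::string> done;
    bool grew = true;
    while (grew) {
        grew = false;
        for (const auto& op : ops) {
            if (done.count(op.name)) continue;
            bool ready = true;
            for (const auto& in : op.inputs)
                if (!produced.count(in)) { ready = false; break; }
            if (!ready) continue;
            for (const auto& out : op.outputs) produced.insert(out);
            done.insert(op.name);
            grew = true;
        }
    }
    return done.size() == ops.size();
}

int main() {
    std::vector<Operation> chain = {
        {"TemperatureSensor", {}, {"Temperature"}},
        {"WarmthInterpreter", {"Temperature"}, {"Warmth"}},
    };
    std::cout << (deployable(chain) ? "deployable\n" : "missing inputs\n");
}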


Fig. 9. (a) 3-D model generated from the 2-D model of Fig. 8. (b) The corresponding 3-D texture model of (a).

TABLE II
TIME TO GATHER AND PROCESS DATA FOR THE 3-D MODEL IN FIG. 9(A)

Fig. 7. (a) Experimental device for acquiring spatial snapshots. The data acquisition device consists of an LMS200 laser scanner, a pair of IEEE-1394 Dragonfly cameras, and a precision localization sensor (see Section II-A). (b) Close-up view of a pair of IEEE-1394 cameras.

Fig. 8. Data flow for building 2-D and 3-D models of an indoor environment.

IV. THE VIRTUAL SPACE

The primary role of the virtual space is to provide the user with a 2-D or 3-D virtual model of the physical space, enabling the user to investigate and interact with the physical space in an intuitive way. Fusion of range and image data [22] and 3-D reconstruction from a sequence of 2-D images [23] are two popular methods for modeling a 2-D or 3-D environment. In the proposed approach, the Scalable Vector Graphics (SVG) and Virtual Reality Modeling Language (VRML) formats are used for the 2-D and 3-D maps, respectively. In the following, we describe the procedures for constructing 2-D and 3-D models of the physical space.

Fig. 7 shows the experimental device for acquiring spatial snapshots. One laser scanner, two IEEE-1394 cameras, and the IR landmark-based localization sensor described in Section II-A comprise the data acquisition device. The sensors are aligned along the vertical axis through the geometric center of each sensor to simplify registration between range and image data. As the data acquisition device moves, the position and orientation at each acquisition point are determined by the localization sensor of Section II-A. We collected the data so that each pair of successive snapshots overlaps. We then applied a line-based 2-D geometry estimation technique, employing least-squares estimation to extract line segments while suppressing noisy samples (see the sketch below).
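As one concrete reading of this step, the following sketch performs an orthogonal least-squares line fit and suppresses noisy samples by discarding points with large residuals and refitting. The paper does not specify its exact estimator; this is a conventional choice, and all names and values are illustrative.

// Hedged sketch of the line extraction step: orthogonal least-squares fit
// to a cluster of range points, with one residual-based rejection pass.
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };
struct Line { double nx, ny, d; };  // nx*x + ny*y = d, (nx, ny) unit normal

static Line fitLine(const std::vector<Pt>& pts) {
    double mx = 0, my = 0;
    for (const auto& p : pts) { mx += p.x; my += p.y; }
    mx /= pts.size(); my /= pts.size();
    double sxx = 0, sxy = 0, syy = 0;  // scatter matrix of centered points
    for (const auto& p : pts) {
        sxx += (p.x - mx) * (p.x - mx);
        sxy += (p.x - mx) * (p.y - my);
        syy += (p.y - my) * (p.y - my);
    }
    // Orientation of the total-least-squares line; the normal is orthogonal.
    double theta = 0.5 * std::atan2(2.0 * sxy, sxx - syy);
    double nx = -std::sin(theta), ny = std::cos(theta);
    return { nx, ny, nx * mx + ny * my };
}

// Fit, drop points farther than 'tol' from the line, and refit on the rest.
static Line robustFit(const std::vector<Pt>& pts, double tol) {
    Line l = fitLine(pts);
    std::vector<Pt> kept;
    for (const auto& p : pts)
        if (std::fabs(l.nx * p.x + l.ny * p.y - l.d) <= tol) kept.push_back(p);
    return kept.size() >= 2 ? fitLine(kept) : l;
}

int main() {
    std::vector<Pt> scan = { {0, 0}, {1, 0.02}, {2, -0.01}, {3, 0.5}, {4, 0.01} };
    Line l = robustFit(scan, 0.1);  // the sample at x = 3 is an outlier
    std::printf("line: %.3f x + %.3f y = %.3f\n", l.nx, l.ny, l.d);
}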

Fig. 8 illustrates the data flow for building 2-D and 3-D models of an indoor environment. First, a 2-D map is constructed from the range data obtained from the laser scanner; line features are extracted from the point clouds of range data. Next, the coordinates of the endpoints of each extracted line are used to generate a table [the Table Metric Map (TMM) in Fig. 8] in which the 2-D plane geometry is represented by line-based coordinates with indexed vertices, each line being represented by its start- and end-point coordinates. A block metric map (BMM) can be obtained immediately from the TMM. Finally, the 2-D metric map is stored in SVG format for interoperability with web-based applications.

The 3-D model is derived from the aforementioned 2-D map and consists of a single floor plane and model patches that represent walls. The developed 3-D model is stored in VRML format. Fig. 9(a) illustrates an example of a 3-D model constructed from the 2-D model of Fig. 8. We captured consecutive images so that each pair of images overlaps, and then applied image registration, warping, and stitching to the consecutive images; a cropping operation was also applied to the stitched image to generate a texture image of the wall. Fig. 9(b) shows the textured 3-D model corresponding to Fig. 9(a).

The time needed to gather and process the data to construct a 2-D map of a 23.11 m × 25.52 m space (Fig. 9) is summarized in Table II. It should be noted that the time required to generate 2-D and 3-D maps depends on the size and structure of the space in question. The processing time includes geometric data processing for line extraction, texture stitching, and cropping of the corresponding geometric data. After data processing is completed, the TMM shown in Fig. 8 is built; the 2-D and 3-D maps are then generated within a few seconds.

V. IMPLEMENTATION

This section describes how the proposed URS can be applied to a real environment. For this application, we developed a prototype robotic security service for monitoring an office. We implemented the proposed URS on the ground floor of our building, an area measuring 23.11 m × 25.52 m.

A. Robot Security Application Utilizing the Proposed Ubiquitous Robotic Space

In general, a security robot is expected to carry out tasks related to detecting an abnormal situation and reacting to it (an overview of security robot systems is given in [24]). These two tasks may involve various subtasks, including zone-to-zone robot navigation across wireless network boundaries, remote perception of the environment, inference for understanding the situation, generation of robot commands relevant to the situation, and relaying of user requests for controlling the system through an appropriate user interface.


TABLE III
DATA TYPE, LENGTH, AND DESCRIPTION OF EACH FIELD OF THE EMPLOYED MESSAGE

Fig. 10. Block diagram of the prototype robot security service.

We divided the security area of the robot into several subareas (zones), taking into account the radio transmission range of the employed access point (AP) switches and the spatial structure of the environment. Each subarea has its own Cartesian coordinate system; the relationships between the coordinate systems are stored in the URS server. In this system, the robot acts as a data gateway, gathering environmental data (from the u-Clips sensor network of Section II-B) and location data (from the localization network of Section II-A) and transferring them to the URS server over a wireless connection to the Internet. Due to this wireless communication, however, the sensor and location data of the robot may disappear for a few seconds (depending on the local distribution of radio signals) when the robot crosses the boundary between neighboring subareas. To ensure seamless data communication, we employed a wireless distribution system (WDS) configuration that provides coverage beyond what a single access point can offer [24]. Multihopping and mobility-supporting routing algorithms also help the robot retrieve remote environment data irrespective of its current location, enhancing the robot's perception inside the ubiquitous robotic space.

The semantic space is responsible for monitoring the physical space and controlling the robot according to the given service scenario. In this scenario, irregular situations such as intrusions or a low robot battery level are detected through contextual information processing using eBossam [20]; upon detection of an irregularity, the semantic space issues a new navigation task to the robot, and the irregular situation is reported to the user via a short message service (SMS). As described in Section III, we encoded the robot service knowledge (robot security in our implementation) using an OWL ontology and forward-chaining production rules. Control rules for the patrol mode, the patrol path, event handling based on context interpretation, and user call management have also been encoded in the semantic module. In addition, we implemented exception handling for the physical elements in the semantic module to accommodate abnormal behavior of the respective elements (e.g., unidentified navigation errors) [24].

Fig. 10 illustrates a block diagram of the prototype robotic security service. For a different robot service, one can realize the intended service by replacing the robot service knowledge with the corresponding domain and task knowledge. In this case, the same inference engine (eBossam) and system modules (Fig. 10) can still be used, since the internal structure of the ubiquitous robotic space does not change. This property allows the proposed robotic service framework to encompass a wide variety of applications.
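As noted at the beginning of this subsection, each zone has its own Cartesian coordinate system and the URS server stores the relationships between them. The following sketch expresses a pose measured in one zone's frame in a neighboring zone's frame; the SE(2) representation and all names and values are our assumptions rather than the system's actual data model.

// Hedged sketch: mapping a robot pose from zone A's frame to zone B's frame
// using a stored 2-D rigid transform (rotation + translation).
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.141592653589793;

struct Pose2 { double x, y, theta; };  // pose within one zone's frame

struct ZoneTransform {                 // maps zone A coordinates to zone B
    double tx, ty, rot;
    Pose2 apply(const Pose2& p) const {
        double c = std::cos(rot), s = std::sin(rot);
        return { c * p.x - s * p.y + tx,
                 s * p.x + c * p.y + ty,
                 p.theta + rot };
    }
};

int main() {
    ZoneTransform aToB{ 10.0, 0.0, kPi / 2 };  // illustrative relationship
    Pose2 inA{ 1.0, 2.0, 0.0 };
    Pose2 inB = aToB.apply(inA);
    std::printf("pose in zone B: (%.2f, %.2f, %.2f rad)\n", inB.x, inB.y, inB.theta);
}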

Along with the robot service knowledge, we implemented the device profile repository using Java servlets and the service repository using an OWL ontology, with the service rules expressed in plain text. We assigned a unique identifier to each u-Clips sensor node and to the robot and stored this identifier information in the URS server; based on the identifier, we retrieved the corresponding device profile from the URS server and loaded it into the rule engine, eBossam, thereby implementing the dynamic reconfiguration described in Section III-B.¹ This property reduces the recognition of a new device or service introduced into the space to the simple task of obtaining the identifier and retrieving the corresponding profile or service from the repositories. This, in turn, enhances the decision and execution capability of the robot, since the configuration of the space and the information therein can be determined instantly.

B. Data Communication Interface

We defined the data communication protocols used between the three spaces. A message exchanged between the spaces consists of a six-byte header and a data area of variable length. Table III lists the data types and descriptions of the respective message fields. We defined nine message IDs according to the nature of the corresponding data: UM_SERVER_MANAGE, UM_ROBOT_DATA, UM_ROBOT_CONTROL, UM_VIDEO, UM_SENSOR, UM_SERVICE, UM_MAP, UM_OBJECT, and UM_EVENT. For each message ID, we defined various opcodes and accompanying data that specify operations on the corresponding data. The UM_SERVICE, UM_MAP, and UM_OBJECT messages are defined for service download, environment map download, and object manipulation, respectively, but are not used in the present implementation.

Table IV presents the opcodes and corresponding data types used for transferring data between the three spaces. The messages in the table are only a partial set, shown to introduce the structure of the actual messages. For example, Table IV provides only one typical example of UM_ROBOT_CONTROL, although UM_ROBOT_CONTROL comprises fourteen messages related to robot control, battery charging, and recovery of service rules.

1) Data Transfer Between the Physical and Semantic Spaces: The physical space provides three kinds of messages: UM_ROBOT_DATA, UM_SENSOR, and UM_EVENT. Based on the messages provided by the physical space, the semantic space carries out inference using domain knowledge and task specifications. For example, the semantic space decides whether a UM_EVENT message represents an actual intrusion or a false alarm. Depending on the decision, the semantic space issues a UM_ROBOT_CONTROL message to control the robot, whose opcode specifies a basic behavior such as move forward, move backward, or rotate.

¹For the current implementation, we defined only one service profile, the security service. Additional service profiles can be implemented by defining appropriate identifiers and associating them with the OWL ontology and related service rules. For this reason, the UM_SERVICE message of Section V-B has been defined for future reference, although it is not used in the current implementation.
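The six-byte header and variable-length data area described above can be sketched as follows. Since the exact field layout of Table III is not reproduced in the text, the split into a 1-byte message ID, a 1-byte opcode, and a 4-byte payload length is purely an assumption for illustration.

// Hedged sketch of the inter-space message framing: a six-byte header
// followed by a variable-length data area, per the text. The field layout
// is our assumption, not Table III's actual definition.
#include <cstdint>
#include <cstring>
#include <vector>

enum MsgId : uint8_t {
    UM_SERVER_MANAGE, UM_ROBOT_DATA, UM_ROBOT_CONTROL, UM_VIDEO,
    UM_SENSOR, UM_SERVICE, UM_MAP, UM_OBJECT, UM_EVENT
};

// Serialize header + payload into one buffer (byte-order handling omitted;
// a real implementation would fix endianness across platforms).
static std::vector<uint8_t> pack(MsgId id, uint8_t opcode,
                                 const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> buf(6 + payload.size());
    buf[0] = id;
    buf[1] = opcode;
    uint32_t len = static_cast<uint32_t>(payload.size());
    std::memcpy(&buf[2], &len, sizeof(len));
    if (!payload.empty())
        std::memcpy(buf.data() + 6, payload.data(), payload.size());
    return buf;
}

int main() {
    std::vector<uint8_t> data = { 0x01 };             // hypothetical payload
    auto frame = pack(UM_ROBOT_CONTROL, 0x10, data);  // hypothetical opcode
    return frame.size() == 7 ? 0 : 1;
}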


TABLE IV
SELECTED MESSAGE DEFINITIONS FOR TRANSFERRING DATA INSIDE THE UBIQUITOUS ROBOTIC SPACE (P: PHYSICAL SPACE, S: SEMANTIC SPACE, V: VIRTUAL SPACE)

2) Data Transfer Between the Physical and Virtual Spaces: Various status data are transferred from the physical space to the virtual space. Status data with the identifier UM_SERVER_MANAGE, comprising the list of registered users and robots connected to the server, are transferred to the virtual space. UM_ROBOT_DATA messages with the opcodes ROBOT_STATUS and ROBOT_POSITION are transferred periodically to the virtual space, and their contents are displayed at the corresponding positions in the client program (not shown due to limited space). The video stream generated by the camera installed on top of the robot is transferred to the client program using UM_VIDEO messages with the opcodes VIDEO_INFO and VIDEO_DATA.

Messages transferred from the virtual space to the physical space are mostly user commands. Most messages with the identifier UM_SERVER_MANAGE are intended for administrative operations, and messages with the identifier UM_ROBOT_CONTROL allow the user to control the robot manually; the same messages are used when the semantic space controls the robot according to the current situation. UM_VIDEO messages with opcodes other than the above (e.g., VIDEO_START, VIDEO_STOP, VIDEO_CONTROL) are also transferred from the virtual space to the physical space during camera operation.

3) Data Transfer Between the Semantic and Virtual Spaces: When a UM_EVENT message is generated with the opcode EVENT_DETECT, the semantic space carries out inference to determine whether the message indicates an actual intrusion. In the case of an intrusion, the semantic space transfers a UM_EVENT message with the opcode EVENT_ALARM to the virtual space to notify the user. The user can generate various UM_ROBOT_CONTROL messages through the user interface of the client program, and the semantic space simply relays the generated messages to control the robot.
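Putting the event path together, the following sketch shows the semantic space's handling of an EVENT_DETECT: a stubbed inference step either discards a false alarm or raises EVENT_ALARM toward the virtual space and a patrol command toward the physical space. The opcode values and names are illustrative only.

// Hedged sketch of the event flow just described; the inference step is a
// stub standing in for the eBossam rules of Section III.
#include <cstdint>
#include <iostream>

constexpr uint8_t EVENT_DETECT = 0x01;  // assumed opcode value
constexpr uint8_t EVENT_ALARM  = 0x02;  // assumed opcode value

struct Event { uint8_t opcode; int location; };

static bool isActualIntrusion(const Event&) { return true; }  // inference stub

static void onEvent(const Event& e) {
    if (e.opcode != EVENT_DETECT) return;
    if (!isActualIntrusion(e)) return;  // false alarm: discard
    std::cout << "UM_EVENT/EVENT_ALARM -> virtual space (loc " << e.location << ")\n";
    std::cout << "UM_ROBOT_CONTROL -> physical space: patrol loc " << e.location << "\n";
}

int main() { onEvent({ EVENT_DETECT, 1 }); }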

VI. CONCLUSION

In this paper, we proposed the ubiquitous robotic space as a robotic service framework that integrates heterogeneous technologies from information technology and robotics. We described how the robot achieves enhanced perception, recognition, decision, and execution capabilities based on mobility-supporting algorithms, a precision localization network, dynamic reconfiguration, and virtual world modeling. In this framework, replacing the robot service knowledge is sufficient to accommodate a different robotic service, since the internal structure of the proposed ubiquitous robotic space does not change. We also defined communication protocols for exchanging data between the three spaces and implemented a robot security application to demonstrate the feasibility of the proposed architecture. Future work includes introducing radio perception technology such as RFID to further enhance the perception capability of the robot.

REFERENCES

[1] C. Ramos, J. C. Augusto, and D. Shapiro, "Ambient intelligence—The next step for artificial intelligence," IEEE Intell. Syst., vol. 23, no. 2, pp. 15–18, Mar.–Apr. 2008.
[2] T. Sato, T. Harada, and T. Mori, "Environment-type robot system Robotic Room featured by behavior media, behavior contents, and behavior adaptation," IEEE/ASME Trans. Mechatronics, vol. 9, no. 3, pp. 529–534, Sep. 2004.
[3] N. Y. Chong, H. Hongu, K. Ohba, S. Hirai, and K. Tanie, "A distributed knowledge network for real world robot applications," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), Sendai, Japan, Sep. 2004, pp. 187–192.
[4] S. Sugano and Y. Shirai, "Robot design and environment design: Waseda robot-house project," in Proc. SICE-ICASE Int. Joint Conf., Oct. 2006, pp. 31–34.
[5] R. C. Luo, K. L. Su, S. H. Shen, and K. H. Tsai, "Networked intelligent robots through the Internet: Issues and opportunities," Proc. IEEE, vol. 91, pp. 371–382, Mar. 2003.
[6] J. Lee and H. Hashimoto, "Controlling mobile robots in distributed intelligent sensor network," IEEE Trans. Ind. Electron., vol. 50, no. 5, pp. 890–902, Oct. 2003.


[7] D. I. Baker, G. T. McKee, and P. S. Schenker, "Network robotics, a framework for dynamic distributed architectures," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., Sep. 2004, pp. 1768–1771.
[8] R. Lundh, L. Karlsson, and A. Saffiotti, "Plan-based configuration of an ecology of robots," in Proc. IEEE Int. Conf. Robot. Autom., Apr. 2007, pp. 64–70.
[9] B. Kim, N. Tomokuni, K. Ohara, T. Tanikawa, K. Ohba, and S. Hirai, "Ubiquitous localization and mapping for robots with Ambient Intelligence," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), Oct. 2006, pp. 4809–4814.
[10] J. Lee, K. Morioka, N. Ando, and H. Hashimoto, "Cooperation of distributed intelligent sensors in intelligent environment," IEEE/ASME Trans. Mechatronics, vol. 9, no. 3, pp. 535–543, Sep. 2004.
[11] M. Broxvall, M. Gritti, A. Saffiotti, B. Seo, and Y. Cho, "PEIS ecology: Integrating robots into smart environments," in Proc. IEEE Int. Conf. Robot. Autom., May 2006, pp. 212–218.
[12] A. Saffiotti, M. Broxvall, M. Gritti, K. LeBlanc, R. Lundh, J. Rashid, B. S. Seo, and Y. J. Cho, "The PEIS-ecology project: Vision and results," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), Sep. 2008, pp. 2329–2335.
[13] H. Ahn and W. Yu, "Environmental-adaptive RSSI-based indoor localization," IEEE Trans. Autom. Sci. Eng., to be published.
[14] R. Casas, D. Cuartielles, A. Marco, H. J. Gracia, and J. L. Falcó, "Hidden issues in deploying an indoor location system," IEEE Pervasive Comput., vol. 6, no. 2, pp. 62–69, Apr.–Jun. 2007.
[15] J. Borenstein, H. R. Everett, and L. Feng, "Where am I? Sensors and methods for mobile robot positioning," Univ. of Michigan, Ann Arbor, MI, Tech. Rep., 1996.

[16] J. Hightower and G. Borriello, "Location systems for ubiquitous computing," IEEE Computer, vol. 34, no. 8, pp. 57–66, Aug. 2001.
[17] H. Chae, W. Yu, J. Lee, and Y. Cho, "Robot localization sensor for development of wireless location sensing network," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), Beijing, China, Oct. 2006, pp. 37–42.
[18] T. Berners-Lee, Notation 3. [Online]. Available: http://www.w3.org/DesignIssues/Notation3
[19] P. Bresciani, A. Perini, P. Giorgini, F. Giunchiglia, and J. Mylopoulos, "Tropos: An agent-oriented software development methodology," Autonomous Agents and Multi-Agent Systems, vol. 8, no. 3, pp. 203–236, May 2004.
[20] M. Jang and J. Sohn, "Bossam: An extended rule engine for OWL inferencing," in Proc. RuleML, Nov. 2004, vol. 3323, LNCS, pp. 128–138.
[21] C. Forgy, "RETE: A fast algorithm for the many pattern/many object pattern match problem," Artif. Intell., vol. 19, pp. 17–37, 1982.
[22] I. Stamos, L. C. Chen, G. Wolberg, G. Yu, and S. Zokai, "Integrating automated range registration with multiview geometry for the photorealistic modeling of large-scale scenes," Int. J. Comput. Vision (Special Issue), vol. 78, no. 2–3, pp. 237–260, Jul. 2008.
[23] Y. Tan, J. Hua, and M. Dong, "3D reconstruction from 2D images with hierarchical continuous simplices," The Visual Computer: Int. J. Comput. Graphics, vol. 23, no. 9, pp. 905–914, Aug. 2007.
[24] W. Yu, J. Lee, H. Chae, K. Han, Y. Lee, and Y. Ha, "Robot task control utilizing human-in-the-loop perception," in Proc. IEEE Int. Symp. Robot Human Interactive Commun. (RO-MAN), Aug. 2008, pp. 395–400.
