Visual Homing: experimental results on an autonomous robot
P. Arena, S. De Fiore, L. Fortuna, L. Nicolosi, L. Patanè, G. Vagliasindi
Dipartimento di Ingegneria Elettrica, Elettronica e dei Sistemi
Università degli Studi di Catania
Viale A. Doria 6, 95125 Catania, Italy
Email: [email protected]

Abstract— In the past few years, a number of researchers have found that, in a variety of natural environments, different insect species are able to localize the position of their nest by using panoramic visual sensors. Exploiting this localization method, it is possible to implement algorithms that let an agent return to a reference position from any starting point in an arena. This method is called visual homing. If the home is a recharging station, visual homing can help to recharge the battery pack of an autonomous mobile robot. In this paper we propose a new visual homing algorithm implemented on the CNN-based vision system Eye-RIS. Furthermore, experimental results obtained with a roving robot are presented.

I. INTRODUCTION

For many animal species the ability to reach the nest is vital for survival: the nest is a refuge and a food source. Visual stimuli are often used to find the position of the nest in the environment. Visual homing is a navigational technique used by an agent to return to a reference location, called home, from any position in the environment by using visual information about the surroundings. Several approaches to visual homing use an omnidirectional image of the environment, typically obtained through a spherical, parabolic or conical mirror located above a camera pointing upwards [1]. Visual homing algorithms fall into two categories:
• Feature-based: specific features are extracted from the current image to find a match with the home image;
• Image-based: raw images are directly used to find the home position.
A detailed classification of visual homing algorithms can be found in [2]. The feature-based approach includes matching methods, where the best-matching region between corresponding areas is found. These methods can be divided into two categories:
• Matching methods with feature preselection, where the search for the best match is restricted to preselected features, like the snapshot model explained in [3];
• Matching methods without feature preselection (or "flow-based matching methods"), which are based on methods for the computation of optical flow.
Vardy and Möller (2005) in [4] reported that flow-based matching methods and differential flow methods have robust navigation characteristics.

On the other hand, an image-based algorithm compares a reference image with the currently acquired image to find the direction that leads to the home position. The most popular image-based method is Image Warping [5], [6], a "mental" simulation of the distortion (warp) of the current image caused by different movements. The Image Warping algorithm is based on the assumption that all landmarks are placed at the same distance from the reference position. The model calculates all possible deformations of the grey-scale 1D horizon image taken from the current position, in order to move the agent according to the movement vector that produces the image most closely resembling the home image. The measure of resemblance suggested by Franz et al. in [5], [6] is the product between the current image line and the reference image line. Image Warping is a very robust algorithm, but it has a high computational cost, since it has to perform a full search at each processing step. A subclass of image-based methods includes parameter methods, which store only a concise description of the snapshot image (the parameter vector). In order to perform homing tasks, optimization methods are applied to a distance measure between the parameter vectors obtained at the current and target locations. A special instance of parameter models is the average landmark vector (ALV) model presented in [7]. Parameter methods are very important from a theoretical point of view; however, the Fourier-amplitude method suggested by Menegatti et al. in [8] is more suitable for real-world applications. It is based on the computation of the first few Fourier amplitudes of each row of a panoramic image. Fourier amplitudes are invariant against shifts, so the parameter vector has the property of being invariant against rotations of the agent, and the method can thus be used without a compass sensor. Moreover, Stürzl and Mallot (2006) in [9] studied another Fourier-based method built on the warping approach. Other variants of image distance methods are proposed by Zeil et al. in [10]. They studied the properties of outdoor scenes for the purpose of visual homing; instead of extracting the horizon line, the whole panoramic images were unfolded, and each image was compared to the home image using the root mean square (RMS) difference function. In this paper we study RMS-based visual homing algorithms applied to a roving robot.
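To make the Fourier-amplitude idea from [8] concrete, the following minimal NumPy sketch computes a rotation-invariant parameter vector from a panoramic image and a distance between two such vectors. It is an illustration, not the implementation used in the cited work; the function names and the choice of eight amplitudes per row are our own assumptions.

```python
import numpy as np

def fourier_amplitude_vector(panorama, n_coeffs=8):
    """Parameter vector built from the first few Fourier amplitudes of each row.

    A rotation of the agent circularly shifts every row of a panoramic
    image, and the amplitude spectrum of a row is invariant to circular
    shifts, so the resulting vector is invariant to agent rotations.
    """
    rows_fft = np.fft.fft(panorama.astype(np.float64), axis=1)
    amplitudes = np.abs(rows_fft[:, 1:n_coeffs + 1])  # skip the DC term
    return amplitudes.ravel()

def parameter_distance(current, home, n_coeffs=8):
    """Euclidean distance between the parameter vectors of two panoramas."""
    return np.linalg.norm(fourier_amplitude_vector(current, n_coeffs)
                          - fourier_amplitude_vector(home, n_coeffs))
```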

Fig. 1. The experimental platform is composed of the omnidirectional vision system, a rover robot and a wireless communication system.


Fig. 2. The mirrors used in the tests and examples of the acquired images: spherical mirror (a), (c) and conic mirror (b), (d).

To obtain real-time navigation capabilities, a cellular visual processor, named Eye-RIS, has been used to process panoramic images. Finally, a new visual homing algorithm is presented and compared with the other algorithms implemented on the roving robot.

II. EXPERIMENTAL SETUP

The design of an autonomous mobile robot is a critical task, and the power supply is a crucial issue. Typically a mobile robot is equipped with one or more battery packs that must be recharged periodically. During long-lasting experiments the robot should recharge its batteries autonomously; if the home is a recharging station, visual homing can help to solve this task. The experimental platform consists of an FPGA-based board, called Eye-RIS, that implements the visual homing algorithm, a mobile robot, an omnidirectional vision system and a wireless communication module (Fig. 1). A computer can be used to back up the data for post-processing analysis. The AnaFocus Eye-RIS Vision System [11] has been developed for real-time vision applications, and visual homing algorithms can easily be developed on the board. The vision chip optically acquires images, extracts the relevant information through a parallel processing mechanism and takes decisions depending on the result of the complete process. Eye-RIS is a multiprocessor system made up of two processors:
• the AnaFocus ACE16kv2 Focal Plane Processor (FPP);
• the Altera NIOS II digital microprocessor synthesized on FPGA.
The ACE16kv2 FPP acts as an image coprocessor. It acquires and processes images, extracting the relevant information from the scene being analyzed, usually without intervention of the Nios II processor. The ACE16kv2 is massively parallel, performing operations simultaneously in all of its 16k cells. It mainly processes images in the analog domain; consequently, A/D conversion is not required. Nios II is a "soft-core" FPGA-synthesizable digital microprocessor. "Soft-core" means that the CPU core is offered in "soft" design form (not fixed in silicon), so its functionalities, such as the number and type of peripherals or the amount of memory, can easily be modified according to the specific task [12]. It controls the execution flow and processes the information provided by the FPP. Generally, this information is not an image, but characteristics of the images analyzed by the ACE16kv2. Thus, no image transfers are usually needed in the Eye-RIS. All these features allow

the Eye-RIS Vision System to process images at ultra-high speed with very low power consumption. The omnidirectional vision system consists of a spherical (Fig. 2.a) or a conic mirror (Fig. 2.b). A typical image acquired with the spherical mirror is shown in Fig. 2.c. Notice that a large part of the image contains the reflection of the robot body, so the information useful for the visual homing algorithm is reduced. Nevertheless, the obtained results, as reported below, are satisfactory; mitigating this problem with a conic mirror therefore further improves the system performance (see Fig. 2.d). The conic mirror used in the experiments has a radius and a height of 2 cm. The vertical axis of the mirror is aligned with the optical axis of the camera. The projection of the scene onto the focal plane is determined by the shape of the mirror and by the camera configuration parameters (distance between the mirror and the lens, focal length, and others). This setup permits the acquisition of a 360° view of the environment in a single step, without moving the camera or the robot. In this way no odometry errors are introduced, as would happen with the complete rotation of the robot otherwise needed to acquire a panoramic image. The robot used for the experiments is a Lynxmotion 4WD2 robot [13]. It is a classic four-wheel-drive rover controlled through a differential drive system. The robot is equipped with four DC 50:1 gear-head motors and with an RF radio telemetry module for remote control operations and sensor acquisition. The wireless communication system is used to interface the FPGA board with the robot in order to drive the motors.
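As a sketch of how an omnidirectional image of this kind can be unfolded into a panoramic view (the operation used, e.g., by Zeil et al. [10]), the following Python code samples the useful annulus of the mirror image along radial directions. The image centre and the inner/outer radii are assumptions that would have to be calibrated for the actual mirror and camera; this is an illustration, not the on-board implementation.

```python
import numpy as np

def unwrap_panorama(img, center, r_min, r_max, n_angles=360, n_rings=32):
    """Unfold an omnidirectional image into an (n_rings x n_angles) panorama.

    Samples the annulus between r_min and r_max (the useful mirror region,
    excluding the robot-body reflection near the centre) along n_angles
    radial directions using nearest-neighbour lookup.
    """
    cx, cy = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_min, r_max, n_rings)
    pano = np.empty((n_rings, n_angles), dtype=img.dtype)
    for i, r in enumerate(radii):
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        pano[i, :] = img[ys, xs]
    return pano
```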

Fig. 3. Description of the framework used during the visual homing experiments.

Fig. 4. The flow diagram of the implemented algorithms. The robot performs the action that minimizes the difference D between the current and the home image, and stops when D is smaller than a threshold (THS).

It is based on the ER400TRS radio transceiver, which provides a serial interface with the host (the Eye-RIS board in this work). The Nios II microprocessor is devoted to the execution of the navigation algorithm. Fig. 3 shows the links between the different blocks that constitute the presented framework.

III. ROBOT NAVIGATION ALGORITHM

In this section the developed visual homing algorithms are discussed. The first step consists in capturing the home image, placing the robot near the recharging station. The homing algorithms should then be able to drive the robot towards the home position, following a path that minimizes the position error (Fig. 4). The difference among the algorithms lies in the way this operation is performed. In particular, three algorithms are presented here:
• RMS based;
• RMS and rotation based;
• XOR and rotation based.
Videos of the presented experiments are available on the web [14].

A. RMS algorithm

The RMS algorithm computes the difference between the current image and the home image by means of the root mean square (RMS) value:

RMS = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( I_1(i) - I_2(i) \right)^2}    (1)

where I_1 and I_2 are the current and the reference image, respectively, and N is the total number of pixels. The robot performs a sequence of actions to find the image with the smallest RMS value. Each action consists of a rotation, in order to capture an image with a different robot orientation. When the RMS value of the captured image is smaller than that of the previous step, the robot moves in that direction. When the current position is classified as a local minimum (i.e. no direction produces a decrease of the RMS), the direction of the minimum RMS among the processed images is chosen. Finally, when the RMS value falls below a given threshold, the algorithm stops, as the home position is assumed to be reached.
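A minimal Python/NumPy sketch of the RMS measure of Eq. (1) and of one step of the greedy search described above is given below. The `robot` object, with its `rotate_to()`, `capture()` and `move_forward()` methods, is a hypothetical stand-in for the rover and Eye-RIS drivers, not an API from this work.

```python
import numpy as np

def rms_difference(current, home):
    """Root-mean-square difference of Eq. (1) between two grey-level images."""
    diff = current.astype(np.float64) - home.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def rms_homing_step(robot, home, headings, threshold):
    """One iteration of the greedy search: try each heading, keep the best.

    Returns True when the RMS falls below the threshold, i.e. when the
    home position is assumed to be reached.
    """
    scores = []
    for heading in headings:
        robot.rotate_to(heading)              # physical rotation of the rover
        scores.append(rms_difference(robot.capture(), home))
    best = int(np.argmin(scores))
    if scores[best] < threshold:
        return True
    robot.rotate_to(headings[best])           # face the most promising direction
    robot.move_forward()                      # take a step towards it
    return False
```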


Fig. 5. An example of the trajectory followed by the robot controlled through the RMS algorithm (a). The system follows the direction that decreases the RMS (b).

Fig. 5.a shows an experiment in which the agent is placed to the left of the home position and rotated by 180° with respect to the desired direction. A spherical mirror has been used for the omnidirectional vision system. The RMS difference between the current image and the home image produces a function which, in the cluttered environment used, is close to monotonic (see Fig. 5.b). Moreover, the global RMS minimum corresponds to the home position, so the robot can return to the charging station.

B. RMS and rotation algorithm

In order to search for the home position, the RMS-based algorithm performs many physical rotations, which degrade the performance in terms of time and introduce odometry errors. To solve these problems, image rotations have been performed in software on the Nios II processor. Formally, a rotation operator around a pixel p_o can be defined as a geometric transformation mapping the generic input pixel p_i onto the rotated pixel p_r:

x_r = (x_i - x_o)\cos\theta - (y_i - y_o)\sin\theta + x_o
y_r = (x_i - x_o)\sin\theta + (y_i - y_o)\cos\theta + y_o    (2)

where (x_r, y_r) are the coordinates of p_r, (x_i, y_i) are the coordinates of p_i, (x_o, y_o) are the coordinates of p_o and θ is the rotation angle. In order to reduce the computation time, rotations by fixed pre-defined angles are performed through look-up tables loaded during the start-up phase. The videos available on the web [14] show that the algorithm is suitable for finding the home position.

C. XOR and rotation algorithm

The time needed to compute the RMS value on the Nios II is not compatible with real-time applications. To improve the timing performance, the parallel processing capabilities of the Eye-RIS have been exploited. If binary images are used, simple operations like XOR can be used to compute the moving direction that leads the robot towards the home. The XOR operator is a basic function of the CNN-based chip mounted on the Eye-RIS board and can be performed on a time scale of milliseconds.
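The sketch below illustrates the two ingredients of the algorithms in Sections III-B and III-C: a rotation look-up table implementing the inverse mapping of Eq. (2), and a pixel-wise comparison of binary images that is maximal at the home position, consistent with Fig. 6.b. On the Eye-RIS the XOR runs in parallel on the FPP; here it is emulated with NumPy purely for illustration, and all function names are our own assumptions.

```python
import numpy as np

def build_rotation_lut(shape, theta, center):
    """Precompute the source pixel of Eq. (2) for every destination pixel.

    The returned (h, w) array holds flat source indices, so rotating an
    image reduces to a single gather operation (see rotate_with_lut).
    """
    h, w = shape
    yo, xo = center
    yi, xi = np.mgrid[0:h, 0:w].astype(np.float64)
    # Inverse mapping: rotate each destination pixel by -theta to find
    # where its value comes from in the source image.
    xs = (xi - xo) * np.cos(-theta) - (yi - yo) * np.sin(-theta) + xo
    ys = (xi - xo) * np.sin(-theta) + (yi - yo) * np.cos(-theta) + yo
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)
    return ys * w + xs

def rotate_with_lut(image, lut):
    """Rotate an image with a precomputed look-up table (nearest neighbour)."""
    return image.ravel()[lut]

def xor_similarity(current_bin, home_bin):
    """Fraction of matching pixels between two binary images.

    The measure grows as the current view approaches the home view,
    so homing seeks its maximum (cf. Fig. 6.b).
    """
    return 1.0 - np.mean(np.logical_xor(current_bin, home_bin))
```

Tables for the fixed set of angles can then be built once at start-up, e.g. `luts = {a: build_rotation_lut(img.shape, np.radians(a), centre) for a in (90, 180, 270)}`; the specific angle set is an assumption for illustration.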

TABLE I
COMPARISON AMONG THE PRESENTED ALGORITHMS (AVERAGE OVER TEN TESTS)

                                  RMS      RMS and Rotation   XOR and Rotation   RunDown
Step                              20       10.5               25.5               23.56
Crossed distance (m)              3        1.6                2.4                4.5
Total rotations                   1165°    700°               1170°              580°
Precision (m)                     0.0625   0.2625             0.15               0.252
Computation time per step (s)     3.5      2.3                0.3                -

Fig. 6. The potential maps of the same environment obtained with the RMS (a) and XOR (b) parameters.

Fig. 6 shows the comparison between the RMS and XOR functions. Both visual homing algorithms create a potential field over the environment, in which the home position is the global minimum (RMS case) or maximum (XOR case). The images used to create the maps have been acquired on a grid with a step of 30 cm, with the robot always oriented along the x axis. Fig. 7.a shows the results of an experiment with the XOR algorithm. The robot follows the direction with increasing XOR value (Fig. 7.b).

IV. REMARKS

In this section we compare the proposed algorithms with another RMS-based visual homing algorithm, called RunDown [10]. Table I shows the average results of ten tests. All the presented algorithms have a performance comparable to the RunDown method, and they are suitable for visual homing applications on robots. The advantage is that the on-chip XOR algorithm requires a much reduced computation time, thanks to the parallel processing capabilities of the Eye-RIS.

Fig. 7. An example of the trajectory of the robot during an experiment with the XOR and image rotation algorithm.

V. CONCLUSIONS

In this paper three visual homing algorithms have been presented. We tested the algorithms using a roving robot equipped with an omnidirectional vision system, consisting of a spherical or conic mirror, and a parallel visual processor. This setup permits the acquisition of a 360° view of the environment in a single step, without moving the camera or the robot. Moreover, rotation operators permit the construction of different views of the acquired image directly on the Nios II. The performance of the proposed visual homing algorithms has been reported and compared. The best performance has been obtained with the XOR-based method, which permits the robot to reach the home position in a time compatible with real-time applications. Further developments will include the introduction of other low-level sensors to improve the obstacle avoidance capabilities and to detect when the robot is near the recharging station, in order to activate precision docking procedures. These sensors can also substitute for the stopping threshold, which can be affected by changes in the illumination of the environment.

ACKNOWLEDGMENT

The authors acknowledge the support of the European Commission under the project FP6-2003-IST2-004690 SPARK "Spatial-temporal patterns for action-oriented perception in roving robots".

REFERENCES

[1] Franz M.O., Mallot H.A., "Biomimetic robot navigation", Robotics and Autonomous Systems 30:133-153, 2000.
[2] Möller R., Vardy A., "Local visual homing by matched-filter descent in image distances".
[3] Cartwright B., Collett T., "Landmark learning in bees", Journal of Comparative Physiology 151:521-543, 1983.
[4] Vardy A., Möller R., "Biologically plausible visual homing methods based on optical flow techniques", Connect Sci 17:47-89, 2005.
[5] Franz M.O., Schölkopf B., Mallot H.A., Bülthoff H.H., "Learning view graphs for robot navigation", Auton Robots 5:111-125, 1998.
[6] Franz M.O., Schölkopf B., Mallot H.A., Bülthoff H.H., "Where did I take that snapshot? Scene-based homing by image matching", Biol Cybern 79:191-202, 1998.
[7] Lambrinos D., Möller R., Labhart T., Pfeifer R., Wehner R., "A mobile robot employing insect strategies for navigation", Robot Auton Syst, special issue: Biomimetic Robots 30:39-64, 2000.
[8] Menegatti E., Maeda T., Ishiguro H., "Image-based memory for robot navigation using properties of omnidirectional images", Robot Auton Syst 47:251-267, 2004.
[9] Stürzl W., Mallot H.A., "Efficient visual homing based on Fourier transformed panoramic images", Robot Auton Syst 54:300-313, 2006.
[10] Zeil J., Hoffmann M.I., Chahl J.S., "Catchment areas of panoramic snapshots in outdoor scenes", J Opt Soc Am A 20:450-469, 2003.
[11] "AnaFocus home page" [Online]. Available: http://www.anafocus.com
[12] "Altera home page" [Online]. Available: http://www.altera.com
[13] "Rover vendor home page" [Online]. Available: http://www.robotitaly.com
[14] "SPARK home page" [Online]. Available: http://www.spark.diees.unict.it
