AUTONOMOUS ROVERBOT USING SCENE ANALYSIS

B.E. (CIS) PROJECT REPORT by

Faisal Nasim
Usman Ghani
Syed Raza Abbas

Department of Computer and Information Systems Engineering N.E.D. University of Engineering and Technology, Karachi-75270

AUTONOMOUS ROVERBOT USING SCENE ANALYSIS

B.E. (CIS) PROJECT REPORT

Project Group

Muhammad Usman Ghani     CIS-022
Faisal Nasim             CIS-037
Syed Raza Abbas          CIS-105

BATCH: 2002-2003

Project Adviser Fahad Abdel Kader (Internal Adviser)

December 2006

Department of Computer and Information Systems Engineering N.E.D. University of Engg. & Technology Karachi-75270


ABSTRACT

Autonomous Roverbot using Scene Analysis covers all the major aspects of Computer Engineering, from software to hardware and from signaling to control. The idea behind the project is to develop an autonomous vehicle controlled through a remote station. The vehicle is fitted with a wireless video camera which transmits live video to a base-station, where it is processed in MATLAB. The base-station then sends control signals to the vehicle to navigate it through its course. Such a robot could be used for surveillance, for scanning pipes (under manual or limited autonomous control), and for tracking moving objects.


ACKNOWLEDGEMENTS

We'd like to make special acknowledgement of two people who helped us in project selection and provided guidance through various parts of the project:

Our internal adviser Mr. Fahad Abdel Kader and Prof. Dr. Uvais Qidwai.


TABLE OF CONTENTS

ABSTRACT ........ 1
ACKNOWLEDGEMENTS ........ 2
TABLE OF CONTENTS ........ 3
1. INTRODUCTION ........ 7
   1.1 Scope of the Project ........ 7
   1.2 Application Areas ........ 7
2. AUTONOMOUS ROVERBOT USING SCENE ANALYSIS ........ 8
   2.1 Application Areas ........ 8
   2.2 Research Analysis ........ 8
      2.2.1 Software Tool Evaluation ........ 8
         I. RobotFlow ........ 8
         II. MATLAB ........ 9
      2.2.2 Hardware Evaluation ........ 9
         I. Lego Mindstorm with IR ........ 9
         II. More Bot Evaluations ........ 10
   2.3 Design Approaches ........ 10
      2.3.1 Design Approach # 1 ........ 10
      2.3.2 Design Approach # 2 ........ 11
   2.4 Modified Ideas ........ 11
      2.4.1 Finalized Hardware ........ 11
      2.4.2 Finalized Software ........ 12
      2.4.3 Robot Characteristics ........ 12
   2.5 Project Design Strategy ........ 12
      2.5.1 Main Objective ........ 12
      2.5.2 Main Project Modules ........ 13
      2.5.3 Hardware ........ 13
      2.5.4 Software Design ........ 13
3. COMPUTER VISION ........ 14
   3.1 Related Fields ........ 15
   3.2 Examples of Applications of Computer Vision ........ 16
   3.3 Theory of Motion ........ 18
      3.3.1 Ego-motion ........ 18
         I. Problems ........ 19
         II. Solutions ........ 19
         III. Scope of Ego-motion in our project ........ 20
   3.4 Optical Flow ........ 20
      3.4.1 Horn Schunck Method ........ 21
      3.4.2 Lucas Kanade Method ........ 21
   3.5 Image Processing ........ 21
      3.5.1 Solution Methods ........ 21
      3.5.2 Commonly Used Signal Processing Techniques ........ 22
      3.5.3 Feature Extraction ........ 23
4. INTRODUCTION TO ROBOTICS ........ 31
   4.1 Robot ........ 31
   4.2 Robotics ........ 31
   4.3 Robot Navigation ........ 33
      4.3.1 Navigation ........ 33
      4.3.2 Coordinate Systems ........ 34
   4.4 H Bridge ........ 37
   4.5 Servo Motors ........ 39
   4.6 Robotics Future ........ 41
   4.7 Some Robots ........ 43
5. THE BOT ........ 44
   5.1 General Design and Structure ........ 44
   5.2 Hardware Specifications ........ 45
      5.2.1 Camera specs ........ 45
      5.2.2 Hardware modules used ........ 45
      5.2.3 Integration ........ 46
6. THE INTERFACE ........ 48
   6.1 The Parallel Port ........ 48
      6.1.1 Hardware ........ 49
      6.1.2 Software ........ 50
         I. Parallel Port on Linux ........ 50
         II. Parallel Port on Windows XP ........ 50
   6.2 Interface Design ........ 50
      6.2.1 Interface Hardware ........ 51
         I. Port Bit Combinations Used ........ 54
      6.2.2 Interface Software ........ 55
7. SCENE ANALYSIS ........ 57
   7.1 Design Summary ........ 57
   7.2 Object Tracking using Lucas Kanade ........ 58
      7.2.1 Overview ........ 58
      7.2.2 Top Level Model ........ 59
      7.2.3 Thresholding and Region Filtering ........ 60
      7.2.4 Region Filtering ........ 61
      7.2.5 Centroid Calculation ........ 62
      7.2.6 Display Results ........ 63
      7.2.7 Display Bounding Boxes ........ 64
      7.2.8 Line Vector Calculation ........ 65
      7.2.9 Navigation Logic ........ 66
      7.2.10 Object Tracker Embedded Code ........ 66
CONCLUSION ........ 69
Appendix A: Algorithms ........ 71
   A.1 Lucas Kanade Method ........ 71
   A.2 Horn Schunck Method ........ 74
Appendix B: More Bot Evaluations ........ 77
Appendix C: MATLAB Hardware Routines ........ 78
REFERENCES ........ 87

CHAPTER 1

INTRODUCTION

1.1 Scope of the Project

Autonomous intelligent vehicles are one of the most widely researched areas in computer science and engineering. These vehicles, capable of operating by themselves or at least with minimal human help, find applications in many areas where sending humans is too dangerous or where human cognition skills are only minimally utilized. This project is an attempt to explore this active field by designing and implementing a small, simple robot that guides itself using visual information from its environment.

1.2 Application Areas

• UAV: modern, powerful Unmanned Aerial Vehicles are being built for surveying a landscape.
• Precision Target Tracking: to track and follow a moving object through an area or terrain.
• Self-navigating Vehicle: a vehicle that avoids any obstacle.
• Space Missions: unmanned rovers have been sent to Mars for planetary survey.

CHAPTER 2

AUTONOMOUS ROVERBOT USING SCENE ANALYSIS

2.1 Application Areas

Application areas for the Roverbot include:

• Space exploration
• Domestic usage
• Helping disabled people
• Performing repetitive tasks in industry
• Performing dangerous tasks

2.2 Research Analysis

2.2.1 Software Tool Evaluation

I. RobotFlow

RobotFlow [1] is a complete framework for robotics and includes video processing blocksets as well as AI and fuzzy logic frameworks. Its major limitation is that it provides only a limited set of vision algorithms, essentially colour tracking and training. A major set of algorithms would have to be designed from scratch and somehow plugged into the framework; this work could very well encompass several projects in itself. Another option is to connect the framework to MATLAB and do the image processing there, but that defeats the purpose, since RobotFlow was being considered precisely to replace the requirement of MATLAB.

II. MATLAB

MATLAB is a computing environment designed by MathWorks Inc. for engineers and scientists working in a variety of different fields. It has features for numerical computing, image analysis, signal processing, artificial intelligence, and many other development libraries.

III. Simulink

Simulink is a component of MATLAB. It is a software package for modeling, simulating, and analyzing dynamic systems. It supports linear and nonlinear systems, modeled in continuous time, sampled time, or a hybrid of the two. Systems can also be multirate, i.e., have different parts that are sampled or updated at different rates.

IV. Video and Image Processing Blockset

The Video and Image Processing Blockset is a tool for the rapid design, prototyping, graphical simulation, and efficient code generation of video processing algorithms. Its blocks can import streaming video into the Simulink environment and perform two-dimensional filtering, geometric and frequency transforms, block processing, motion estimation, edge detection and other signal processing algorithms.

2.2.2 Hardware Evaluation

I. Lego Mindstorm with IR

Infrared is not well suited to our application; however, it serves well to demonstrate the system features. Since Lego Mindstorm consists of discrete components, which are much more reliable than self-made circuits, this option is under consideration.

II. More Bot Evaluations

See Appendix B.

2.3 Design Approaches

Two design approaches have been devised by the team, depending on how much functionality to incorporate into the Roverbot. The two approaches are analogous to a thin-client/fat-client classification.

2.3.1 Design Approach # 1

Put an entire computer inside the remote vehicle, for example a laptop or a Pentium motherboard running a complete operating system. This approach will be necessary in order to start in any case, as preliminary testing will be done in MATLAB and a visual environment for rapid development.

Pros

• Can use wireless LAN to transfer data instead of a customized solution.
• The robot can employ its own guidance mechanism if the base-station link terminates.

Cons

• Bulky vehicle.

Possible Solutions

• Use a laptop.
• Use a micro-ATX board with a frame-grabber card or USB.
• Run Linux/Windows.

2.3.2 Design Approach # 2

The vehicle is fitted with only sensors and cameras; all the processing is done at the base-station. This kind of vehicle will have its own applications.

Pros

• Small vehicle.
• The product can be showcased as: "Just install this card on your robot and bring it to life!", the card being our customized solution for sending video to the base-station.

Cons

• If the base-station connection breaks, the robot will not know what to do.
• If live video is to be transmitted, we would need to find a suitable microcontroller, build an interface to the video camera, and send the video over wire or air.

Possible Solutions

• Use some kind of PIC board with an embedded video interface and optionally a Bluetooth/WiFi interface as well.
• Use a home wireless spy-camera solution (comes with a receiver and free software).

2.4 Modified Ideas

2.4.1 Finalized Hardware

• Use a miniature wireless spy camera operating on RF.
• Process at most 30 fps.
• Use a ready-made toy car with its remote control modified and interfaced to the PC.
• No video processing on the robot itself.

2.4.2 Finalized Software

• MATLAB R2006a using the Image Acquisition and Image Processing toolboxes.
• Simulink Video and Image Processing Blockset, for initial design and fast code generation for the final product.

2.4.3 Robot Characteristics

• The robot may be given two locations, source A and destination B, and will move to the destination while avoiding obstructions (simulates delivery).
• The robot may be locked onto a particular target and will follow that target while avoiding obstacles (simulates missile behaviour).
• The robot will navigate freely on its own while avoiding obstructions (simulates surveillance).

2.5 Project Design Strategy

2.5.1 Main Objective

To design a mobile robotic vehicle controlled wirelessly through a base-station. The lightweight vehicle will transmit images back to the base-station and receive directions. The directions can be made specific to a task that it is currently performing.

2.5.2 Main Project Modules

• MATLAB for image processing
• Hardware design: vehicle and wireless solution
• Software design

2.5.3 Hardware

The hardware consists of a regular toy car fitted with a remote camera, with its remote control interfaced to the computer. An intermediate software component built in MATLAB links the software algorithm in Simulink to the actual hardware through an interface circuit.

2.5.4 Software Design

Of the two aforementioned design approaches, it was decided to go with the second, since it allows for a lighter vehicle with a focus on maneuverability and portability. The approach also suited an academic project like ours because of its cost-effectiveness. Moving all the processing to the base-station enables simpler software and communication module design and interfacing, allows the use of more complex software tools and algorithms, and speeds up processing.

CHAPTER 3

COMPUTER VISION

Computer vision is the study and application of methods which allow computers to "understand" image content, or the content of multidimensional data in general. The term "understand" means here that specific information is being extracted from the image data for a specific purpose: either for presenting it to a human operator (e.g., if cancerous cells have been detected in a microscopy image), or for controlling some process (e.g., an industrial robot or an autonomous vehicle). The image data that is fed into a computer vision system is often a digital gray-scale or colour image, but can also be in the form of two or more such images (e.g., from a stereo camera pair), a video sequence, or a 3D volume (e.g., from a tomography device). In most practical computer vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Computer vision can also be described as the complement (but not necessarily the opposite) of biological vision. In biological vision and visual perception, the real vision systems of humans and various animals are studied, resulting in models of how these systems are implemented in terms of neural processing at various levels. Computer vision, on the other hand, studies and describes technical vision systems, which are implemented in software or hardware, in computers or in digital signal processors. There is some interdisciplinary work between biological and computer vision but, in general, the field of computer vision studies the processing of visual data as a purely technical problem. The main reasons that computers are widely used for vision systems are:

• They are versatile and fully open to experimentation.
• They are precise and efficient; ambiguities are not tolerated unless specifically programmed for.
• They make it possible to quantify precisely the processing performed and the amount of digital memory used for a given task.

Computer vision and other research areas:

• Computer vision research provides new processes and tasks to psychology, neurology, linguistics and philosophy.
• Computers can be used to reproduce the processes of biological vision systems in order to understand them, and to implement new theories and experimental processes in order to achieve similar or other vision goals.

3.1 Related Fields

Computer vision, image processing, image analysis, robot vision and machine vision are closely related fields. If you look inside textbooks which have any of these names in the title, there is a significant overlap in terms of what techniques and applications they cover. This implies that the basic techniques used and developed in these fields are more or less identical, which can be interpreted as meaning there is really only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented. The following characterizations appear relevant but should not be taken as universally accepted. Image processing and image analysis tend to focus on 2D images: how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local

operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither produces nor requires assumptions about what a specific image is an image of. Computer vision tends to focus on the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image. Machine vision tends to focus on applications, mainly in industry, e.g., vision-based autonomous robots and systems for vision-based inspection or measurement. This implies that image sensor technologies and control theory are often integrated with the processing of image data to control a robot, and that real-time processing is emphasized by means of efficient implementations in hardware and software. There is also a field called imaging which primarily focuses on the process of producing images, but sometimes also deals with the processing and analysis of images. For example, medical imaging contains a lot of work on the analysis of image data in medical applications. Finally, pattern recognition is a field which uses various methods to extract information from signals in general, mainly based on statistical approaches. A significant part of this field is devoted to applying these methods to image data. A consequence of this state of affairs is that you can be working in a lab related to one of these fields, apply methods from a second field to solve a problem in a third field and present the result at a conference related to a fourth field!

3.2 Examples of Applications of Computer Vision

Another way to describe computer vision is in terms of application areas. One of the most prominent application fields is medical computer vision or medical image

processing. This area is characterized by the extraction of information from image data for the purpose of making a medical diagnosis of a patient. Typically, image data is in the form of microscopy images, X-ray images, angiography images, ultrasonic images, and tomography images. An example of information which can be extracted from such image data is the detection of tumors, arteriosclerosis or other malign changes. It can also be measurements of organ dimensions, blood flow, etc. This application area also supports medical research by providing new information, e.g., about the structure of the brain, or about the quality of medical treatments. A second application area of computer vision is in industry. Here, information is extracted for the purpose of supporting a manufacturing process. One example is quality control, where details or final products are automatically inspected in order to find defects. Another example is measurement of the position and orientation of details to be picked up by a robot arm. See the article on machine vision for more details on this area. Military applications are probably one of the largest areas for computer vision, even though only a small part of this work is open to the public. The obvious examples are detection of enemy soldiers or vehicles and guidance of missiles to a designated target. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene which can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability.

One of the newer application areas is autonomous vehicles, which include submersibles, land-based vehicles (small robots with wheels, cars or trucks), and aerial vehicles. An unmanned aerial vehicle is often denoted UAV. The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer vision based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, i.e. for knowing where the vehicle is, for producing a map of its environment (SLAM), and for detecting obstacles. It can also be used for detecting certain task-specific events, e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for autonomous driving of cars, but this technology has still not reached a level where it can be put on the market. There are ample examples of military autonomous vehicles, ranging from advanced missiles to UAVs for recon missions or missile guidance. Space exploration is already carried out with autonomous vehicles using computer vision, e.g., NASA's Mars Exploration Rover. Other application areas include the creation of visual effects for cinema and broadcast, e.g., camera tracking or match moving, and surveillance. For more information, see [12].

3.3 Theory of Motion

3.3.1 Ego-motion

Ego-motion means self-motion or the motion of the observer. In computer vision it refers to the effects created by the motion of the camera itself. The goal of ego-motion computation is to describe the motion of an object with respect to an external reference

system, by analyzing data acquired by sensors on board the object, i.e. the camera itself.

Example

• Given two images of a scene, determine the 3D rigid motion of the camera between the two views.

I. Problems

Ego-motion leads to problems in motion-segmentation-based scene analysis. When a scene (image) is segmented on the basis of moving objects, the real motion (velocity, orientation, etc.) of objects might appear different from its factual value:

a. Wrong determination of velocity/orientation.
b. Static objects might be perceived as moving objects.

II. Solutions

Many solutions have been proposed to compensate for ego-motion, most of them targeted at particular environments or situations, such as urban traffic. The proposed solutions fall into a few broad categories:

a. Using knowledge of the environment to separate or remove background features.
b. Using knowledge about the motion of the observer (camera) itself, e.g. the velocity of a car if the camera is mounted on a car.
c. Probabilistic methods to estimate background features.
d. Techniques from stereoscopic vision to perceive depth and separate the background.
e. Some combination of the aforementioned techniques.

III. Scope of Ego-motion in our project

Ego-motion poses the greatest problem in Roverbot navigation, as it is almost impossible to avoid obstacles and track targets without distinguishing targets from obstacles. The problem arises from the movement of the camera mounted on the Roverbot. Unless it is solved, it will create major difficulties in the implementation of autonomous navigation.

3.4 Optical Flow

Optical flow is a concept for estimating the motion of objects within a visual representation. Typically the motion is represented as vectors originating or terminating at pixels in a digital image sequence. Optical flow is useful in pattern recognition, computer vision, and other image processing applications. It is closely related to motion estimation and motion compensation. Often the term optical flow is used to describe a dense motion field with vectors at each pixel, as opposed to motion estimation or compensation, which uses vectors for blocks of pixels, as in video compression methods such as MPEG. Optical flow has also been considered for collision avoidance and altitude acquisition systems for micro air vehicles (MAVs).

Figure 3.1: Example images for Optical Flow


Figure 3.2: Optical Flow Vectors calculated from images in Figure 3.1

3.4.1 Horn Schunck Method

As described in Appendix A.2, the Horn-Schunck method of estimating optical flow is a global method which introduces a global smoothness constraint to solve the aperture problem.

3.4.2 Lucas Kanade Method

As described in Appendix A.1, the Lucas-Kanade method of estimating optical flow has a sufficient degree of resistance to the aperture problem. This is the method we have decided to use in our application.
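To make the idea concrete, the following MATLAB sketch shows the basic local least-squares step of the Lucas-Kanade method for two consecutive grayscale frames. It is only an illustrative outline of the technique summarised in Appendix A.1, not the Simulink implementation used in the final system; the gradient kernels, the window half-width and the pixel of interest are assumptions.

    % Minimal Lucas-Kanade flow estimate over a single pixel neighbourhood.
    % I1, I2: consecutive grayscale frames as double matrices (assumed given).
    Ix = conv2(I1, [-1 1; -1 1] / 4, 'same');   % approximate spatial gradient in x
    Iy = conv2(I1, [-1 -1; 1 1] / 4, 'same');   % approximate spatial gradient in y
    It = I2 - I1;                               % temporal gradient
    w  = 7;                                     % half-width of the local window (assumed)
    r  = 100;  c = 120;                         % pixel of interest (example values)
    rows = r-w:r+w;   cols = c-w:c+w;
    A = [reshape(Ix(rows, cols), [], 1), reshape(Iy(rows, cols), [], 1)];
    b = -reshape(It(rows, cols), [], 1);
    v = (A' * A) \ (A' * b);                    % least-squares flow vector [vx; vy]

In practice the same step is repeated over a grid of windows to produce the dense flow field shown in Figure 3.2.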

3.5 Image Processing

In the broadest sense, image processing is any form of information processing for which both the input and output are images, such as photographs or frames of video. Most image processing techniques involve treating the image as a two-dimensional signal and applying standard signal processing techniques to it.

3.5.1 Solution Methods

A few decades ago, image processing was done largely in the analog domain, chiefly by optical devices. These optical methods are still essential to applications such as holography because they are inherently parallel; however, due to the significant increase in computer speed, these techniques are increasingly being replaced by digital image processing methods. Digital image processing techniques are generally more versatile, reliable, and accurate; they have the additional benefit of being easier to implement than their analog counterparts. Specialized hardware is still used for digital image processing: computer architectures based on pipelining have been the most commercially successful, and many massively parallel architectures have also been developed for the purpose. Today, hardware solutions are commonly used in video processing systems, but most commercial image processing tasks are done by software running on conventional personal computers.

3.5.2 Commonly Used Signal Processing Techniques

Most of the signal processing concepts that apply to one-dimensional signals also extend to the two-dimensional image signal. Some of these one-dimensional concepts become significantly more complicated in two-dimensional processing. Image processing also brings some new concepts, such as connectivity and rotational invariance, that are meaningful only for two-dimensional signals. The fast Fourier transform is often used for image processing operations because it reduces the processing time needed for many operations such as filtering (a small sketch follows the lists below).

One-Dimensional Techniques

• Resolution
• Dynamic range
• Bandwidth
• Filtering
• Differential operators
• Edge detection
• Domain modulation
• Noise reduction

Two-Dimensional Techniques

• Connectivity
• Rotational invariance

Applications

• Photography and printing
• Satellite image processing
• Medical image processing
• Face detection, feature detection, face identification
• Microscope image processing
• Car barrier detection
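As an aside on the frequency-domain remark in Section 3.5.2, the sketch below applies a simple low-pass filter to a grayscale image via the 2-D FFT. It is a generic illustration using standard MATLAB functions; the test image and the cut-off radius are assumptions, not values taken from our system.

    % Frequency-domain low-pass filtering with the 2-D FFT.
    I = im2double(imread('cameraman.tif'));   % any grayscale image (toolbox example)
    F = fftshift(fft2(I));                    % spectrum with the DC term centred
    [rows, cols] = size(I);
    [u, v] = meshgrid(1:cols, 1:rows);
    d = sqrt((u - cols/2).^2 + (v - rows/2).^2);
    cutoff = 30;                              % cut-off radius in pixels (assumed)
    F(d > cutoff) = 0;                        % discard high-frequency components
    Ilow = real(ifft2(ifftshift(F)));         % back to the spatial domain
    imshow(Ilow);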

3.5.3 Feature Extraction

Feature extraction is the process of generating a set of descriptors or characteristic attributes from an image, such as edges, curves, etc.

I. Low-level feature extraction

Overview

Low-level feature detection refers to those basic features that can be extracted automatically from an image without any shape information. As such, thresholding is actually a form of low-level feature extraction performed as a point operation. Other examples include edge detection and curvature estimation.
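Since thresholding is the simplest point operation mentioned above, a minimal MATLAB sketch is given here. It assumes the Image Processing Toolbox functions graythresh and im2bw; the test image is just a toolbox example, and the threshold could equally be fixed by hand.

    % Global thresholding as a point operation.
    I = imread('coins.png');        % any grayscale image (toolbox example)
    level = graythresh(I);          % Otsu's method picks a threshold in [0, 1]
    BW = im2bw(I, level);           % pixels above the threshold become 1
    imshow(BW);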

First-order edge detection operators

First-order edge detection is akin to first-order differentiation. In image processing, differentiation is implemented using finite differences.

Basic operators

Basic edge detection can be implemented by differencing adjacent pixels. Differencing horizontally adjacent pixels detects vertical edges, so the differencing operator is called a horizontal edge detector, perhaps a misnomer. Similarly, horizontal edges can be detected using a vertical edge detector. The vertical and horizontal edge detectors can be combined to form a general first-order edge detector. A sample first-order edge detector is:

    [  2  -1 ]
    [ -1   0 ]

    Mat. 3.1

Figure 3.3 (a): Example Image for Edge Detection
Figure 3.3 (b): Image showing edges of 3.3 (a)

Prewitt edge detection operator

The Prewitt edge detector has the following masks:

    Mx = [ 1  0  -1 ]        My = [  1   1   1 ]
         [ 1  0  -1 ]             [  0   0   0 ]
         [ 1  0  -1 ]             [ -1  -1  -1 ]

    Mat. 3.2                      Mat. 3.3

Figure 3.4 (a): Example Image for Prewitt Edge Detection
Figure 3.4 (b): Image showing edges of 3.4 (a)

Sobel edge detection operator

The Sobel edge detector has the following masks:

    Mx = [ 1  0  -1 ]        My = [  1   2   1 ]
         [ 2  0  -2 ]             [  0   0   0 ]
         [ 1  0  -1 ]             [ -1  -2  -1 ]

    Mat. 3.4                      Mat. 3.5

Figure 3.5 (a): Example Image for Sobel Edge Detection
Figure 3.5 (b): Image showing edges of 3.5 (a)
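To show how these first-order masks are applied in practice, the MATLAB sketch below convolves an image with the Sobel masks of Mat. 3.4 and 3.5 and combines the two responses into an edge map. It is a generic illustration; the test image and the threshold value are assumptions.

    % First-order edge detection with the Sobel masks (Mat. 3.4 and 3.5).
    I  = im2double(imread('cameraman.tif'));   % any grayscale image (toolbox example)
    Mx = [1 0 -1; 2 0 -2; 1 0 -1];             % horizontal edge detector
    My = [1 2 1; 0 0 0; -1 -2 -1];             % vertical edge detector
    Gx = conv2(I, Mx, 'same');
    Gy = conv2(I, My, 'same');
    G  = sqrt(Gx.^2 + Gy.^2);                  % combined edge magnitude
    edges = G > 0.3 * max(G(:));               % simple threshold (assumed value)
    imshow(edges);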

Canny edge detector

One of the most popular edge detectors of recent years, developed by Canny, uses the outputs of two Gaussian derivative masks. The two outputs are combined by squaring and adding. The peaks of ridges are then found. Ridges that contain a peak over a given threshold are retained as long as they stay above another, lower threshold.

Figure 3.6 (a): Example Image for Canny Edge Detector
Figure 3.6 (b): Image showing edges of 3.6 (a)

Second-order edge detection operators

Second-order edge detection is a form of second-order differentiation.

Laplacian operator

The Laplacian operator is a template which implements second-order differencing. The second-order differential can be approximated by the difference between two adjacent first-order differences:

    f''(x) = f'(x) - f'(x+1)                    Eq. 3.1

which leads to:

    f''(x) = -f(x) + 2f(x+1) - f(x+2)           Eq. 3.2

This gives a horizontal second-order template:

    [ -1  2  -1 ]                               Mat. 3.6

or a combined 2D template:

    [  0  -1   0 ]            [ -1  -1  -1 ]
    [ -1   4  -1 ]     or     [ -1   8  -1 ]
    [  0  -1   0 ]            [ -1  -1  -1 ]

    Mat. 3.7                  Mat. 3.8

Figure 3.7 (a): Example Image for Laplacian Operator
Figure 3.7 (b): Image showing edges of 3.7 (a)

Marr-Hildreth operator

The Marr-Hildreth operator is a combination of the Gaussian smoothing filter (mask) and the Laplacian filter (mask). Combining them gives a LoG (Laplacian of Gaussian) operator, approximated by the DoG (Difference of Gaussians), also known as the mexican-hat operator. A sample mask is:

    [  0   0  -1   0   0 ]
    [  0  -1  -2  -1   0 ]
    [ -1  -2  16  -2  -1 ]
    [  0  -1  -2  -1   0 ]
    [  0   0  -1   0   0 ]

    Mat. 3.9

Figure 3.8 (a): Example Image for Marr-Hildreth Operator
Figure 3.8 (b): Image showing edges of 3.8 (a)
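The sketch below applies a Laplacian-of-Gaussian filter in MATLAB and marks its zero crossings, which is how the Marr-Hildreth detector is normally used. It assumes the Image Processing Toolbox functions fspecial, imfilter and edge; the kernel size and sigma are arbitrary example values.

    % Marr-Hildreth style edges via a Laplacian-of-Gaussian (LoG) filter.
    I  = im2double(imread('cameraman.tif'));   % any grayscale image (toolbox example)
    h  = fspecial('log', 9, 1.4);              % 9x9 LoG mask, sigma = 1.4 (assumed values)
    J  = imfilter(I, h, 'replicate');          % response whose zero crossings mark edges
    BW = edge(I, 'log', [], 1.4);              % built-in detector finds those zero crossings
    imshow(BW);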

II. High-level feature extraction

Template Matching

Template matching is conceptually a simple process. We need to match a template to an image, where the template is a sub-image that contains the shape we are trying to find. Accordingly, we center the template on an image point and count up how many points in the template match those in the image. The procedure is repeated for the entire image, and the point which led to the best match, the maximum count, is deemed to be the point where the shape (given by the template) lies within the image. The methods commonly used to match templates are listed below; a small matching sketch follows the list.

• Sum of squared differences (minimization)
• Normalized sum of squared differences (minimization)
• Cross correlation (maximization)
• Normalized cross correlation (maximization)
• Correlation coefficient (maximization)
• Normalized correlation coefficient (maximization)
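The following MATLAB sketch locates a template in an image using normalized cross-correlation, one of the maximization criteria listed above. It assumes the Image Processing Toolbox function normxcorr2; the test image and the template coordinates are arbitrary example values.

    % Template matching by normalized cross-correlation.
    I = im2double(imread('cameraman.tif'));    % search image (toolbox example)
    T = I(60:100, 80:130);                     % template cut from the image (assumed region)
    C = normxcorr2(T, I);                      % correlation surface (larger than I)
    [maxC, idx] = max(C(:));                   % best match = maximum correlation
    [peakR, peakC] = ind2sub(size(C), idx);
    topLeftRow = peakR - size(T, 1) + 1;       % template position inside I
    topLeftCol = peakC - size(T, 2) + 1;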

Hough Transform

The Hough Transform [3] is a technique that locates shapes in images. In particular, it has been used to extract lines, circles and ellipses (or conic sections). In the case of lines, its mathematical definition is equivalent to the Radon transform. The HT implementation defines a mapping from the image points into an accumulator space (Hough space). The mapping is achieved in a computationally efficient manner, based on the function that describes the target shape. This mapping requires far fewer computational resources than template matching, although it still has significant storage and computational requirements. Though the GHT (Generalized Hough Transform) can be used to extract any shape from an image, specialised versions are used for lines, circles, ellipses and other commonly encountered shapes.

Figure 3.9 (a): Example Image for Hough Transform
Figure 3.9 (b): Image showing Canny Edges of 3.9 (a)
Figure 3.9 (c): The Hough Transform of 3.9 (a)
Figure 3.9 (d): Most prominent recognized object of 3.9 (a)
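A line-detection pipeline matching the figures above can be assembled in MATLAB as sketched below, assuming the Image Processing Toolbox functions hough, houghpeaks and houghlines; the test image, peak count and gap parameters are example values only.

    % Line extraction with the (standard) Hough transform.
    I  = imread('circuit.tif');                 % example image from the toolbox
    BW = edge(I, 'canny');                      % edge map feeds the accumulator
    [H, theta, rho] = hough(BW);                % accumulator (Hough) space
    P  = houghpeaks(H, 5);                      % five strongest peaks (assumed)
    L  = houghlines(BW, theta, rho, P, 'FillGap', 5, 'MinLength', 20);
    imshow(I); hold on;
    for k = 1:numel(L)                          % overlay the detected segments
        xy = [L(k).point1; L(k).point2];
        plot(xy(:,1), xy(:,2), 'LineWidth', 2);
    end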

Flexible shape extraction (Snakes)

Active contours or snakes are a completely different approach to feature extraction. An active contour is a set of points which aims to enclose a target feature, the feature to be extracted. It is a bit like using a balloon to find a shape: the balloon is placed outside the shape, enclosing it; then, by taking air out of the balloon and making it smaller, the shape is found when the balloon stops shrinking, when it fits the target shape. In this manner, active contours arrange a set of points so as to describe a target feature by enclosing it. An initial contour is placed outside the target feature and is then evolved so as to enclose it. Active contours are actually expressed as an energy minimization process. The target feature is a minimum of a suitably formulated energy functional. This energy functional includes more than just edge information: it includes properties that control the way the contour can stretch and curve. In this way, a snake represents a compromise between its own properties (like its ability to bend and stretch) and image properties (like the edge magnitude). Accordingly, the energy functional is the addition of a function of the contour's internal energy, its constraint energy, and the image energy: these are denoted Eint, Econ, and Eimage, respectively. These are functions of the set of points which make up a snake, v(s), which is the set of x and y coordinates of the points in the snake. The energy functional is the integral of these functions of the snake, where s ∈ [0, 1] is the normalized length around the snake.

CHAPTER 4

INTRODUCTION TO ROBOTICS

4.1 Robot

A robot is an electro-mechanical device that can perform autonomous or pre-programmed tasks. A robot may act under the direct control of a human (e.g. the robotic arm of the space shuttle) or autonomously under the control of a programmed computer. Robots may be used to perform tasks that are too dangerous or difficult for humans to perform directly (e.g. nuclear waste clean-up), or to automate repetitive tasks that can be performed with more precision by a robot than by a human (e.g. automobile production). More specifically, the word robot can describe an intelligent mechanical device in the form of a human. This form of robot (culturally referred to as an android) is common in science fiction stories; however, such robots are yet to become commonplace in reality. South Korea says it will have a robot in every home by 2015-2020.

4.2 Robotics

According to the Wiktionary [4], robotics is the science and technology of robots: their design, manufacture, and application. Robotics requires a working knowledge of electronics, mechanics, and software, and a person working in the field has become known as a roboticist. The word robotics was first used in print by Isaac Asimov, in his science fiction short story "Runaround" (1941). Although the appearance and capabilities of robots vary vastly, all robots share the features of a mechanical, movable structure under some form of control. The structure of a robot is usually mostly mechanical and can be called a kinematic chain (its functionality being akin to the skeleton of a body). The chain is formed of links (its bones), actuators (its muscles) and joints which can allow one or more degrees of

freedom. Most contemporary robots use open serial chains in which each link connects the one before to the one after it. These robots are called serial robots and often resemble the human arm. Some robots, such as the Stewart platform, use closed parallel kinematic chains. Other structures, such as those that mimic the mechanical structure of humans, various animals and insects, are comparatively rare. However, the development and use of such structures in robots is an active area of research (e.g. biomechanics). Robots used as manipulators have an end effector mounted on the last link. This end effector can be anything from a welding device to a mechanical hand used to manipulate the environment. The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases - perception, processing and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). Using strategies from the field of control theory, this information is processed to calculate the appropriate signals to the actuators (motors) which move the mechanical structure. The control of a robot involves various aspects such as path planning, pattern recognition, obstacle avoidance, etc. More complex and adaptable control strategies can be referred to as artificial intelligence. Any task involves the motion of the robot. The study of motion can be divided into kinematics and dynamics. Direct kinematics refers to the calculation of end effector position, orientation, velocity and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance and singularity avoidance. Once all relevant positions, velocities and accelerations have been calculated using kinematics, methods

from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end effector acceleration. This information can be used to improve the control algorithms of a robot. In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones and improve the interaction between these areas. To do this, criteria for "optimal" performance and ways to optimize design, structure and control of robots must be developed and implemented. For more information, see [12].
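As a small worked illustration of direct kinematics, the MATLAB sketch below computes the end-effector position of a simple two-link planar arm from its joint angles. The link lengths and angles are invented example values and have nothing to do with the Roverbot hardware; it is included only to make the kinematics terminology concrete.

    % Direct (forward) kinematics of a two-link planar arm.
    L1 = 0.30;  L2 = 0.25;                 % link lengths in metres (assumed)
    q1 = 30*pi/180;  q2 = 45*pi/180;       % joint angles in radians (assumed)
    x = L1*cos(q1) + L2*cos(q1 + q2);      % end-effector x position
    y = L1*sin(q1) + L2*sin(q1 + q2);      % end-effector y position
    fprintf('End effector at (%.3f, %.3f) m\n', x, y);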

4.3 Robot Navigation

4.3.1 Navigation

Systems that control the navigation of a mobile robot are based on several paradigms. Biologically motivated applications, for example, adopt the assumed behavior of animals. Geometric representations use geometrical elements like rectangles, polygons, and cylinders for the modeling of an environment. Also, systems for mobile robots exist that do not use a representation of their environment. The behavior of the robot is

determined by the sensor data actually taken. Further approaches were introduced which use icons to represent the environment.

4.3.2 Coordinate Systems

Movement in robotics is frequently considered as the local change of a rigid object in relation to another rigid object. Translation is the movement of all mass points of a rigid object with the same speed and direction on parallel tracks. If the mass points run along concentric tracks by revolving around a rigid axis, it is a rotation. Every movement of an object can be described by specifying the rotation and the translation. The Cartesian coordinate system is often used to measure the positions of the objects. The position of a coordinate system XC relative to a reference coordinate system XM is the origin O of XC written in the coordinates of XM. For example, the origin of XM could be the base of a robot and the origin of XC could be a camera mounted on the robot. A vector of angles gives the orientation of a coordinate system XC with respect to a coordinate system XM. By applying these angles to the coordinate system XM, it rotates so that it coincides with the coordinate system XC. Angle aC determines the rotation about the XM-axis, angle bC the rotation about the YM-axis, and angle cC the rotation about the ZM-axis. These angles must be applied to the original coordinate system XM. The location of a coordinate system XC comprises the position and the rotation in relation to a coordinate system XM, so the location is determined by the vector lC, which has six values:

    lC = (xM, yM, zM, aC, bC, cC)          Eq. 4.1

The values xM, yM, and zM give the position in the reference coordinate system XM and the angles aC, bC, and cC the orientation. It is possible to write the orientation of a

coordinate system XC in relation to a coordinate system XM with the aid of a 3x3 rotation matrix. Rotation matrices consist of orthogonal unit vectors, for which the relation of Eq. 4.2 holds. The rotation matrix M can be composed from elemental 3x3 rotation matrices for the three orientation angles aC, bC, and cC: the rotation by aC about the XM-axis is written Rx(aC), the rotation by bC about the YM-axis Ry(bC), and so forth.

Figure 4.1: Six Degrees of Freedom

Figure 4.1 shows the three axes XM, YM, and ZM for the coordinate system XM. Rotation angles aC, bC, and cC are attached to the axes. The reference coordinate system XM can be moved in the direction of the three axes to obtain the coordinate system XC. It can also be rotated around the three axes. This means that six degrees of freedom are possible.

Homogeneous transformation uses a 4x4 matrix for rotation and translation. The transformation of the coordinate system XM into the coordinate system XC is written with the homogeneous matrix HM,C. Let the same scene point be given in homogeneous coordinates in both systems; then the relation of Eq. 4.3 holds between the two representations, and Eq. 4.4 always holds. The location of a rigid object in the coordinate system XC and in the coordinate system XM can likewise be represented with the homogeneous matrix HM,C.

Figure 4.2: Conversion between coordinate systems

A further coordinate system XQ is introduced in Figure 4.2. If the relations are given as in Figure 4.2, HM,Q can be computed by chaining the transformations:

    HM,Q = HM,C · HC,Q          Eq. 4.5

Often several coordinate systems are necessary in robotics. For example, the view of a robot is represented with coordinate system XM. Therefore, the origin of XM is the base

of the robot. If the robot is equipped with a sensor such as a camera, it can be used as a second coordinate system XC, whereby the origin of XC represents the camera mounted on the robot; see Figure 4.3.

Figure 4.3: Co-ordinate Systems for a Mobile Robot

For example, the mounted camera can be used for depth estimation. This can be done by taking two images from different positions: the depth of the scene can then be computed from the coordinates of the camera's two positions. For more information, see [5].
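The coordinate-system chaining of Section 4.3.2 can be written directly in MATLAB. The sketch below builds 4x4 homogeneous matrices from a rotation angle and a translation and chains a robot-to-camera and a camera-to-object transform; all numeric values are invented for illustration and do not describe our vehicle.

    % Homogeneous transforms: chain robot->camera and camera->object.
    rotz  = @(c) [cos(c) -sin(c) 0; sin(c) cos(c) 0; 0 0 1];   % elemental rotation about Z
    makeH = @(R, t) [R, t(:); 0 0 0 1];                        % assemble a 4x4 homogeneous matrix

    H_MC = makeH(rotz(10*pi/180), [0.10; 0.00; 0.25]);  % camera pose in robot frame XM (assumed)
    H_CQ = makeH(rotz(-5*pi/180), [0.00; 0.05; 0.80]);  % object pose in camera frame XC (assumed)

    H_MQ = H_MC * H_CQ;              % object pose in the robot frame (cf. Eq. 4.5)
    pQ   = [0; 0; 0; 1];             % origin of XQ in homogeneous coordinates
    pM   = H_MQ * pQ;                % the same point expressed in XM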

4.4 H Bridge

An H-bridge is an electronic circuit which enables DC electric motors to be run forwards or backwards. These circuits are often used in robotics. H-bridges are available as integrated circuits, or can be built from separate components.

Figure 4.4 (a): H-Bridge Circuit


Figure 4.4 (b): H-Bridge Circuit Operating

The term "H-bridge" is derived from the typical graphical representation of such a circuit. An H-bridge is built with four switches (solid-state or mechanical). When the switches S1 and S4 (according to the first figure) are closed (and S2 and S3 are open) a positive voltage will be applied across the motor. By opening S1 and S4 switches and closing S2 and S3 switches, this voltage is reversed, allowing reverse operation of the motor. Using the nomenclature above, the switches S1 and S2 should never be closed at the same time, as this would cause a short circuit on the input voltage source. The same applies to the switches S3 and S4. This condition is known as shoot-through. A solid-state H-bridge is typically constructed using reverse polarity devices (i.e., PNP BJTs or P-channel MOSFETs connected to the high voltage bus and NPN BJTs or Nchannel MOSFETs connected to the low voltage bus). The most efficient MOSFET designs use N-channel MOSFETs on both the high side and low side because they typically have a third of the ON resistance of P-channel MOSFETs. This requires a more complex design since charge pump circuits must be used to drive the gates of the high side MOSFETs. However, integrated circuit MOSFET drivers like the Harris Semiconductor HIP4081A make this easy.

4.5 Servo Motors

A servo is a small device with an output shaft. The shaft can be positioned to specific angular positions by sending the servo a coded signal. As long as the coded signal exists on the input line, the servo will maintain the angular position of the shaft; as the coded signal changes, the angular position of the shaft changes. In practice, servos are used in radio-controlled airplanes to position control surfaces like elevators and rudders. They are also used in radio-controlled cars, puppets, and of course, robots.

Figure 4.5: Servo Motor

Servos are extremely useful in robotics. The motors are small, as can be seen in the picture above, have built-in control circuitry, and are extremely powerful for their size. A standard servo such as the Futaba S-148 has 42 oz/in of torque, which is quite strong for its size. It also draws power proportional to the mechanical load; a lightly loaded servo therefore does not consume much energy. A servo has three wires that connect to the outside world: one for power (+5 volts), one for ground, and a white control wire.


Figure 4.6: Servo Motor Components

The servo motor has some control circuits and a potentiometer (a variable resistor, aka pot) that is connected to the output shaft. In the picture above, the pot can be seen on the right side of the circuit board. This pot allows the control circuitry to monitor the current angle of the servo motor. If the shaft is at the correct angle, then the motor shuts off. If the circuit finds that the angle is not correct, it will turn the motor in the correct direction until the angle is correct. The output shaft of the servo is capable of travelling somewhere around 180 degrees; usually it is somewhere in the 210 degree range, but this varies by manufacturer. A normal servo is used to control an angular motion of between 0 and 180 degrees, and is mechanically not capable of turning any farther due to a mechanical stop built onto the main output gear. The amount of power applied to the motor is proportional to the distance it needs to travel. So, if the shaft needs to turn a large distance, the motor will run at full speed; if it needs to turn only a small amount, the motor will run at a slower speed. This is called proportional control. The control wire is used to communicate the angle. The angle is determined by the duration of a pulse that is applied to the control wire. This is called Pulse Coded Modulation. The servo expects to see a pulse every 20 milliseconds (0.02 seconds). The length of the pulse will determine how far the motor turns. A 1.5 millisecond pulse, for

example, will make the motor turn to the 90 degree position (often called the neutral position). If the pulse is shorter than 1.5 ms, the motor will turn the shaft closer to 0 degrees. If the pulse is longer than 1.5 ms, the shaft turns closer to 180 degrees.

Figure 4.7: Pulse and Degree Turns of Motor Shaft

The duration of the pulse dictates the angle of the output shaft (shown as the circle with the arrow).
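A worked version of this pulse-width-to-angle relationship is sketched below in MATLAB. It assumes a 20 ms frame and a roughly 1.0 to 2.0 ms pulse range centred on 1.5 ms for 90 degrees; real servos differ by manufacturer, so the endpoints are illustrative only.

    % Map a desired servo angle (0..180 degrees) to a control pulse width.
    angle_deg = 90;                          % desired shaft angle (example)
    period_ms = 20;                          % the servo expects a pulse every 20 ms
    pulse_ms  = 1.0 + angle_deg / 180;       % 0 -> 1.0 ms, 90 -> 1.5 ms, 180 -> 2.0 ms (assumed endpoints)
    duty      = pulse_ms / period_ms;        % duty cycle of the control signal
    fprintf('Pulse width %.2f ms, duty cycle %.3f\n', pulse_ms, duty);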

4.6 Robotics Future

Over the next few years, autonomous robots will become increasingly sophisticated and able to do more than simply entertain their owners. Extrapolations have shown that a current PC has the computational equivalence of a low-order animal brain, and during the next decade it is likely that PCs will grow in speed to be equivalent to the brain of a higher animal such as a rat. If one considers the mobility and level of intelligence of these animals, it can be seen that there is enormous potential for converting a sophisticated entertainment device into something useful.

It is our contention that the robot designs that will succeed will be those that are adaptable to our environment – and one class of these is legged robots. The legged body form that is most suitable for our environment is a biped – identical in layout and size to us. Unfortunately, the biped is one of the most difficult body-forms to control, and therefore practical consumer autonomous legged robot design needs to evolve towards this goal. The steps needed to achieve a practical and affordable manifestation of the biped are complex and at the cutting edge of robotics – nevertheless it is our contention that these steps are tractable and achievable in the short term. Such an autonomous biped would have the ability to interact with the physical world and integrate into society in a way that has never been seen before in a machine. The initial ability of movement and access to most areas of a domestic environment would quickly be matched by the machine's ability to move or carry objects around within the environment. Already, it is possible to imagine almost limitless uses for a device that can perform such a rudimentary task, in offices and factories as well as the home. Beyond this point, with more sophisticated artificial intelligence and control, comes a vast number of tasks for which the machine can replace a person, including, in a domestic environment: cleaning, tidying, cooking, ironing, mowing, gardening, D.I.Y. repair, building, child monitoring, playing, security – the list is limited only by the imagination. There are also countless commercial applications, including high-risk environments, special effects and security or military applications.

4.7 Some Robots

Figure 4.8 (a): Panoramic View from Mars Rover ‘Spirit’.

Figure 4.8 (b): Famous Humanoid Robot ‘Asimo’ manufactured by Honda.

Figure 4.8 (c): A humanoid robot manufactured by Toyota "playing" a trumpet.

Figure 4.8 (d): The Micro-Robot Explorer, nicknamed "Spider-bot," developed by NASA's Jet Propulsion Laboratory.

– 44 –

CHAPTER 5

5. THE BOT

5.1 General Design and Structure

The robot is a mobile unit controlled wirelessly through a computer. It consists of a remote controlled toy car with a camera mounted on top. The camera and the car's remote control both operate on RF.

Figure 5.1 (a): The Roverbot Vehicle [Front-View]

Figure 5.1 (b): The Roverbot Vehicle [Front-View]

Figure 5.1 (c) : The Roverbot Vehicle [Side-View]

5.2 Hardware Specifications

5.2.1 Camera Specs

A wireless RF-based camera sold by JMK is used in the project to transmit live video. Its specifications are:

Image pickup device:   1/3" or 1/4" CMOS
TV system:             PAL/CCIR, NTSC/EIA
Definition:            380 TV lines
Scan frequency:        PAL/CCIR: 50 Hz, NTSC/EIA: 60 Hz
Min. illumination:     3 LUX
Output power:          50 mW - 200 mW
Output frequency:      900 MHz - 1200 MHz
Power supply:          DC +6 ~ 12 V

5.2.2 Hardware modules used

Figure 5.2: Roverbot Remote-Control Circuitry

The two circuits above are part of the remote-control circuitry, which operates over RF. The real brain of the circuit is the SCTX2B IC. It operates on active-low signaling, with separate pins for the Forward, Reverse, Left and Right commands.

5.2.3 Integration

Figure 5.3: Integration of Roverbot Vehicle and Base-Station Control

• Camera sends the video stream to the computer.
• Software processes the video stream through scene analysis.
• Software generates signals on the parallel port.
• Parallel port connects to an interface circuit.
• Interface circuit connects to the remote control.
• Remote controller communicates with the vehicle.

– 47 –

Figure 5.4: Control Flow Chart for the Roverbot Vehicle

Running:
• Wireless camera transmits to the RF receiver on the terminal.
• RF receiver connects to the frame grabber card.
• Frame grabber card transmits data through PCI.
• MATLAB acquires video through the Image Acquisition Toolbox.

– 48 –

CHAPTER 6

6. THE INTERFACE

All communication between the mobile robot and the base-station needed a simple yet efficient interfacing mechanism, one that would enable us to exercise adequate control over the robotic vehicle. Of the several choices, the Parallel Printer Port (LPT1) was chosen for its simplicity and the adequate data-transfer capability required to control our robot. The interface was, therefore, a custom-made electronic circuit designed to communicate with both the Parallel Port and the robot's remote-control. The remote-control for the robot was modified to work in conjunction with the interface circuit that transmits data between the two. A detailed explanation of the circuit follows.

6.1 The Parallel Port

The Parallel Port is the most commonly used port for interfacing home-made projects. This port allows the input of up to 9 bits or the output of 12 bits at any one given time, thus requiring minimal external circuitry to implement many simpler tasks. The port is composed of 4 control lines, 5 status lines and 8 data lines. It is found at the back of the PC as a DB-25 pin female connector. Newer parallel ports are made in accordance with the IEEE 1284 standard.

Figure 6.1: The DB-25 Female Parallel Port Connector

– 49 –

6.1.1 Hardware

Figure 6.2: Parallel Port Pin Configuration

The Parallel Port pins can be further sub-divided into three ports, namely:

Data Port or Port 0 (Pins 2-9): used for data output.
Status Port or Port 1 (Pins 10-13 and Pin 15): used for status input.
Control Port or Port 2 (Pins 1, 14, 16, 17): a read/write port.

The Parallel Port (LPT1) address on most modern computers is 0x378. A tabular representation of the Parallel Port pins follows.

Pin     Description          Direction
1       Strobe               In/Out
2       Data 0               Output
3       Data 1               Output
4       Data 2               Output
5       Data 3               Output
6       Data 4               Output
7       Data 5               Output
8       Data 6               Output
9       Data 7               Output
10      Acknowledge          Input
11      Busy                 Input
12      Paper Empty          Input
13      Select               Input
14      Autofeed             Output
15      Error                Input
16      Initialize Printer   Output
17      Select Input         Output
18-25   Ground               Gnd

6.1.2 Software

I. Parallel Port on Linux

See Appendix C for a code example.

II. Parallel Port on Windows XP

Windows XP does not allow direct port access to user-mode processes. It requires installation of a driver to allow port access to a process. The driver runs in kernel mode and provides a bridge which can be used by user-mode processes to access the port. For an example of how to do this using a driver called PortTalk, see Appendix C.

6.2 Interface Design

The interface provides the glue between the robot and the scene analysis software running on the base-station. It is used to control the Roverbot’s movements. The software uses the building blocks of this module to send appropriate motion commands to the mobile unit. It consists of a parallel port interface coupled with the remote-control which operates the Roverbot.

– 51 –

Figure 6.3: Interface Design Summary

There are separate MATLAB routines corresponding to each logical block shown in Figure 6.3. For example, there is an initialization routine that clears the computer's Parallel Port buffer and, in the process, acquires the output lines of the Data Port used to communicate with the Roverbot. Once initialized, the software waits for a control signal specifying which direction to move the Roverbot in. The control signals pertaining to the different directions of motion of the Roverbot are defined later in this chapter.

6.2.1 Interface Hardware

In order to send commands pertaining to the movement of the Roverbot, a hardware interface has been developed between the remote-control of the vehicle and the scene analysis software using the Parallel Port (LPT1). The software reads/writes a byte of data to Port 0 of LPT1 through this interface.

– 52 –

Figure 6.4: Interface Circuit Hardware

This interface consists of three stages:

• The first stage contains the female parallel port connector (to which the computer's parallel port connects), which in turn is connected to a number of components that collectively act as a buffer.
• The next stage consists of transistors preceded by resistors; zener diodes provide protection against reverse current, which could otherwise damage the parallel port.
• The third stage provides the connection between the interface circuitry and the remote control that drives the car. This connection is provided using relays with Normally-Open and Normally-Closed contacts, which help determine the current status of each bit of the port.

The complete schematic of the interface circuitry is provided below:

Figure 6.5: Interface Circuit Schematic

The circuit is driven by a 12 V AC supply. Fed from a step-down transformer, the supply is followed by a capacitor that smooths it. Next are the eight relays, each in either a Normally-Open or a Normally-Closed condition. Each relay is followed by a switch (labeled in Figure 6.5) which is used to read the status of a particular bit of the Data Port (Port 0). In our configuration, the lower four bits (bits 0-3) are used to send commands or control signals to the Roverbot (explained later), so only one set of switches in Figure 6.5 is utilized while the other is idle. The four switches used are connected to the corresponding movement terminals on the remote-control circuit of Figure 5.2, thus completing our custom interface circuit coupled with the remote-control for the Roverbot. Each control signal generated by the scene analysis software running on the base-station is written to the Data Port (Port 0) of the Parallel Port and, in turn, sent to the Roverbot via the remote-control.

I. Port Bit Combinations Used

In order to convey the four commands to the Roverbot, the lower four bits of LPT1's Port 0 are used. Reading from LSB towards MSB, each bit acts as a trigger for the Forward, Backward, Right and Left commands, with a '1' signifying activation of motion and a '0' signifying de-activation. The bit combinations used by the software to send commands are described as follows:

Command    On/Off   Bit Combination
Forward    On       XXXXXX01
Forward    Off      XXXXXX00
Backward   On       XXXXXX10
Backward   Off      XXXXXX00
Right      On       XXXX01XX
Right      Off      XXXX00XX
Left       On       XXXX10XX
Left       Off      XXXX00XX
BrakeAll   -        XXXX0000
BrakeH     -        XXXXXX00
BrakeV     -        XXXX00XX

where X denotes don't care. Note that for Forward motion, the bit signifying Backward motion is also set to zero to avoid conflicts, and vice versa. The same is the case with the Left and Right motion bits.
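As an illustration of how these combinations are composed in software (a sketch only, using the bit constants defined in Appendix C and ignoring the buffering logic of the actual routines):

% Illustrative sketch: build the "Forward On" combination (XXXXXX01)
% while forcing the Backward bit to zero, using the Appendix C constants.
UP_ON    = hex2dec('01');    % Forward bit mask
DOWN_OFF = hex2dec('FD');    % mask that clears the Backward bit
current  = hex2dec('00');    % assume all pins are currently low
command  = bitor(bitand(current, DOWN_OFF), UP_ON);
disp(dec2bin(command, 8));   % prints 00000001, i.e. XXXXXX01 with X = 0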

6.2.2 Interface Software

The software routines have been written in MATLAB. An 8-bit buffer mirrors the pin status of Port 0 of LPT1; by storing the last-sent command in a buffer variable, the software ensures that the same command is never written to the port's pins twice. The Data Acquisition Toolbox in MATLAB provides access to digital I/O subsystems through a Digital I/O object, which can be associated with a parallel port or with a digital I/O subsystem on a data acquisition board. The MATLAB routines controlling robot motion are:

1. MoveForward
2. MoveBackward
3. MoveLeft
4. MoveRight
5. BrakeH
6. BrakeV
7. BrakeAll

Before these routines can be used, the digital I/O object needs to be initialized and all pins of LPT1 Port 0 need to be reset. Two routines, ppo_start and ppo_stop, therefore instantiate and clear the digital I/O object from memory [Appendix C].
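As a usage illustration, the routines are called in sequence from the control software; the exact sequence below is only a sketch, and the pause durations are hypothetical, since the real system is driven by the scene analysis results.

% Initialize the parallel port digital I/O object (see Appendix C)
ppo_start;

% Drive forward while steering right for roughly one second
MoveForward(1);
MoveRight(1);
pause(1);        % hypothetical timing for illustration

% Straighten the steering, keep moving forward
MoveRight(0);
pause(1);

% Stop all motion and release the port
BrakeAll;
ppo_stop;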

– 56 –

Roverbot Motion Controller Simulink Model

Figure 6.6: Simulink Model for Roverbot Motion Control

– 57 –

CHAPTER 7

7. SCENE ANALYSIS

7.1 Design Summary

This portion of the project is developed in MATLAB using the Video and Image Processing Blockset.

Figure 7.1: Scene Analysis Design Summary

Live video is grabbed through the frame grabber card and acquired in MATLAB through the Image Acquisition Toolbox. Two separate operations, Optical Flow and Pattern Matching, are applied to track the object, and a new direction vector for the vehicle is calculated.

7.2 Object Tracking using Lucas-Kanade

7.2.1 Overview

The Simulink model (Figure 7.2) consists of several blocks and sub-systems which make up the entire application. The input video is acquired through the Image Acquisition Toolbox via a frame grabber card, and video processing is done frame by frame through the Simulink model (Figure 7.2). First, object motion is detected with the Lucas-Kanade algorithm, and each moving object is surrounded by a green bounding box. A threshold (Figure 7.3) filters out very small objects, and the local nature of Lucas-Kanade handles many ego-motion problems, which suits our application well. After the bounding boxes are drawn (Figure 7.5), tracking starts from the centre of the screen, where the object we wish to track is located. The objects are filtered for size (Figure 7.3); for the remaining objects, centroids are calculated and a vector is drawn from the camera position to the tracked object (Figure 7.6). The white line (Figure 7.8) is the vector path the vehicle must follow in order to track the object. The system then uses pattern matching to avoid random jumps between different objects that appear in the view. If the object leaves the vehicle's view at any time, the vehicle moves backward to reacquire its target. The vehicle stops when it comes within a certain vector distance of the object it is tracking.

7.2.2 Top Level Model

[Simulink diagram: the video source (From Multimedia File, 120x160 at 15 fps) feeds an R'G'B' to Intensity conversion, followed by the Optical Flow |V|^2 (Lucas-Kanade) block, the Thresholding and Region Filtering subsystem and the Display Results subsystem; a Stop Simulation block halts the model at end-of-file.]

Figure 7.2: Top-Level Simulink Model for Object Tracking using Optical Flow (Lucas-Kanade)

The first block in the sequence, named "From Multimedia File", is used for acquiring the image from a video file or a camera attached to the computer. The image is then converted to grayscale using the second block in the sequence, "RGB to Intensity". The third block in the sequence, labeled "Optical Flow (Lucas-Kanade)", implements the Lucas-Kanade algorithm for motion detection; it comes from the Video and Image Processing Blockset (section 2.2.1.II.b). The Lucas-Kanade block returns the squared magnitude of the velocity vectors for all image pixels. The resulting pseudo-image (the square of the magnitude of the velocity vectors forms a grayscale image) is then fed into the "Thresholding and Region Filtering" block, which is described in the next section.

7.2.3 Thresholding and Region Filtering

[Simulink diagram: the squared-magnitude input is compared against a 0.0012 threshold; the resulting binary image is passed to the Region Filtering subsystem, which outputs the thresholded image, region count, bounding boxes and centroids.]

Figure 7.3: Simulink Block for Thresholding and Region Filtering

The "Thresholding and Region Filtering" block implements the logic that determines which pixel regions are to be considered moving and which stationary. The first block in the sequence is the comparison block, which compares the input (the square of the magnitude of the velocity vector for each pixel) against a predefined threshold value. This value is application and environment dependent and influences the accuracy with which moving objects are detected and tracked. The filtered image, which is binary (black and white), is fed into the "Region Filtering" block, described next.

7.2.4 Region Filtering

Figure 7.4: Simulink Block for Region Filtering

The "Region Filtering" block implements the logic which filters moving objects based on their size. The first block in the sequence is the "Blob Analysis" block, which returns different attributes of the blobs (regions in the image which are treated as one object) in the input image, e.g., Area, Centroid, Bounding Box, etc. Here it is used to calculate the bounding boxes of the moving blobs in the input image. These bounding boxes are then fed into the "Merge Box" block, which merges boxes that intersect each other. There is also a "Centroid Calculation" block, which is explained in the next section. A rough script-level sketch of this thresholding and size-filtering stage is given below.
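The sketch below is a rough MATLAB analogue of the thresholding and blob-filtering stages, not the Simulink model itself; regionprops stands in for the Blob Analysis block, and the placeholder input and minimum-area value are hypothetical.

% Rough script-level analogue of Figures 7.3 and 7.4 (illustrative only).
sqMag = rand(120, 160) * 0.003;            % placeholder for the squared-velocity image
bw = sqMag >= 0.0012;                      % threshold value from Figure 7.3
stats = regionprops(bw, 'Area', 'BoundingBox', 'Centroid');   % blob attributes
minArea = 20;                              % hypothetical size filter
moving = stats([stats.Area] >= minArea);   % keep only sufficiently large blobs
bboxes = vertcat(moving.BoundingBox);      % each row: [x y width height]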

– 62 –

7.2.5 Centroid Calculation

Figure 7.5: Simulink Block for Centroid Calculation The “Centroid Calculation” block calculates the centroids of objects in the image. The input port of this subsystem is fed with an array of bounding boxes (calculated by the Blob Analysis block). Each bounding box is specified by four numbers, x and y coordinates of the top left corner, and the width and height of the box. The subsystem partitions these four components and then finds the center of each box. This is taken as the centroid of the blob whose bounding box was input to the subsystem.
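The center-of-box computation amounts to the following sketch (the bounding box values are hypothetical):

% A bounding box is [x y width height]; its center is taken as the blob centroid.
bbox = [40 30 16 12];                                    % hypothetical bounding box
centroid = [bbox(1) + bbox(3)/2, bbox(2) + bbox(4)/2];   % center of the box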

– 63 –

7.2.6 Display Results

[Simulink diagram: the thresholded image, region count, bounding boxes, centroids and original video are routed into the Display Bounding Boxes subsystem and three To Video Display blocks (Threshold, Results and Original); a MATLAB Fcn block routes frame-size data to the workspace.]

Figure 7.4: Simulink Block for Video Display

As its name implies, the "Display Results" block is used to display the results of the calculations done in the previous stages. This block consists of three "Video Display" blocks, which represent the computer monitor, and a subsystem named "Display Bounding Boxes". This subsystem contains the logic to render bounding boxes and other parameters in textual form onto the final result.
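Outside Simulink, the same kind of annotation can be sketched with core MATLAB graphics; this is illustrative only, and frame, bbox and centroid are hypothetical variables.

% Overlay a green bounding box and an X-mark on a frame for inspection.
frame = zeros(120, 160);                       % placeholder image
bbox = [40 30 16 12];                          % hypothetical bounding box [x y w h]
centroid = [bbox(1)+bbox(3)/2, bbox(2)+bbox(4)/2];
imshow(frame); hold on;
rectangle('Position', bbox, 'EdgeColor', 'g'); % green bounding box
plot(centroid(1), centroid(2), 'rx');          % X-mark at the centroid
hold off;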

7.2.7 Display Bounding Boxes

[Simulink diagram: the subsystem chains Draw Shapes (rectangles), Draw Markers (X-marks), Draw Lines and Insert Text blocks with the Line Vector, NavigationLogic and ObjectTracker (Embedded MATLAB Function) subsystems to produce the annotated output video.]

Figure 7.5: Simulink Model for Displaying Bounding Boxes

The first block in the sequence, "Draw Shapes", is the Simulink block responsible for overlaying different shapes, e.g., circles and rectangles, on a video; it is used to draw the bounding boxes on moving objects. The "Draw Markers" subsystem is used to draw the "X" mark at the location of the centroids of moving objects. The next block, "Draw Vector", draws a vector from the current position of the vehicle to the position of the target (which is determined by its centroid). This is followed by a couple of "Insert Text" blocks that overlay useful textual information on the resulting video, such as the current position of the target and the direction the vehicle should move in to reach it. The other subsystems are explained in the following sections.

7.2.8 Line Vector Calculation

[Simulink diagram: the subsystem selects the first centroid from the centroid array, extracts its row and column via Submatrix blocks, and concatenates them with a constant reference point to form the line vector.]

Figure 7.6: Simulink Block for Line Vector Calculation

The "Line Vector Calculation" subsystem calculates the vector from the current vehicle position to the target. The first block in the model selects the first target in the list of targets. It then outputs the row and column of the end-point of the vector that is to be drawn. The display subsystem (7.2.6) then draws the arrow from the reference point to the target. In script form, the two endpoints amount to the short sketch below.
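This is only an illustration of the endpoint layout; VidSize and the centroid ordering follow the embedded code in Section 7.2.10, and the centroid value is hypothetical.

% The vector runs from the bottom-centre of the frame (the vehicle's
% reference point) to the centroid of the tracked object.
VidSize  = [120 160];          % rows and columns of the video frame
centroid = [45 80];            % hypothetical target centroid
linevector = [VidSize(1), VidSize(2)/2, centroid(1), centroid(2)];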

7.2.9 Navigation Logic

[Simulink diagram: the centroid column is compared with the screen-centre column by a relational operator; a Switch block then selects between 'LEFT' and 'RIGHT' Insert Text overlays on the output video.]

Figure 7.7: Simulink Block for Navigation Logic

The "Navigation Logic" block is used to determine whether the vehicle should move left or right relative to its current position. It takes the x-coordinate of the centroid calculated in the previous stage and compares it with the x-coordinate of the vehicle's current position; the middle of the last row of the image is taken as the reference point. The string "LEFT" or "RIGHT" is overlaid on the final display depending on whether the vehicle should move to the left or to the right of its current position.

7.2.10 Object Tracker Embedded Code

function [CentroidCoords,linevector] = ObjectTracker(CentroidArray,booleof,StartCoords,VidSize)
persistent LastCentroid;

% Re-initialize the tracker on the first call or at end-of-file
if isempty(LastCentroid) || booleof == true
    LastCentroid = StartCoords;
end

z = LastCentroid;

% Distance of every candidate centroid from the last tracked position
allx = CentroidArray(1,:) - z(1);
ally = CentroidArray(2,:) - z(2);
myvector = sqrt(allx.^2 + ally.^2);
[m, i] = min(myvector);
centroid = CentroidArray(:,i(1));

% Accept the nearest candidate only if it lies within the jump threshold
if myvector(i(1)) < 30
    LastCentroid = centroid;
    CentroidCoords = centroid;
else
    CentroidCoords = z;
end

% Vector from the bottom-centre of the frame to the tracked centroid
linevector = [ VidSize(1) , VidSize(2)/2 , CentroidCoords(1) , CentroidCoords(2) ];

The above code segment is contained in the Embedded MATLAB Function block of Figure 7.5. It implements the algorithm that picks out the current target from a list of potential targets. The algorithm works by comparing the position of each candidate (specified by its centroid) with the last known position of the target being tracked. If the difference (measured by Euclidean distance) falls within a threshold value, that object is taken to be the target and the other objects are discarded. The new position of the target, that is, the current position of the object selected after filtering, is then stored for use in the next iteration.
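As a usage illustration with hypothetical values (three candidate centroids, with the tracker previously locked near (60, 80) in a 120x160 frame):

% Columns of CentroidArray are candidate centroids [row; column].
CentroidArray = [55 10 100;
                 78 20 140];
[coords, vec] = ObjectTracker(CentroidArray, false, [60; 80], [120 160]);
% coords is [55; 78] (the candidate nearest the previous lock);
% vec is [120 80 55 78] (bottom-centre of the frame to that centroid).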

– 68 –

Figure 7.8: A sampled frame from Roverbot’s Camera

Figure 7.9: Another frame from Roverbot’s Camera

Figure 7.10: Another frame showing that the object has come closer.

Figure 7.11: A frame showing that the Roverbot has changed its direction.

– 69 –

CONCLUSION

Although the complete robot package is ready for deployment, there is still a lot of room for improvement on all fronts, namely the Interface, the Robot itself and the Scene Analysis algorithms. As far as the interface goes, it could be upgraded from the Parallel Port to the Universal Serial Bus (USB), which has become the norm today. The interface could also be extended to provide finer control of the mobile robot in terms of speed control and degree-based left/right movements.

Future Directions

1. Smarter target tracking using pattern matching

In addition to motion tracking, pattern matching can be implemented as a supplemental algorithm to the baseline approach of centroid approximation. This would give a significant improvement over the current algorithm. A hypothetical scenario could be:

a) Track all moving targets and identify the object we need to lock onto, using Lucas-Kanade (LK) and Pattern Matching (PM) separately.

b) If LK fails and the PM result is within the acceptable range, we acquire a new target.

c) If PM fails, we supply the pattern-match engine with a new image pattern taken from the object we are currently tracing through LK. This situation typically arises because of motion in the third dimension, perpendicular to the plane we are actually observing.

2. Obstacle avoidance using Haar classifiers and pattern matching

Haar classifiers can be used to recognise known objects in an image. This way, obstacles can not only be avoided, but a report on the nature of the objects can also be stored on the server side. This is an important feature for live video surveillance, allowing alerts to be sent out for particular items.

The Haar classifiers can also be used to track known targets. In that case, the vehicle will first search its environment for targets that match the pattern, then lock onto them and apply LK tracking to them.

3. Distributed processing of frames over multiple processing nodes

The aforementioned techniques can be deployed using parallel processing, which will make the system much more efficient. MATLAB offers a Distributed Computing Toolbox in releases R2006a and later. This toolbox allows the distribution and control of jobs across multiple workstations running MATLAB.

– 71 –

Appendix A: Algorithms

A.1 Lucas-Kanade Method

For the original publication, see [10].

Optical flow methods try to calculate the motion between two image frames, taken at times t and t + δt, at every pixel position. Since a pixel at location (x,y,z,t) with intensity I(x,y,z,t) will have moved by δx, δy, δz and δt between the two frames, the following image constraint equation can be given:

I(x,y,z,t) = I(x + δx, y + δy, z + δz, t + δt)    (Eq. A.1)

Assuming the movement to be small enough, we can develop the image constraint at I(x,y,z,t) with a Taylor series to get:

I(x + δx, y + δy, z + δz, t + δt) = I(x,y,z,t) + (∂I/∂x)δx + (∂I/∂y)δy + (∂I/∂z)δz + (∂I/∂t)δt + H.O.T.    (Eq. A.2)

where H.O.T. means the higher order terms, which are small enough to be ignored. From these equations we obtain:

(∂I/∂x)δx + (∂I/∂y)δy + (∂I/∂z)δz + (∂I/∂t)δt = 0    (Eq. A.3)

or

(∂I/∂x)(δx/δt) + (∂I/∂y)(δy/δt) + (∂I/∂z)(δz/δt) + ∂I/∂t = 0    (Eq. A.4)

which results in

(∂I/∂x)Vx + (∂I/∂y)Vy + (∂I/∂z)Vz + ∂I/∂t = 0    (Eq. A.5)

where Vx, Vy and Vz are the x, y and z components of the velocity or optical flow of I(x,y,z,t), and ∂I/∂x, ∂I/∂y, ∂I/∂z and ∂I/∂t are the derivatives of the image at (x,y,z,t) in the corresponding directions. Writing Ix, Iy, Iz and It for these derivatives, we have

IxVx + IyVy + IzVz = −It    (Eq. A.6)

or

∇I · V = −It    (Eq. A.7)

This is an equation in three unknowns and cannot be solved as such; this is known as the aperture problem of optical flow algorithms. To find the optical flow, another set of equations is needed, given by some additional constraint. The solution given by Lucas and Kanade is a non-iterative method which assumes a locally constant flow. Assuming that the flow (Vx,Vy,Vz) is constant in a small window of size m×m×m with m > 1, centered at voxel (x,y,z), and numbering the pixels within the window 1...n, we get the set of equations:

Ix1·Vx + Iy1·Vy + Iz1·Vz = −It1
Ix2·Vx + Iy2·Vy + Iz2·Vz = −It2
...
Ixn·Vx + Iyn·Vy + Izn·Vz = −Itn    (Eqs. A.8)

With this we get more than three equations for the three unknowns and thus an over-determined system. In matrix form:

[Ix1 Iy1 Iz1; Ix2 Iy2 Iz2; ...; Ixn Iyn Izn] · [Vx; Vy; Vz] = [−It1; −It2; ...; −Itn]    (Eqs. A.9)

or

A·v = −b    (Eq. A.10)

To solve the over-determined system of equations we use the least squares method:

A^T·A·v = A^T·(−b)    (Eq. A.11)

or

v = (A^T·A)^-1 · A^T·(−b)    (Eq. A.12)

which, written out, gives

[Vx; Vy; Vz] = [ΣIxi^2 ΣIxiIyi ΣIxiIzi; ΣIxiIyi ΣIyi^2 ΣIyiIzi; ΣIxiIzi ΣIyiIzi ΣIzi^2]^-1 · [−ΣIxiIti; −ΣIyiIti; −ΣIziIti]    (Eq. A.13)

This means that the optical flow can be found by calculating the derivatives of the image in all four dimensions. A weighting function W(i,j,k) should be added to give more prominence to the center pixel of the window; Gaussian functions are preferred for this purpose, though other functions or weighting schemes are possible.

Besides computing local translations, the flow model can also be extended to affine image deformations. When applied to image registration, such as stereo matching, the Lucas-Kanade method is usually carried out in a coarse-to-fine iterative manner: the spatial derivatives are first computed at a coarse scale in scale-space (or a pyramid), one of the images is warped by the computed deformation, and iterative updates are then computed at successively finer scales.
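For concreteness, the 2-D least-squares solution (Eq. A.12 with the z terms dropped) can be sketched in MATLAB as follows. This is an illustration only, not the code used in the Simulink model: the function name, the unweighted window and the use of gradient for the derivatives are all simplifying assumptions.

function [Vx, Vy] = lk_flow(frame1, frame2, x, y, w)
% Estimate the optical flow at pixel (x, y) from two grayscale frames using
% the Lucas-Kanade least-squares solution over a (2w+1)x(2w+1) window.
frame1 = double(frame1);
frame2 = double(frame2);
[Ix, Iy] = gradient(frame1);            % spatial derivatives
It = frame2 - frame1;                   % temporal derivative
rows = y-w:y+w;                         % local window (assumed inside the image)
cols = x-w:x+w;
A = [reshape(Ix(rows,cols),[],1), reshape(Iy(rows,cols),[],1)];
b = -reshape(It(rows,cols),[],1);
v = (A'*A) \ (A'*b);                    % least-squares solution
Vx = v(1);
Vy = v(2);
end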

One of the characteristics of the Lucas-Kanade algorithm, and of other local optical flow algorithms, is that it does not yield a very high density of flow vectors, i.e. the flow information fades out quickly across motion boundaries and the inner parts of large homogeneous areas show little motion. Its advantage is its comparative robustness in the presence of noise.

A.2 Horn-Schunck Method

For the original publication, see [9].

The Horn-Schunck method of estimating optical flow is a global method which introduces a global constraint of smoothness to solve the aperture problem (see the Lucas-Kanade method for further description). A global energy function is sought to be minimized; this function is given as:

E = ∫∫∫ [ (IxVx + IyVy + IzVz + It)^2 + α^2 ( |∇Vx|^2 + |∇Vy|^2 + |∇Vz|^2 ) ] dx dy dz    (Eq. A.14)

where Ix, Iy, Iz and It are the derivatives of the image intensity values along the x, y, z and t dimensions, and V = (Vx, Vy, Vz) is the optical flow vector (Eq. A.15). The parameter α is a regularization constant; larger values of α lead to a smoother flow. This function can be minimized by solving the corresponding Euler-Lagrange equations, which are given as follows:

Ix (IxVx + IyVy + IzVz + It) − α^2 ΔVx = 0    (Eq. A.16)
Iy (IxVx + IyVy + IzVz + It) − α^2 ΔVy = 0    (Eq. A.17)
Iz (IxVx + IyVy + IzVz + It) − α^2 ΔVz = 0    (Eq. A.18)

where Δ denotes the Laplace operator, so that

Δ = ∂^2/∂x^2 + ∂^2/∂y^2 + ∂^2/∂z^2    (Eq. A.19)

Solving these equations with Gauss-Seidel for the flow components Vx, Vy, Vz gives an iterative scheme (Eqs. A.20, A.21 and A.22), where the superscript k+1 denotes the next iteration, which is to be calculated, and k is the last calculated result. ΔVi can be obtained as:

ΔVi = Σ over N(p) of [ Vi(N(p)) − Vi(p) ]    (Eq. A.23)

where N(p) are the six neighbors of the pixel p. An alternative algorithmic implementation, based upon the Jacobi method, is given as:

Vx^(k+1) = <Vx>^k − Ix (Ix·<Vx>^k + Iy·<Vy>^k + Iz·<Vz>^k + It) / (α^2 + Ix^2 + Iy^2 + Iz^2)    (Eq. A.24)
Vy^(k+1) = <Vy>^k − Iy (Ix·<Vx>^k + Iy·<Vy>^k + Iz·<Vz>^k + It) / (α^2 + Ix^2 + Iy^2 + Iz^2)    (Eq. A.25)
Vz^(k+1) = <Vz>^k − Iz (Ix·<Vx>^k + Iy·<Vy>^k + Iz·<Vz>^k + It) / (α^2 + Ix^2 + Iy^2 + Iz^2)    (Eq. A.26)

where <Vx>, <Vy> and <Vz> refer to the averages of Vx, Vy and Vz in the neighborhood of the current pixel position.

Advantages of the Horn-Schunck algorithm include that it yields a high density of flow vectors, i.e. the flow information missing in inner parts of homogeneous objects is filled in from the motion boundaries. On the negative side, it is more sensitive to noise than local methods.

– 77 –

Appendix B: More Bot Evaluations

Showcase items
• New 2CH RTF SYMA DragonFly Radio Remote Control Helicopter, $36.95, ships next day: http://www.elitehobby.com/rc5-1012.html
• Desktop Rover: http://lego-mindstormsroboticskits.stores.yahoo.net/desktoprover1.html
• Brink Rover: http://www.brink.com/brink_rover.php
• http://www.hobbytron.com/

Miscellaneous items
• RC Cars: http://www.elitehobby.com/rc-toys-rc-cars.html
• SYMA Dragonfly: http://www.egrandbuy.com/newsydrrarec.html
• Digital Programmable Remote Copycat: http://www.hobbytron.com/G-603A.html
• Silver fly: http://www.nitroplanes.com/sidr2rarecoe.html
• LEGO Mindstorm: http://www.hobbytron.com/ProgrammableRobotKits.html

– 78 –

Appendix C: MATLAB Hardware Routines

Parallel Port on Linux

/* headers required for fprintf(), exit(), ioperm() and outb() */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/io.h>

#define base  0x378   /* printer port base address */
#define value 255     /* numeric value to send to printer port */

int main(int argc, char **argv)
{
    /* request access to the port (requires root privileges) */
    if (ioperm(base, 1, 1)) {
        fprintf(stderr, "Couldn't get the port at %x\n", base);
        exit(1);
    }

    outb(value, base);   /* write the value to the data port */
    return 0;
}

Parallel Port on Windows

/* headers required for printf(), puts() and getch() */
#include <stdio.h>
#include <conio.h>
#include "pt_ioctl.c"

void __cdecl main(void)
{
    unsigned char value;

    printf("IoExample for PortTalk V2.0\nCopyright 2001 Craig Peacock\nhttp://www.beyondlogic.org\n");
    OpenPortTalk();

    value = 0xFF;
    printf("Value sent: 0x%02X \n", value);
    outportb(0x378, value);
    value = inportb(0x378);
    printf("Value returned = 0x%02X \n", value);

    puts("Press any key to continue...");
    getch();

    value = 0x00;
    printf("Value sent: 0x%02X \n", value);
    outp(0x378, 0x00);
    value = inp(0x378);
    printf("Value returned = 0x%02X \n", value);

    ClosePortTalk();
}

PPO_START Routine

% Declare our global variables
global ppo;
global LEFT_ON;  global LEFT_OFF;
global RIGHT_ON; global RIGHT_OFF;
global UP_ON;    global UP_OFF;
global DOWN_ON;  global DOWN_OFF;
global Buffer;

% Set constants (the commented vectors show each mask as a bit pattern)
LEFT_ON   = hex2dec('08');   % [0 0 0 0 1 0 0 0]
LEFT_OFF  = hex2dec('F7');   % [1 1 1 1 0 1 1 1]
RIGHT_ON  = hex2dec('04');   % [0 0 0 0 0 1 0 0]
RIGHT_OFF = hex2dec('FB');   % [1 1 1 1 1 0 1 1]
UP_ON     = hex2dec('01');   % [0 0 0 0 0 0 0 1]
UP_OFF    = hex2dec('FE');   % [1 1 1 1 1 1 1 0]
DOWN_ON   = hex2dec('02');   % [0 0 0 0 0 0 1 0]
DOWN_OFF  = hex2dec('FD');   % [1 1 1 1 1 1 0 1]
Buffer    = hex2dec('00');

% Instantiate our digitalio device object
ppo = digitalio('parallel','LPT1');

% Add output lines and start
ppolines = addline(ppo,0:7,0,'out');
start(ppo);

% Sending all zeros to the port at first
putvalue(ppo.Line(1:8),dec2binvec(Buffer,8));

PPO_STOP Routine

function ppo_stop
global ppo;
stop(ppo);
delete(ppo);
clear ppo

MoveForward Routine

function MoveForward(x)
% Declare globals needed for this operation
global ppo; global DOWN_OFF; global UP_ON; global UP_OFF; global Buffer;

% First, we need to ensure the object is running
if(isrunning(ppo))
    ppoval = getvalue(ppo);                            % get present port pin status
    Buffer = bitand(ppoval,dec2binvec(DOWN_OFF,8));    % AND with DOWN_OFF to force that bit to 0

    % Depending on input param. 'x', use UP_ON or UP_OFF
    if(x==1)
        Buffer = bitor(Buffer,dec2binvec(UP_ON,8));
    elseif(x==0)
        Buffer = bitand(Buffer,dec2binvec(UP_OFF,8));
    end

    % Write the new bits only if they differ from the current pin status
    if(binvec2dec(bitxor(Buffer,ppoval))~=0)
        putvalue(ppo,Buffer);
    end
end

MoveBackward Routine

function MoveBackward(x)
% Declare globals needed for this operation
global ppo; global UP_OFF; global DOWN_ON; global DOWN_OFF; global Buffer;

% First, we need to ensure the object is running
if(isrunning(ppo))
    ppoval = getvalue(ppo);                            % get present port pin status
    Buffer = bitand(ppoval,dec2binvec(UP_OFF,8));      % AND with UP_OFF to force that bit to 0

    % Depending on input param. 'x', use DOWN_ON or DOWN_OFF
    if(x==1)
        Buffer = bitor(Buffer,dec2binvec(DOWN_ON,8));
    elseif(x==0)
        Buffer = bitand(Buffer,dec2binvec(DOWN_OFF,8));
    end

    % Write the new bits only if they differ from the current pin status
    if(binvec2dec(bitxor(Buffer,ppoval))~=0)
        putvalue(ppo,Buffer);
    end
end

MoveLeft Routine

function MoveLeft(x)
% Declare globals needed for this operation
global ppo; global RIGHT_OFF; global LEFT_ON; global LEFT_OFF; global Buffer;

% First, we need to ensure the object is running
if(isrunning(ppo))
    ppoval = getvalue(ppo);                            % get present port pin status
    Buffer = bitand(ppoval,dec2binvec(RIGHT_OFF,8));   % AND with RIGHT_OFF to force that bit to 0

    % Depending on input param. 'x', use LEFT_ON or LEFT_OFF
    if(x==1)
        Buffer = bitor(Buffer,dec2binvec(LEFT_ON,8));
    elseif(x==0)
        Buffer = bitand(Buffer,dec2binvec(LEFT_OFF,8));
    end

    % Write the new bits only if they differ from the current pin status
    if(binvec2dec(bitxor(Buffer,ppoval))~=0)
        putvalue(ppo,Buffer);
    end
end

MoveRight Routine

function MoveRight(x)
% Declare globals needed for this operation
global ppo; global LEFT_OFF; global RIGHT_ON; global RIGHT_OFF; global Buffer;

% First, we need to ensure the object is running
if(isrunning(ppo))
    ppoval = getvalue(ppo);                            % get present port pin status
    Buffer = bitand(ppoval,dec2binvec(LEFT_OFF,8));    % AND with LEFT_OFF to force that bit to 0

    % Depending on input param. 'x', use RIGHT_ON or RIGHT_OFF
    if(x==1)
        Buffer = bitor(Buffer,dec2binvec(RIGHT_ON,8));
    elseif(x==0)
        Buffer = bitand(Buffer,dec2binvec(RIGHT_OFF,8));
    end

    % Write the new bits only if they differ from the current pin status
    if(binvec2dec(bitxor(Buffer,ppoval))~=0)
        putvalue(ppo,Buffer);
    end
end

BrakeAll Routine

function BrakeAll
% Declare globals needed for this operation
global ppo; global Buffer;

% First, we need to ensure the object is running
if(isrunning(ppo))
    ppoval = getvalue(ppo);                                 % get present port pin status
    Buffer = bitand(ppoval,dec2binvec(hex2dec('00'),8));    % AND with all zeros

    % Write the new bits only if they differ from the current pin status
    if(binvec2dec(bitxor(Buffer,ppoval))~=0)
        putvalue(ppo,Buffer);
    end
end

BrakeH Routine

function BrakeH
% Declare globals needed for this operation
global ppo; global UP_OFF; global DOWN_OFF; global Buffer;

% First, we need to ensure the object is running
if(isrunning(ppo))
    ppoval = getvalue(ppo);                            % get present port pin status
    Buffer = bitand(ppoval,dec2binvec(UP_OFF,8));      % clear the Forward bit
    Buffer = bitand(Buffer,dec2binvec(DOWN_OFF,8));    % clear the Backward bit

    % Write the new bits only if they differ from the current pin status
    if(binvec2dec(bitxor(Buffer,ppoval))~=0)
        putvalue(ppo,Buffer);
    end
end

BrakeV Routine

function BrakeV
% Declare globals needed for this operation
global ppo; global LEFT_OFF; global RIGHT_OFF; global Buffer;

% First, we need to ensure the object is running
if(isrunning(ppo))
    ppoval = getvalue(ppo);                            % get present port pin status
    Buffer = bitand(ppoval,dec2binvec(RIGHT_OFF,8));   % clear the Right bit
    Buffer = bitand(Buffer,dec2binvec(LEFT_OFF,8));    % clear the Left bit

    % Write the new bits only if they differ from the current pin status
    if(binvec2dec(bitxor(Buffer,ppoval))~=0)
        putvalue(ppo,Buffer);
    end
end

– 87 –

REFERENCES

[1]  RobotFlow. http://robotflow.sourceforge.net/
[2]  Lego Mindstorm with IR. http://www.palmtoppaper.com/PTPHTML/43/43c00010.htm
[3]  The Hough Transform: "Method and Means for Recognizing Complex Patterns". US Patent 3,069,654, 1962.
[4]  Wiktionary. http://www.wiktionary.org.
[5]  Stefan Florczyk, Robot Vision: Video-based Indoor Exploration with Autonomous and Mobile Robots. Wiley. ISBN: 3527405445.
[6]  Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing. Prentice Hall. ISBN: 0201180758.
[7]  Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing with MATLAB. Prentice Hall. ISBN: 0130085197.
[8]  Mark Nixon and Alberto Aguado, Feature Extraction and Image Processing. Newnes. ISBN: 0750650788.
[9]  Horn, B.K.P. and Schunck, B.G., "Determining Optical Flow." Artificial Intelligence, vol. 17, pp. 185-203, 1981.
[10] Lucas, B.D. and Kanade, T., "An Iterative Image Registration Technique with an Application to Stereo Vision." Proceedings of the Imaging Understanding Workshop, pp. 121-130, 1981.
[11] J.R. Parker, Algorithms for Image Processing and Computer Vision. Wiley. ISBN: 0471140562.
[12] Wikipedia – The Free Encyclopedia. http://en.wikipedia.org.
[13] David Young, Computer Vision Lecture Series. University of Sussex, UK. http://www.cogs.susx.ac.uk/users/davidy/compvis/.
[14] Computer Vision Course CS-223b. Stanford University. http://cs223b.stanford.edu/.
