Title: Mathematical Morphology Based Automated Control Point Detection from Human Facial Image

Authors: Md. Haider Ali, Ishrat Rahman Sami, Mahzabeen Islam and Mohammad Shahiduzzaman
Department of Computer Science and Engineering, University of Dhaka, Dhaka-1000, Bangladesh
[email protected]

ABSTRACT
Facial feature control points are the points surrounding facial features (e.g. eyes, lips) and other important points on the face that can be used to create image-metamorphosis-based facial animation. The goal of this research is to incorporate image-morphing-based facial animation into very narrow bandwidth video transmission/compression, especially for video conferencing, news telecasts, etc., where the background as well as the object in the image changes little. As a part of the whole work, this paper proposes an efficient mathematical-morphology-based facial feature control point detection technique. Mathematical morphology tools are used both for filtering and for pattern matching. First, skin-color-based segmentation with some morphological processing is applied to the input image to separate skin regions. Then parallel eye segments are searched for by eroding the edge-thinned image with eye-corner structuring elements. Combining the skin region result and the erosion result, the probable eye segment pair is identified. Then, using facial feature distance measurements and possible refinements, lip and other control points are detected. The accuracy of the proposed method is within an acceptable limit, and it is capable of working with images of average or near-average quality. Ease of implementation is another important property of the proposed method, which consists of simple matching and morphological operations.

Keywords: Facial Feature Detection, Mathematical Morphology

1. INTRODUCTION

Face and facial feature detection has applications in many fields of technology. The most prominent are in computer vision (e.g. face detection), machine interpretation of human expression (e.g. facial expression detection) and computer graphics (e.g. image-metamorphosis-based facial animation). A relatively new application of control point detection is in the field of low bandwidth video transmission and video compression. In this Internet age, the volume of online video transmission is increasing day by day. But even with the advent of high speed Internet, video transmission over the Internet remains challenging because of the high bandwidth it requires. Bandwidth is costly for Internet users worldwide, especially in third world countries, and real time good quality video transmission demands a certain bandwidth for uninterrupted transmission. For wireless gadgets, only a minimum level of bandwidth can be maintained for data transmission over the Internet. Most work in this area has emphasized different compression encodings of the video data. A different approach comes from the fact that among consecutive video frames a certain portion of the frame remains unchanged. So for a set of similar frames, only the difference data of the control points need to be transmitted, and thus video transmission can be achieved using very narrow bandwidth. The transmitted control points are applied to a previously sent frame to generate facial animation using image metamorphosis [9]. This scheme is most easily applied to face-based videos, where a single video frame can be referenced with respect to human facial features. Consequently, the accuracy and efficiency of facial feature (control) point detection is crucial for the success of this method: since the animation depends entirely on the detected control points, errors in the control points will cause abnormal animation.

Control points are extracted mostly using mathematical morphology operators along with some standard image processing tools such as image enhancement, edge detection and edge thinning. The proposed method works with color images converted to gray-scale images. It combines the information gained from two separate processes: skin color segmentation and morphological processing of the edge-thinned image. In the first step, parallel eye segments are located. In the next step, the eye locations and some statistical measurements of facial feature distances, such as the approximate equality between the distance separating the extreme eye corner points and the distance from the middle of the eyes to the bottom of the lip, are used. Then, after some possible refinements, lip and other control points are detected.

The rest of the paper is organized as follows. Section 2 briefly reviews the literature on face detection and feature extraction. Section 3 describes the extraction method for facial feature control points. Section 4 discusses the experimental results in detail. Finally, Section 5 derives conclusions and directions for further research.

2. RELATED WORKS

Facial feature extraction is the step that follows face detection. Its difficulty depends on the extraction criteria. The most precise specification would be exact curve fitting around all the facial features, but going to that extreme can be costly in execution time. For simple facial animation it is sufficient to find a good number of control points surrounding the main features of the face. In this section, some of the literature related to face detection and facial feature extraction is discussed.


The classification of face detection methods falls into three categories, described below; a complete discussion of these categories can be found in [13].

1. Knowledge-based methods. These rule-based methods encode human knowledge of what constitutes a typical face. Usually, the rules capture the relationships between facial features. These methods are designed mainly for face localization.
2. Feature invariant approaches. These algorithms aim to find structural features that exist even when the pose, viewpoint, or lighting conditions vary, and then use these to locate faces. These methods are designed mainly for face localization.
3. Template matching methods. Several standard patterns of a face are stored to describe the face as a whole or the facial features separately. The correlations between an input image and the stored patterns are computed for detection. These methods have been used for both face localization and detection.

According to Gargesha and Panchanathan [1], existing techniques for the detection of facial feature points can be broadly classified as (i) template matching based approaches [2], (ii) approaches based on luminance, chrominance, facial geometry and symmetry [3], and (iii) PCA-based approaches [4]. Mathematical-morphology-based feature extraction has not been widely investigated. Dynamic link matching based on mathematical morphology is used in [5] for frontal face detection. In [6], mathematical morphology is used as a preprocessing tool along with a neural network for face detection. Morphological operators are used in [7] for feature extraction from range images and curvature maps of human facial images. Other works aim to detect a specific facial feature: for example, a circle fitting method for real time eye detection is employed in [14], and a hybrid technique combining feature based and model based methods is used for mouth feature detection in [15].

3. METHODOLOGY

The major steps of the proposed method are:
▪ Preprocessing
▪ Skin detection
▪ Control point detection


3.1 PREPROCESSING

All parameters of the system are initialized, and a gray image is constructed from the input color image and stored in a buffer that is later used for thin-edge extraction.

Fig. 3.1: (a) Input color image (b) Gray image constructed from the input image
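As an illustration, a minimal sketch of the gray-image construction follows, assuming the standard luminance weights (the Y row of transformation (3.1) in Section 3.2); the function name and buffer handling are our own.

```python
import numpy as np

def to_gray(rgb):
    """Build the gray buffer from an RGB frame (H x W x 3, uint8).

    Uses the standard luminance weights 0.299, 0.587, 0.114 -- the same
    Y row that appears in transformation (3.1) of Section 3.2.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```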

For morphological processing, structuring element entries of 1 are denoted as white (1) and entries of 0 as black (0).

3.2 SKIN DETECTION

Given a color frame, most face processing systems require the face to be detected against the background. Candidate skin regions are isolated using the skin color predicates of [8]. The RGB value of each pixel is transformed to YUV and YIQ values using the standard transformations given below:

$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (3.1)$$

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (3.2)$$

In the first transformation (3.1), the chromaticity information is encoded in the U and V variables. If the hue and the saturation are represented by θ and C_H respectively, then

$$\theta = \tan^{-1}\!\left(\frac{|V|}{|U|}\right) \qquad (3.3)$$

$$C_H = \sqrt{U^2 + V^2} \qquad (3.4)$$

In the second transformation, the variable I gives the value of hue. Proper hue thresholds are obtained from the observation that the hues of human skin (given by θ and I) universally vary in the ranges 80° to 150° and 10° to 55°, respectively. A skin pixel therefore satisfies θ ∈ [80, 150] and I ∈ [10, 55]. Using these rules, skin pixels are separated from the frame pixels, and the result is stored in a buffer, skinbuffer, with 1s for the skin-colored pixels and 0s for the others.
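The sketch below illustrates the skin predicate under these thresholds. Note that θ = arctan(|V|/|U|), as written, cannot exceed 90°, so the sketch uses the signed two-argument arctangent to make the stated 80°–150° range reachable; this interpretation, and all function names, are ours.

```python
import numpy as np

def skin_mask(rgb):
    """Return the binary skinbuffer: 1 for skin-colored pixels, 0 otherwise.

    Applies the YUV/YIQ hue thresholds of Section 3.2 per pixel. The
    two-argument arctangent is an interpretation on our part, since
    arctan(|V|/|U|) alone cannot reach the stated 150 degree upper bound.
    """
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    # Chromaticity from transformation (3.1)
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    theta = np.degrees(np.arctan2(v, u))           # hue angle from U, V
    # Hue from transformation (3.2): the I component
    i_comp = 0.596 * r - 0.274 * g - 0.322 * b
    return ((theta >= 80) & (theta <= 150) &
            (i_comp >= 10) & (i_comp <= 55)).astype(np.uint8)
```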

3.3 CONTROL POINT DETECTION

After preprocessing and skin detection, the following operations are performed for control point detection:

1. Filtering of the skinbuffer by a morphological closing operation with a 2×2 black (0) structure.
2. Reducing noise of the filtered skinbuffer by a morphological opening operation with a 2×2 white (1) structure.

Fig. 3.2: Result of skin detection

3. Because of shadows and varying lighting conditions, some areas of the human skin region are left undetected. Dilating the resulting buffer with a 7×7 white (1) structure connects very closely located skin regions, such as the face and neck, so that a more connected skin region is obtained.
4. The skinbuffer is made still more connected by examining the neighbors of non-skin pixels, so that areas inside the face that do not have skin color, such as the eyebrows, eyes and lips, are included in the face region after this step. The resulting buffer is referred to as the fullface buffer.

Fig. 3.3: Result of processing the skinbuffer
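A minimal sketch of steps 1–4 using SciPy's binary morphology follows; the structure sizes are taken from the text, while binary_fill_holes stands in for the neighbour-based filling of step 4 and is an assumption on our part.

```python
import numpy as np
from scipy import ndimage as ndi

def clean_skinbuffer(skin):
    """Filter the skinbuffer as in steps 1-4 (a sketch, not the exact code)."""
    sq2 = np.ones((2, 2), bool)
    sq7 = np.ones((7, 7), bool)
    m = ndi.binary_closing(skin.astype(bool), structure=sq2)  # step 1: fill small gaps
    m = ndi.binary_opening(m, structure=sq2)                  # step 2: drop small noise
    m = ndi.binary_dilation(m, structure=sq7)                 # step 3: bridge nearby regions
    # Step 4 (fullface): include non-skin holes such as eyes and lips.
    # binary_fill_holes is our stand-in for the neighbour-based filling in the text.
    fullface = ndi.binary_fill_holes(m)
    return fullface.astype(np.uint8)
```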

3.3.1 Edge Thinning

For edge thinning, the gray level edge thinning method suggested by Park, Chen and Huang [10] is used. This method first runs the gradient based Sobel edge operator to find the edge outline. The edge is then thinned by comparing gradient magnitudes within the 3×3 neighborhood of every pixel. If the gradient magnitude at pixel p is greater than or equal to the third largest gradient magnitude within the 3×3 neighborhood centered at p, then p is considered a real edge point. If it is greater than or equal to the fifth largest magnitude, the decision is made based on the edge directions θ_g and edge magnitudes G_m within the 3×3 neighborhood. If it is less than the fifth largest magnitude, p is removed from the edge points. The result of thinning is stored in the thin_edge buffer.


Fig. 3.4: Result of extraction of thin edge
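The sketch below illustrates only the first thinning rule (keep a pixel whose gradient magnitude is at least the third largest in its 3×3 neighbourhood); the fifth-largest/direction refinement is omitted, and the magnitude threshold is a placeholder, not a value from [10].

```python
import numpy as np
from scipy import ndimage as ndi

def thin_edges(gray, threshold=64):
    """Sketch of the gradient-based thinning of Park, Chen and Huang [10].

    Keeps a pixel if its Sobel gradient magnitude is at least the third
    largest within its 3x3 neighbourhood. The threshold is a placeholder.
    """
    g = gray.astype(float)
    gx = ndi.sobel(g, axis=1)                     # horizontal gradient
    gy = ndi.sobel(g, axis=0)                     # vertical gradient
    mag = np.hypot(gx, gy)
    third_largest = ndi.rank_filter(mag, rank=-3, size=3)  # 3rd largest in 3x3
    return ((mag >= third_largest) & (mag > threshold)).astype(np.uint8)
```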

3.3.2 Single Pixel Noise Reduction

After thinning, single-pixel black and white noise is removed by morphological operations.
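A sketch of one such operation follows, clearing isolated white pixels and filling isolated black ones by counting 8-neighbours; the exact operators used in the implementation are not specified in the text.

```python
import numpy as np
from scipy import ndimage as ndi

def remove_single_pixel_noise(binary):
    """Clear isolated white pixels and fill isolated black pixels (a sketch)."""
    b = binary.astype(bool)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = ndi.convolve(b.astype(int), kernel, mode='constant')
    b[b & (neighbours == 0)] = False   # white pixel with no white neighbour
    b[~b & (neighbours == 8)] = True   # black pixel surrounded by white
    return b.astype(np.uint8)
```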

3.3.3 Locating Candidate Face Areas

To locate candidate face areas, the following steps are carried out.

3.3.3.1. Finding concentrated ramp area: It has been observed that, in binary frames, human faces contain many closely located (concentrated) ramps of small and sharp edges in the eyebrow, eye, nose and lip areas. Two structures are used for locating these areas:

$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \qquad \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$$

At least four ramps (two of each structure) are searched for within a radius of twenty pixels on the thin_edge buffer (mainly aimed at hitting the eyes, lips and other sharp corners of the face), as sketched below. All such hits are stored in appropriate hit records, and all hit records are sorted first horizontally and then vertically.
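The following sketch shows how erosion with the two corner structures detects the ramps and how hits can be counted within the twenty-pixel radius; the disc-convolution counting is our own formulation of "at least four ramps (two of each)".

```python
import numpy as np
from scipy import ndimage as ndi

# The two 2x2 ramp (corner) structuring elements from Section 3.3.3.1.
RAMP_A = np.array([[1, 1],
                   [1, 0]], bool)
RAMP_B = np.array([[0, 1],
                   [1, 1]], bool)

def ramp_hits(thin_edge, radius=20):
    """Mark pixels around which at least two ramps of each kind occur.

    Erosion with a structuring element keeps only positions where all of
    its 1-entries are matched, which is how the corner patterns are found.
    """
    edges = thin_edge.astype(bool)
    hits_a = ndi.binary_erosion(edges, structure=RAMP_A)
    hits_b = ndi.binary_erosion(edges, structure=RAMP_B)
    # Count hits of each kind inside a disc of the given radius.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disc = (x * x + y * y <= radius * radius).astype(int)
    count_a = ndi.convolve(hits_a.astype(int), disc, mode='constant')
    count_b = ndi.convolve(hits_b.astype(int), disc, mode='constant')
    return (count_a >= 2) & (count_b >= 2)
```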


Fig. 3.6: Finding concentrated ramp area

3.3.3.2. Finding the face candidate positions: The eye-nose-lip area should receive a significant number of hits in the previous step. It is assumed that the center of all the hit boxes contained in a particular connected skin region is a point in a probable candidate face area. All hit boxes that lie in a particular probable face area are grouped together to refer to that candidate face area. After this step, a buffer named regHit contains the locations of the candidate face areas.

Fig. 3.7: Centers of the candidate face areas
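A sketch of this grouping follows, assuming connected components of the fullface buffer delimit the skin regions and the mean hit position stands in for the candidate face center (the regHit entry); both assumptions are ours.

```python
import numpy as np
from scipy import ndimage as ndi

def candidate_face_centers(hit_mask, fullface):
    """Group ramp hits by connected skin region; return one center per region.

    A sketch of Section 3.3.3.2: every connected component of the fullface
    buffer collects the hits falling inside it, and the mean hit position
    serves as the candidate face center.
    """
    hits = np.asarray(hit_mask, dtype=bool)
    labels, n = ndi.label(fullface)
    centers = []
    for region in range(1, n + 1):
        ys, xs = np.nonzero(hits & (labels == region))
        if len(xs) > 0:
            centers.append((int(ys.mean()), int(xs.mean())))
    return centers
```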

3.3.3.3. Finding the candidate face area box: For each candidate face area the following calculations are done:
- Finding the upper top point.
- Finding the maximum width (ear-face-ear).
- Constructing the face area box from this information and the center found in the previous step.
- Storing the candidate face area box information in the variable facehit.


Fig. 3.8: Finding candidate face area box

3.3.4 Facial Feature Extraction

In this step, facial features are extracted from each facehit record. A validation process is then performed on the candidate face areas using the detection status and positions of the identified features, so that after this step the correct face area and control points emerge. For facial feature extraction, the following steps are carried out in each candidate face area.

1. Finding holes inside the face area: Holes inside the face area having a color other than skin color (parts of the eyes, eyebrows and lips) are searched for. Dilation is then performed on the resulting buffer, redPos, to emphasize these holes.
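A sketch follows, computing redPos as the fullface pixels that are not skin-colored and dilating the result; the 3×3 dilation structure is a placeholder, not a value from the paper.

```python
import numpy as np
from scipy import ndimage as ndi

def face_holes(fullface, skinbuffer):
    """Find non-skin holes (eyes, eyebrows, lip) inside the face region.

    A sketch of step 1: redPos holds pixels that lie in the fullface
    buffer but are not skin-colored; dilation emphasises them.
    """
    red_pos = fullface.astype(bool) & ~skinbuffer.astype(bool)
    return ndi.binary_dilation(red_pos, structure=np.ones((3, 3), bool))
```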

Fig. 3.9: Result of finding holes inside the face area

2. Control point selection: For each facehit record (candidate face area), the following steps are performed.

(a) Finding eyebrow and eye candidate regions: The search for eyebrow and eye regions is performed only in the upper half of the face area, using the fullface buffer and the redPos buffer. The obtained results are validated using the following conditions:

- Candidate eyebrow and eye matches over relatively small areas (less than 36 pixels) are considered invalid.
- Candidate eyebrow and eye matches near the upper boundary of the face are considered invalid.

As only frontal face images are considered, it is assumed that paired features (eyes and eyebrows) always have a partner candidate: a candidate in the upper left of the face should have a partner in the upper right. Partners are found and attached to each other, and the result is stored in an eyeinfo record; a sketch of the pairing is given below. If no eye pair information is obtained from a candidate face area, it is discarded as an invalid face area and no further processing is done on it. If at least one pair exists, a further search for the lip is performed in the lower region of that face area.
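A sketch of the partner search under these assumptions follows; the vertical tolerance and the midline split are our own choices, not values from the paper.

```python
def pair_eye_candidates(candidates, face_mid_x, max_dy=10):
    """Pair upper-left candidates with upper-right partners (a sketch).

    Each candidate is an (x, y) center inside the upper half of a face
    area. A candidate left of the face midline is matched with the first
    right-side candidate at a similar height; max_dy is a placeholder
    tolerance.
    """
    lefts = [c for c in candidates if c[0] < face_mid_x]
    rights = [c for c in candidates if c[0] >= face_mid_x]
    pairs = []
    for lx, ly in lefts:
        for rx, ry in rights:
            if abs(ry - ly) <= max_dy:
                pairs.append(((lx, ly), (rx, ry)))  # eyeinfo-style record
                break
    return pairs
```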

Fig. 3.10: Finding eyebrow and eye candidate regions

(b) Finding the lip candidate region: Let e1.x denote the leftmost X coordinate of the left eye, and let e2.x denote its partner's (the right eye's) rightmost X coordinate. Then dis = e2.x - e1.x approximates the distance between these extreme eye corners. To search for the candidate lip region, the following steps are carried out:

- Extract the thin edges inside the face region using the thin_edge and skinbuffer buffers, and store the result in the thinFace buffer.
- By observation, the distance between the two eyes' extreme corners and the distance from the middle of the eyes to the lower lip are assumed to be the same, with the nose placed at the midpoint of this vertical distance. The lip is searched for in the remaining region (below the nose region and within the end limit of dis), as sketched after this list:
  ♦ Finding the leftmost probable corner of the lip from the thinFace buffer.
  ♦ Finding the rightmost probable corner of the lip from the thinFace buffer.
  ♦ Finding the lower limit of the lower lip on the vertical distance line.
  ♦ Finding an approximate upper limit of the upper lip on the vertical distance line.
  ♦ The 4 points on the lip are identified and stored in the lip record as region information.
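The geometry above reduces to a few lines of arithmetic; the sketch below derives the lip search band from the eye corners under the stated equal-distance observation (the function name and return layout are ours).

```python
def lip_search_band(e1_x, e2_x, eye_y):
    """Derive the lip search region from eye geometry (Section 3.3.4, step b).

    dis is the distance between the extreme eye corners; by the stated
    observation, the distance from the eye line down to the bottom of the
    lip is roughly the same, with the nose at its midpoint.
    """
    dis = e2_x - e1_x
    lip_bottom_y = eye_y + dis          # lower limit of the lower lip
    nose_y = eye_y + dis // 2           # nose at the midpoint
    # The lip is searched between the nose line and the lower limit,
    # horizontally within the eye-corner span.
    return (e1_x, nose_y, e2_x, lip_bottom_y)
```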

Fig. 3.11: Finding lip candidate region

3.3.5 Extracting and Storing Control Point Information

Control points are extracted and stored from the detected face box and feature boxes as follows:

Fig. 3.12: Extraction of control points

1. Finding 8 boundary points of each face.
2. Finding 16 boundary points of the eyebrow pair and eye pair (four boundary points each), or only 8 boundary points of the eye pair when the eyebrows cannot be detected.
3. Finding 4 boundary points of the lip.
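For concreteness, one possible record layout for these points is sketched below; the structure is our own, not the paper's, and simply groups the 8 + 16 + 4 coordinates extracted per face.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[int, int]  # (x, y) pixel coordinates

@dataclass
class ControlPoints:
    """Per-face control point record (our own layout, not the paper's).

    8 face-boundary points, 4 points around each eyebrow and each eye
    (eyebrows may be absent), and 4 lip points -- up to 28 in total.
    """
    face: List[Point] = field(default_factory=list)      # 8 boundary points
    eyebrows: List[Point] = field(default_factory=list)  # 0 or 8 points
    eyes: List[Point] = field(default_factory=list)      # 8 points
    lip: List[Point] = field(default_factory=list)       # 4 points
```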


Fig. 3.13: Final output

4. EXPERIMENTAL RESULTS

The implemented facial control point detection algorithm was tested with a database named the CSEDU6th database, containing 65 images. The facial control point specification is: eight control points on the face boundary, four surrounding control points for each eyebrow, four surrounding control points for each eye, and four surrounding control points for the lip.

4.1 Description of the CSEDU6th Image Database

The CSEDU6th image database (the name derives from the 6th batch of the Dept. of Computer Science and Engineering, Dhaka University) consists of 65 images of 6th batch students (Dept. of CSE, DU), little Adrian and Tithi. Most of the images contain a single face, while five of them contain multiple faces. The images were captured with a simple digital camera in normal outdoor daylight, so the lighting and image quality are quite natural.

4.2 Experiment Setup

Camera used for image capturing: Canon PowerShot A75 (3.2 megapixel)
PC configuration for processing: Intel Pentium 4, 2.4 GHz, 512 MB RAM


4.3 Sample Output of Facial Feature Detection

The terms used in labeling the output are defined as follows:
- Accurate detection: all control points are detected correctly in place.
- Partial detection: some control points are in place and the rest are either misplaced or not detected.
- False detection: no control points are in place.

Note also that the method first detects the face, then the eyes and eyebrows, and finally the lips, so the detection accuracy of an earlier element affects detection of the later elements in the sequence. Partial and false detections are mainly caused by poor lighting conditions, shading and noise in the image.

4.3.1 Samples of Accurate Detection

Fig. 4.1: (a)-(f) Samples of accurate detection on single face images; (g) sample of accurate detection on a synthesized group image; (h) sample of accurate detection on a group image

4.3.2 Samples of Partial Detection

Fig. 4.2: (a)-(b) Samples of partial detection

4.3.3 Samples of False Detection

Fig. 4.3: (a)-(b) Samples of false detection


4.4 Performance Analysis

Of the 75 faces contained in the 65 samples of the CSEDU6th data set, 70 faces were correctly detected. Detection of the facial features (eyebrows, eyes and lip) was then performed on these 70 correctly detected faces. For performance analysis, the obtained results (given in Table 4.1) are classified into four classes:

- Perfect Detection
- Partial Detection
- False Detection
- No Detection

Each of the four feature categories (face, eyebrow, eye and lip) is divided into these four classes for analyzing detection performance.

For the face category, Perfect Detection means that when there is a face in the image, only that face area is 100% correctly identified and no other area is detected. Partial Detection means that the face area is correctly identified but an area other than the real face area is also falsely detected. False Detection means a false area is detected as a face area. No Detection means no area is detected as a face area even though at least one face exists in the image.

For the eye feature, Perfect Detection means only the eye pair is 100% correctly identified. Partial Detection means a single eye of the eye pair is detected. False Detection means a false eye pair is detected. No Detection means no candidate eye pair is detected. The eyebrow pair is treated similarly to the eye pair.

For the lip feature, Perfect Detection means the lip in a face is 100% correctly identified. Partial Detection means the lip is partially identified. False Detection means a false detection of the lip. If no candidate lip is detected, it is denoted as No Detection.


Table 4.1: Summary of different detection results

| Category | Perfect Detection | Partial Detection | False Detection | No Detection | Total |
|---|---|---|---|---|---|
| Face: no. of faces | 61 | 9 | 3 | 2 | 75 |
| Percentage (%) | 81.40 | 12.00 | 4.00 | 2.60 | 100.00 |
| Eye brow: no. of pairs | 32 | 14 | 22 | 2 | 70 |
| Percentage (%) | 45.70 | 20.00 | 31.50 | 2.80 | 100.00 |
| Eye: no. of pairs | 44 | 12 | 12 | 2 | 70 |
| Percentage (%) | 62.90 | 17.20 | 17.20 | 2.70 | 100.00 |
| Lip: no. of lips | 49 | 5 | 4 | 12 | 70 |
| Percentage (%) | 70.00 | 7.20 | 5.70 | 17.10 | 100.00 |
| All: total | 28 | 42 | 3 | 2 | 75 |
| Percentage (%) | 37.33 | 56.00 | 4.00 | 2.67 | 100.00 |

Fig. 4.4: Representation of the different feature detection classes (Perfect, Partial, False, No Detection) for each feature category (Face, Eye brow, Eye, Lip) by bar diagram


To sum up the results of the table, note that for face detection, partial detection means the face is detected along with some other areas wrongly detected as face. So the perfect detection and partial detection results can be roughly summed to evaluate the proposed method. Summing these two measurements, the face detection ratio is 93.4%, the eye detection ratio is 80.1% and the lip detection ratio is 77.2%. Furthermore, processing of an image frame requires less than 2 seconds in the current simulation setup.

5. CONCLUSION

Face detection and facial feature detection is one of the most challenging fields in computer vision and biometrics. This paper describes an efficient algorithm for facial feature control point detection whose results are to some extent satisfactory. The contributions of this method can be summarized as follows:

▪ The algorithm used for facial control point detection is efficient and reasonably accurate. Most importantly, the image database used for this project contains images captured by a cheap digital camera in environmental lighting conditions. Since the algorithm runs well on these low quality images, better image quality and lighting should only improve its output.

▪ Many face detection algorithms work only with images containing nothing but a human face. As this project is intended for use in narrow bandwidth video transmission and facial animation, the images are half-portrait sized, the pattern of frame typically obtained when a person sits in front of a video camera (like a news reader). The devised algorithm successfully filters out the non-face body parts and extracts features from the face.

▪ In most cases, the proposed algorithm can detect multiple faces in a single image, which indicates the robustness of the algorithm. The feature control points of multiple faces can also be identified.

▪ The feature control point extraction covers the surroundings of the head, eyes, eyebrows and lips. So it includes all the prime features necessary for facial animation and produces sufficient points to approximate the shape of each feature.

▪ The face detection and control point detection algorithm uses mathematical morphology tools extensively, which is a new way to approach the problem in this field. The use of morphological operators simplifies the algorithm; they contribute in two ways, first as filters for noise reduction, and secondly for pattern matching.

The facial feature control points are identified by pixel coordinates. By a simple extension of the algorithm, it is possible to obtain exact curve fitting of the facial features, which may have application to facial expression detection. Possible future improvements and research directions include the use of specialized algorithms to speed up the computationally costly morphological operations, which would make the method suitable for real time applications such as video data, and making the algorithm pose invariant.

REFERENCES

[1] M. Gargesha and S. Panchanathan, "A Hybrid Technique for Facial Feature Point Detection", Fifth IEEE Southwest Symposium on Image Analysis and Interpretation, 7-9 April 2002, Santa Fe, New Mexico.
[2] R. S. Feris, T. E. de Campos, and R. M. Cesar Junior, "Detection and Tracking of Facial Features in Video Sequences", Lecture Notes in Artificial Intelligence, vol. 1793, April 2000, pp. 127-135.
[3] R. L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, "Face Detection in Color Images", Proceedings of the IEEE International Conference on Image Processing, vol. 1, 2001, pp. 1046-1049.
[4] Z. Xue, S. Z. Li, and E. K. Teoh, "Facial Feature Extraction and Image Warping Using PCA Based Statistic Model", International Conference on Image Processing, vol. 2, Oct 2001, pp. 689-692.
[5] C. Kotropoulos, A. Tefas, and I. Pitas, "Frontal Face Authentication using Variants of Dynamic Link Matching Based on Mathematical Morphology", Proceedings of the 1998 IEEE International Conference on Image Processing (ICIP-98), Chicago, Illinois, October 4-7, pp. 122-126.
[6] C.-C. Han, H.-Y. M. Liao, G.-J. Yu and L.-H. Chen, "Fast Face Detection via Morphology-Based Pre-Processing", Pattern Recognition, vol. 33, no. 10, 2000, pp. 1701-1712.
[7] G. G. Gordon and L. Vincent, "Application of Morphology to Feature Extraction for Face Recognition", Proceedings of SPIE, vol. 1658, 1992, pp. 151-164.
[8] D. Mazumdar, S. Dutta and S. Mitra, "Automatic Feature Detection of a Face and Recovery of Its Pose", ACCV2002: The 5th Asian Conference on Computer Vision, 23-25 January 2002, Melbourne, Australia.


[9] S. Kabir and M. H. Ali, "A Heuristic Approach of Establishing the Relationship between Full Width Half Maximum (FWHM) and Human Facial Shape Distortion in Image Metamorphosis", Proc. of the 8th Int'l Conference on Computers and Information Technology, 2005.
[10] J. Park, H. Chen and S. T. Huang, "A New Gray Level Edge Thinning Method", available at: http://cs.ua.edu/research/TechnicalReports/TR-2000-05.pdf
[11] R. C. Gonzalez and R. E. Woods, Digital Image Processing (2nd Edition), Prentice Hall, January 2002.
[12] J. D. Foley, A. van Dam, S. K. Feiner and J. F. Hughes, Computer Graphics: Principles and Practice in C (2nd Edition), Addison-Wesley Professional.
[13] M.-H. Yang, D. Kriegman and N. Ahuja, "Detecting Faces in Images: A Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34-58, January 2002.
[14] D.-T. Lin and C.-M. Yang, "Real-Time Eye Detection Using Face-Circle Fitting and Dark-Pixel Filtering", ICME 2004, pp. 1167-1170.
[15] M. Pantic, M. Tomc, and L. J. M. Rothkrantz, "A Hybrid Approach to Mouth Features Detection", Proc. IEEE Int. Conf. Systems, Man, Cybernetics, pp. 1188-1193, 2001.

