
Appearance-Based Automated Face Recognition System: Multi-Input Databases

M.A. Mohamed, M.E. Abou-Elsoud, and M.M. Eid

Abstract—There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared eight state-of-the-art face recognition algorithms on three different databases: (i) Faces94; (ii) the Olivetti Research Lab (ORL) database; and (iii) the Indian Face Database (IFD). The face detection phase was performed using morphological features. The recognition results showed that, among the linear appearance-based classifiers, LDA performs better than ICA and PCA in terms of recognition accuracy. The computational overheads of LDA and PCA are almost similar, while ICA has a very long execution time. In addition, a neural network based on DWT features performs better than classifiers based on other features.

Index Terms—biometrics, Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and Principal Component Analysis (PCA).

1 INTRODUCTION

In today's networked world, the need to maintain the security of information or physical property is becoming both increasingly important and increasingly difficult. From time to time, we hear about crimes of credit card fraud, computer break-ins by hackers, or security breaches in a company or government building. Personal identification numbers (PINs) and passwords are not suitable authentication methods in some cases, because they are based on things that can easily be breached [1]. Biometrics comprises automated methods of recognizing a person based on a physiological or behavioral characteristic. Biometric technologies are becoming the foundation of an extensive array of highly secure identification and personal verification solutions. As the level of security breaches and transaction fraud increases, the need for highly secure identification and personal verification technologies becomes apparent. Biometrics has received significant attention as it has many advantages over traditional methods in security, credibility, universality, permanence, and convenience [1]. Face recognition has recently received growing attention and interest from the scientific community as well as from the public. The interest from the public is mostly due to recent events of terror around the world, which have increased the demand for useful security systems. Face recognition is one of the few biometric methods that possess the merits of both high accuracy and low intrusiveness.

Face recognition is an image analysis problem that involves the identification of human faces from a digital still image or a video sequence [2]. The task is complicated by factors such as variations in facial expression, changes in illumination, and the orientation of the subject's face. The background and the inherent noise present during image acquisition also affect identification accuracy. Face recognition includes enhancement and segmentation of the face image, detection of the face boundary and facial features, matching of the extracted features against the features in a database, and finally recognition of the face [2]. The overall process consists of face segmentation, feature extraction, and finally recognition or identification, as in Fig.1. This paper is organized as follows: (II) face recognition overview, (III) data collection, (IV) preprocessing of the face image, (V) face detection, (VI) face feature extraction, (VII) face recognition, and (VIII) conclusion.


Fig.1. The general steps in facial recognition.

• M. A. Mohamed is with the Faculty of Engineering, Mansoura University, Mansoura, Egypt.
• M. A. A. El-Soud is with the Faculty of Engineering, Mansoura University, Mansoura, Egypt.
• M. M. Eid is with the Faculty of Engineering, Mansoura University, Mansoura, Egypt.


2 FACE RECOGNITION OVERVIEW

The face is a unique feature of human beings; however, in general, all faces are similar in features and structure. During the past several years, face recognition has developed into a major research area in pattern recognition and computer vision. As one of the most challenging applications in these fields, face recognition has received significant attention. Unlike other biometric systems, facial recognition can be used for general surveillance, usually in combination with public video cameras. Face recognition is a task so common to humans that an individual does not even notice the extensive number of times it is performed every day [2].

2.1 Face Recognition System Challenges

The obstacles that a face recognition system must overcome are differences in appearance due to variations in illumination, viewing angle, facial expression, occlusion, and changes over time. Face image acquisition can be accomplished by digitally scanning an existing photograph or by using an electro-optical camera to acquire a live picture of a subject. The main requirements for the camera to provide such high-quality images are sufficient resolution, speed (images per second), color, format (acquisition of raw images must be available), connection, and the ability to trigger the camera. In addition, these images must be taken at high speed. Due to these constraints, a huge amount of data is collected in a short time [2], [5].

2.2 Advantages of Face Recognition Systems

Although the concept of recognizing someone from facial features is intuitive, facial recognition, as a biometric, makes human recognition a more automated, computerized process. Facial recognition offers several advantages:
1. The system captures faces of people in public areas, which makes it non-intrusive.
2. It is accurate and allows for high enrolment and verification rates.
3. Faces can be captured from some distance away.
4. It requires no physical interaction on behalf of the user.
5. It is the only biometric that allows passive identification in a one-to-many environment [3].

2.3 Human Difficulties with Face Recognition Surveillance

People are generally very good at recognizing faces that they know. However, people have trouble performing facial recognition in surveillance or watch-list scenarios. Several factors account for these difficulties: most notably, humans have a hard time recognizing unfamiliar faces. Combined with relatively short attention spans, this makes it difficult for humans to pick out unfamiliar faces. Special challenges include pose variation, illumination conditions, scale variability, images taken years apart, glasses, moustaches, beards, low-quality image acquisition, and partially occluded faces. Fig.2 shows different images that present some of the problems encountered in face recognition [5], [6].


Fig.2. Difficult scenarios for face recognition: (a, b) low-quality images with multiple faces, and (c) a low-quality face image with beard and glasses.

2.4 How to Reduce Difficulties

By controlling a person's facial expression, as well as his distance from the camera, the camera angle, and the scene lighting, a posed image minimizes the number of variables in a photograph. This control allows the facial recognition software to operate under near-ideal conditions and greatly enhances its accuracy. Similarly, using a human operator to verify the system's results enhances performance, because the operator can detect machine-generated false alarms. In the search for solutions to difficult face recognition scenarios, some help is found in two broad areas: video-based face recognition and multimodal approaches. Video-based face recognition provides several advantages over still-image-based face recognition: (1) good frames can be selected on which to perform the recognition stage, (2) video provides temporal continuity, which allows reuse of recognition information obtained from high-quality images when processing low-quality frames, and (3) motion, gait, and other features can help a video-based face recognition system [5], [6].

3 DATA COLLECTION

Three different face databases have been employed for the performance comparison: the Faces94 database, the Olivetti Research Lab (ORL) database, and the Indian Face Database (IFD). These databases were chosen because Faces94 introduces variation in facial expression and illumination, the ORL database contains images with very small changes in orientation for each subject, and the IFD contains images at different orientation angles for each subject. Together, these three databases provide a comprehensive dataset for testing the performance of the chosen algorithms. The experiment is conducted with 10 subjects from each database; each subject has 10 face images.

3.1 Faces94 Face Image Database

Faces94 consists of 153 subjects with 20 face images available for each subject. The subjects sit at a fixed distance from the camera and are asked to speak whilst a sequence of images is taken; the speech is used to introduce facial expression variation. The database contains images of male and female subjects in separate directories. The face images vary in facial expression and illumination; see Fig.3. All face images are RGB, 180 × 200 pixels, in JPEG format. The background is plain green, there is no head scale, and there is no variation in image lighting [7].


3.2 The ORL Face Image Database

In the Olivetti Research Lab (ORL) database [8], there are 10 different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expression (open/closed eyes, smiling/not smiling), and facial details (glasses/no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). The files are in PGM format, and the size of each image is 92 × 112 pixels, with 256 gray levels per pixel; see Fig.4.

3.3 The IFD Face Image Database

The Indian Face Database (IFD) contains images of 40 distinct subjects with 11 different poses for each individual [9]. When available, a few additional images are also included for some individuals. All the images have a bright homogeneous background, and the subjects are in an upright, frontal position. For each individual, the following poses are included: looking front, looking left, looking right, looking up, looking up towards the left, looking up towards the right, and looking down. In addition to the variation in pose, images with four emotions - neutral, smile, laughter, and sad/disgust - are also included for every individual. The files are in JPEG format. The size of each image is 640 × 480 pixels, with 256 gray levels per pixel; see Fig.5.

4 PREPROCESSING OF FACE IMAGE

Preprocessing is the first step of the face recognition system, and it needs to be reliable because it has a major influence on all subsequent steps. The part of the image occupied by the face has to meet certain quality requirements; e.g., it should not be too noisy or blurred. The quality of the face image is checked to see whether it is sufficient for the steps that follow; if the quality is considered too low, the image is rejected where the application allows it. Image preprocessing is therefore a significant part of face recognition systems. The input images are converted to grayscale if they are RGB. In addition, changes in lighting conditions produce a dramatic decrease in recognition performance: although a set of images may all show the same person, the effect of uneven lighting can make them look quite different. Therefore, when the images are taken under varying illumination, illumination compensation should be performed before face recognition. If an image has low contrast and is dark (see Fig.6), we wish to improve its contrast and brightness by using standard techniques such as histogram equalization [10].

Fig.3. Samples of Faces94 face image database.

Fig.4. Samples of ORL face image database.

Fig.5. Samples of IFD face image database.

4.1 Histogram Equalization

Histogram equalization (HE) can be used as a simple but very robust way to obtain light correction when applied to small regions such as faces. The aim of HE is to maximize the contrast of an input image, resulting in a histogram of the output image that is as close to a uniform histogram as possible. However, this does not remove the effect of a strong light source; it maximizes the entropy of an image, thus reducing the effect of differences in illumination within the same "setup" of light sources. Fig.6 shows a frontal, poorly illuminated face image of a dark-skinned subject and the corresponding histogram; the pixels are not balanced, as there are more dark pixels than others, and only the facial region of the image is used in the histogram equalization. To equalize the image histogram, the cumulative distribution function (cdf) is computed [11]:

h(v) = \mathrm{round}\!\left( \frac{\mathrm{cdf}(v) - \mathrm{cdf}_{\min}}{(M \times N) - \mathrm{cdf}_{\min}} \times (L - 1) \right)    (1)

Plain histogram equalization cannot correctly improve all parts of the image in all cases: when the original image is irregularly illuminated, some details of the resulting image remain too bright or too dark; see Fig.7 [9].
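For concreteness, the mapping in (1) amounts to a small lookup table. The following Python sketch (not from the paper; it assumes an 8-bit grayscale input with L = 256 levels) builds the table from the cdf and remaps the image:

import numpy as np

def equalize_histogram(gray):
    # gray: 2-D uint8 array; L = 256 gray levels
    M, N = gray.shape
    L = 256
    hist = np.bincount(gray.ravel(), minlength=L)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0].min()  # smallest non-zero cdf value
    # eq. (1): h(v) = round((cdf(v) - cdf_min) / (M*N - cdf_min) * (L - 1))
    mapping = np.round((cdf - cdf_min) / (M * N - cdf_min) * (L - 1))
    mapping = np.clip(mapping, 0, L - 1).astype(np.uint8)
    return mapping[gray]  # remap every pixel through the lookup table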

4.2 Adaptive Histogram Equalization

Adaptive Histogram Equalization (AHE) [11] computes the histogram of a local image region centered at a given pixel to determine the mapped value for that pixel, leading to local contrast enhancement. However, the enhancement often leads to noise amplification in "flat" regions and "ring" artifacts at strong edges. In addition, this technique is computationally intensive, as shown in Fig.8.

4.3 Histogram Truncation

This operation allows gray levels to be distributed across the primary part of the histogram [11]. It solves the problem that occurs when an image contains a few very bright values, which have the overall effect of darkening the rest of the image after rescaling, as shown in Fig.9.

4.4 Gamma Correction

Gamma correction performs a nonlinear brightness adjustment that focuses on the basic information in the face and normalizes all the other parts [12]. Brightness for darker pixels is increased, while it remains almost the same for bright pixels. As a result, more details are visible, as shown in Fig.10.
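As an illustration of the gamma correction step, a minimal sketch follows, assuming the standard power-law form out = in^γ on a normalized [0, 1] scale (the paper does not spell out its exact formulation); γ = 0.4 matches Fig.10:

import numpy as np

def gamma_correct(gray, gamma=0.4):
    # gamma < 1 brightens dark pixels while leaving bright pixels nearly unchanged
    normalized = gray.astype(np.float64) / 255.0
    return np.uint8(np.round(255.0 * normalized ** gamma))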




Fig.6. (a) Sample Faces94 poorly illuminated image of a dark-skinned subject, (b) grayscale image, and (c) its histogram equalization.

5 FACE DETECTION BASED ON MORPHOLOGICAL FEATURES

Face detection is the first step in a facial feature detection process; it reduces the search space in the image. Face detection from a single image or an image sequence is a difficult task due to variability in pose, size, orientation, color, expression, occlusion, and lighting conditions. To build a fully automated system that extracts information from images of human faces, it is essential to develop efficient algorithms to detect human faces. Without considering feature locations, face detection is declared successful if the presence and rough location of a face have been correctly identified. However, without accurate face and feature locations, noticeable degradation in recognition performance is observed [2]. The system commences by estimating the location of the face area using skin color segmentation, followed by a sequence of morphological operations.


Fig.7. Result image and its histogram after linear equalization.

5.1 Skin Color Segmentation

Face detection based on skin color is invariant to facial expression, rotation, scaling, and translation. Human skin color, with the exception of very dark complexions, is found in a relatively narrow color space. Taking advantage of this knowledge, skin regions are segmented by thresholding the image in the skin color space; Fig.11c. The face, containing a pair of eyes, a nose, and a mouth, is then searched for around the largest connected area of skin color in grayscale mode.
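A minimal sketch of such a segmentation is shown below, assuming OpenCV and a YCrCb skin model; the paper does not state its color space or thresholds, so the Cr/Cb ranges here are common illustrative values, not the authors':

import cv2
import numpy as np

def skin_mask(bgr):
    # Threshold in YCrCb space, where skin tones cluster in a narrow Cr/Cb range
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # assumed lower bounds (Y, Cr, Cb)
    upper = np.array([255, 173, 127], dtype=np.uint8)  # assumed upper bounds
    return cv2.inRange(ycrcb, lower, upper)  # 255 where skin-like, 0 elsewhere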


Fig.8. Result image and its histogram after adaptive equalization.


Fig.9. Result image and its histogram after truncation.


5.2 Background Subtraction

The color segmentation generates a binary mask of the same size as the original image. However, some regions similar to skin also appear white: pseudo-skin-colored pixels belonging to clothes, floors, buildings, etc. The goal of the connected component analysis is to examine the connectivity of the skin regions, reduce unnecessary background noise in the face image, and identify the faces, which are described by rectangular boxes. Furthermore, with the addition of background subtraction, it is possible to rule out any skin-tone-colored items that exist in the background. The background subtraction is done using a combination of basic operators that add, subtract, and divide the pixel values, as shown in Fig.11d.


Fig.10. Result image and its histogram after gamma correction (γ = 0.4).


5.3 Morphological Operations

After thresholding, the segmented image may contain some holes in the face skin region, and it is necessary to remove the unwanted specks in order to speed up subsequent processing. The fundamental morphological operators are dilation and erosion: one of the simplest applications of dilation is bridging gaps, whereas erosion deletes irrelevant details. Other operations are opening and closing. Opening generally smoothes the contour of an object and removes objects of small size; closing also tends to smooth the object contour, but it removes holes of small size. Facial feature extraction consists of localizing the most characteristic face components (eyes, nose, mouth, etc.) within images that depict human faces; extracting the facial components amounts to locating certain characteristic points. The eyes are a crucial source of information about the state of a human being: they are unaffected by the presence of facial hair (such as a beard or moustache), they are little altered by small in-depth rotations and by transparent spectacles, and knowledge of the eye positions allows the face scale and its in-plane rotation to be roughly identified.


Simple edge detection should be able to find the outside edge of the face very easily. Furthermore, the threshold on the edge detector can be set very high, so as to ignore smaller, less contrasting edges while still retrieving all the features of the face. Due to its structure, we used the Canny operator [13]; all the preceding morphological operations are shown in Fig.12.
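The pipeline of this section can be sketched as follows, as a rough OpenCV illustration under assumed kernel sizes and Canny thresholds (not the authors' exact settings): clean the skin mask with opening and closing, keep the largest connected component, and run a high-threshold Canny detector within it:

import cv2
import numpy as np

def clean_mask_and_edges(mask, gray):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erode+dilate: remove small specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # dilate+erode: fill small holes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # label 0 is the background
        mask = np.uint8(labels == largest) * 255
    edges = cv2.Canny(gray, 100, 200)  # high thresholds ignore weak, low-contrast edges
    return mask, cv2.bitwise_and(edges, edges, mask=mask)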

6 FACE FEATURE EXTRACTION

Feature extraction is one of the most important steps in face recognition; it usually attempts to reduce the high-dimensional data space to a low-dimensional feature vector. Dimension reduction of the feature vectors is essential for extracting the effective features and for reducing computational complexity in the classification step. A successful face recognition methodology depends heavily on the particular choice of features used to represent the face images. Feature selection in pattern recognition involves the derivation of salient features from raw input data, giving dimensionality reduction with enhanced discriminatory power. Face features can be represented using: (1) linear appearance-based approaches, which use holistic texture features applied to the whole face or a specific region of it, and which also extract statistical features of the face image [2]; and (2) face image transformations such as the DCT, FFT, and DWT, which provide valuable information about the image in much less memory.

6.1 Linear Appearance-Based Features

The defining characteristic of appearance-based algorithms is that they directly use the pixel intensity values in a face image as the features on which to base the recognition decision. The pixel intensities used as features are represented by single-valued variables. Appearance-based methods use training images and learning approaches to learn from the known face images; they rely on statistical analysis and machine learning techniques to find the relevant characteristics of face and non-face images. The classical linear appearance-based methods PCA, ICA, and LDA are introduced for efficient and effective representation of the feature space and for face recognition. Each algorithm derives its own representation (basis vectors) of the high-dimensional face vector space from a different statistical viewpoint. Linear combinations are particularly attractive because they are simple to compute and analytically tractable. The process of determining these basis vectors, and the statistical criteria used to do so, is what differentiates the three algorithms. The image data can be represented by concatenating each row or column of the image [13], [15]. All three algorithms can be expressed in the general case as a linear transformation of the image vector to the projection feature vector, as given by (2):

Y = W^T X    (2)

where W is the n × K transformation matrix, Y is the K × N feature vector matrix, and X is the higher-dimensional data matrix obtained by representing all the face images as vectors:

X = \{x_1, x_2, x_3, \ldots, x_N\}    (3)

Each x_i is a face vector of dimension n obtained from the M × N face image. From each of the face image vectors the average face is subtracted; the average face is given by (4):

\mathrm{AvgFace} = \frac{1}{N} \sum_{i=1}^{N} x_i    (4)

X' = X - \mathrm{AvgFace}    (5)

For all the algorithms, the initial step involves calculating the average face vector for each database; Fig.13.
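Equations (3)-(5) amount to a few lines of linear algebra; a minimal NumPy sketch with hypothetical variable names:

import numpy as np

def center_faces(images):
    # eq. (3): stack each M x N face image as a column vector of length n = M*N
    X = np.column_stack([img.ravel().astype(np.float64) for img in images])
    # eq. (4): AvgFace = (1/N) * sum_i x_i
    avg_face = X.mean(axis=1, keepdims=True)
    # eq. (5): X' = X - AvgFace
    return X - avg_face, avg_face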

6.1.1 PCA

The eigenface algorithm uses PCA for dimensionality reduction to find the vectors that best account for the distribution of face images within the entire image space [12]. The method attempts to provide the features that determine how the face images differ from one another; these features are then used to identify a face from a database of images. It is a simple statistical dimensionality reduction technique and perhaps the most popular and widely used method for the representation and recognition of human faces. The PCA basis vectors are defined as the eigenvectors of the scatter matrix S_T:

S_T = \sum_{i=1}^{M} (x_i - \mu)(x_i - \mu)^T    (6)

where \mu is the mean of all images and x_i is the i-th image vector. The projection matrix W_{PCA} is composed of the t eigenvectors corresponding to the t largest eigenvalues.
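A compact sketch of the eigenface computation follows; it uses the common N × N Gram-matrix trick rather than forming the full n × n scatter matrix, which is equivalent for the nonzero eigenvalues (an implementation choice, not necessarily the authors'):

import numpy as np

def pca_basis(X_centered, t):
    # Columns of X_centered are mean-subtracted face vectors (n x N)
    gram = X_centered.T @ X_centered          # N x N surrogate for S_T
    eigvals, eigvecs = np.linalg.eigh(gram)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:t]     # keep the t largest
    W = X_centered @ eigvecs[:, order]        # lift back to image space
    return W / np.linalg.norm(W, axis=0)      # unit-norm eigenfaces as columns

# Projection as in eq. (2): Y = W.T @ X_centered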



Fig.11. (a) Sample of ORL face image, (b) binarized image, (c) detected skin regions, and (d) background subtraction.


Fig.12. (a) Detected face features, (b) detected face boundary, (c) detected face image, and (d) localized face feature region.

Fig.13. Average faces obtained from subsets of the Faces94, ORL, and IFD databases, respectively.


6.1.2 ICA

ICA can be viewed as a generalization of PCA in which the higher-order dependencies in the input data are minimized and a set of statistically independent basis vectors is determined [15]. ICA for face recognition has been proposed under two architectures by Bartlett [16]. Architecture 1 aims at finding a set of statistically independent basis images, while Architecture 2 finds a factorial code, as in [17].
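As a rough sketch only, scikit-learn's FastICA can stand in for Bartlett's procedure; the PCA pre-reduction and the Architecture 1/2 distinction of [16], [17] are glossed over here, and the component count is an assumption:

from sklearn.decomposition import FastICA

def ica_codes(X_centered, n_components=40):
    # Rows of X_centered.T are individual faces; fit_transform returns the
    # statistically independent representation (one row of coefficients per face).
    ica = FastICA(n_components=n_components, max_iter=1000)
    return ica.fit_transform(X_centered.T)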

6.1.3 LDA

Here the low-energy discriminant features of the face are used to classify the images. Consider the different images of the same subject: these images have a lot of information in common, and this common information is not utilized by the PCA technique, which merely de-correlates the face images. Hence, a class-based approach has been proposed in which the faces of the same subject are grouped into separate classes; the variations between the images of different classes are discriminated using the eigenvectors, while at the same time the covariance within the same class is minimized [15], [16].

6.1.4 Statistical Features

Statistical features are extracted from face sub-images to form a robust feature vector as the initial dimension reduction step. Statistical features such as the mean, harmonic mean, standard deviation, variance, kurtosis, and skewness, together with intensity-based features such as the number of zero-crossing points and the maximum- and minimum-intensity pixels, are computed from each sub-band image. For all face images, 300 statistical features are chosen per face image.

6.2 Transformed Domain Features

Images can be represented by their original spatial representation or by frequency-domain coefficients; features that are not obviously present in one domain may become obvious in the other. The wavelet transform represents the spatial and frequency domains simultaneously, and its multiresolution analysis makes it even more appropriate for representing and extracting features across different scales. Using PCA and LDA to reduce the dimensionality requires a high computational cost and a large number of training samples. To overcome these problems, methods for extracting feature vectors in the FFT and DCT domains can be used [18].

6.2.1 FFT

The FFT is a computationally efficient algorithm for computing the discrete Fourier transform (DFT) and its inverse; because of its reduced computational cost, the FFT plays an important role [18]. The output of the transformation represents the image in Fourier (frequency) space, in which each point represents a specific frequency contained in the spatial-domain image. The magnitude component of the complex transformed output is taken as the extracted features after suitable thresholding, yielding only 400 features for the overall image for each subject of each database. This feature vector is later used as the input pattern to the neural network classifier based on FFT features (FFTNN).

6.2.2 DCT

The DCT is a powerful transform for extracting proper features for face recognition. After applying the DCT to the entire face images,

some of the coefficients are selected to construct feature vectors. The DCT has found popularity due to its comparative concentration of information in a small number of coefficients, its efficient computation (due to its relationship to the Fourier transform), and its increased tolerance to variations in illumination [18]. After selecting a suitable threshold, the feature vector for each image consists of 400 features; this is the input for the neural network classifier based on DCT features (DCTNN).
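A minimal sketch of DCT feature extraction is given below; the paper does not specify its coefficient-selection rule, so a fixed 20 × 20 low-frequency block (400 coefficients) is assumed:

import numpy as np
from scipy.fft import dctn

def dct_features(gray, k=400):
    coeffs = dctn(gray.astype(np.float64), norm='ortho')  # 2-D DCT of the face image
    side = int(np.sqrt(k))                                # 20 for k = 400
    return coeffs[:side, :side].ravel()                   # low-frequency block as features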

6.2.3 DWT

Wavelets have been used successfully in image processing; their ability to capture localized time-frequency information from an image motivates their use for feature extraction. The decomposition of the data into different frequency ranges allows us to isolate the frequency components introduced by intrinsic deformations due to expression, or by extrinsic factors (such as illumination), into certain sub-bands. The simplest application of the wavelet transform to face recognition therefore uses the wavelet coefficients directly as features. The wavelet transform can locally detect the multiscale edges of facial images: the coarse edge information exists in the lowest spatial-frequency sub-band, while finer edge information appears in the higher spatial-frequency sub-bands [18]. The output coefficients of the DWT with the db2 mother wavelet are used; for each face image they are truncated using three different thresholds to obtain only 500 features for the overall face image for each subject in every database. These features form the basis of the neural network classifier (DWTNN).
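A sketch using PyWavelets follows; the decomposition level and the single threshold value are assumptions (the paper tunes three different thresholds):

import numpy as np
import pywt

def dwt_features(gray, k=500, threshold=10.0):
    # Two-level db2 decomposition; keep the coarse approximation sub-band,
    # i.e. the lowest spatial-frequency coefficients
    coeffs = pywt.wavedec2(gray.astype(np.float64), 'db2', level=2)
    approx = coeffs[0].ravel()
    approx[np.abs(approx) < threshold] = 0.0  # truncate small coefficients
    return approx[:k]                         # first k coefficients as the feature vector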

7 FACE RECOGNITION

The task of feature extraction is to obtain features that are fed into a face classification system. Depending on the type of classification system, features can be local features such as lines or fiducial points, or facial features such as the eyes, nose, and mouth. After feature extraction, the feature selection stage eliminates feature redundancy to reduce the feature vector dimension. The face recognition and classification system is based on more than one classifier: a correlation-coefficient-based classifier (Corr); three different statistical techniques based on PCA [15], ICA [17], and LDA [16]; and feed-forward neural network (FFNN) classifiers, each based on separately extracted features.

7.1 Correlation Coefficient Based Classifier

As correlation techniques are computationally expensive and require large amounts of storage, it is natural to pursue dimensionality reduction schemes. Instead of using the complete face image for calculating the correlation coefficient, the eyes, nose, and mouth regions of the face image are used. The correlation coefficient is taken as the similarity score between two images, and its maximum value among the different subjects is used to find the nearest neighbor of a test image. The correlation coefficient between the test image and a given training image is defined as:

r = \frac{\sum_i (x_i - x_m)(y_i - y_m)}{\sqrt{\sum_i (x_i - x_m)^2 \sum_i (y_i - y_m)^2}}    (7)

The maximum of all the image correlation coefficients is taken, and the test image is classified into the category of the corresponding training image; this method is called local correlation classification [19].
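A direct sketch of eq. (7) and the resulting nearest-neighbor rule (illustrative helper names; the cropping of the eye, nose, and mouth regions is omitted):

import numpy as np

def correlation_score(test_region, train_region):
    # Pearson r of eq. (7) between two same-sized regions (eyes, nose, or mouth)
    x = test_region.ravel().astype(np.float64)
    y = train_region.ravel().astype(np.float64)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd))

def classify(test_region, train_regions, labels):
    # The maximum correlation coefficient picks the nearest training image
    scores = [correlation_score(test_region, tr) for tr in train_regions]
    return labels[int(np.argmax(scores))]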



Fig.18. Average elapsed time for each classifier.

7.2 Classifiers Based on PCA, ICA, and LDA

The recognition process consists of comparing the test face data in the feature space with the set of feature-space data obtained from the training face images. PCA-based approaches typically include two phases: training and classification. In the training phase, an eigenspace is established from the training samples using PCA, and the training face images are mapped into the eigenspace for classification. In the classification phase, an input face is projected into the same eigenspace and classified by an appropriate classifier. In contrast to PCA, which encodes information in an orthogonal linear space, linear discriminant analysis (LDA), also known as the Fisherfaces method, is another appearance-based technique; it encodes discriminatory information in a linearly separable space whose bases are not necessarily orthogonal [2]. The similarity between the test projection and the training projections is measured to determine whether the test face is present in the trained set of images; this measure is seen as the recognition ability of the algorithm. PCA face recognition, ICA using Architecture 2 [17], and the LDA face recognition algorithm [16] have been tested on the Faces94, ORL, and IFD databases.
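Whatever basis W is learned (PCA, ICA, or LDA), the classification step reduces to a nearest-neighbor search in the projected space. A minimal sketch, assuming a Euclidean distance (the paper does not name its distance measure):

import numpy as np

def recognize(test_vec, W, train_proj, labels, avg_face):
    # Project the centered test face as in eq. (2), then find the closest
    # training projection; train_proj is K x N_train, one column per image.
    y = W.T @ (test_vec.reshape(-1, 1).astype(np.float64) - avg_face)
    dists = np.linalg.norm(train_proj - y, axis=0)
    return labels[int(np.argmin(dists))]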



7.3 FFNN Classifiers

The FFNN is a suitable structure for nonlinearly separable input data and a good tool for classification purposes. In the FFNN model, the neurons are organized in layers: the neurons in a layer receive input from the previous layer and feed their output to the next layer, and connections to neurons in the same or previous layers are not permitted; Fig.14 shows the architecture. The weights are adjusted by a supervised training procedure called back-propagation (BP). Back-propagation is a kind of gradient descent method, which searches for an acceptable local minimum in order to achieve a minimal error. Different types of NNs with a considerable number of adjustable parameters have been investigated through trial and error; these parameters include the transfer function, the training function, the number of hidden layers, and the maximum number of epochs [2], [20].
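A minimal scikit-learn stand-in for one such network is sketched below; the layer size, activation, and epoch count are illustrative placeholders for the trial-and-error tuning described above, not the authors' settings:

from sklearn.neural_network import MLPClassifier

def train_ffnn(train_features, train_labels):
    # One feed-forward network per feature set (e.g., DWTNN on the 500 DWT features);
    # 'sgd' with a logistic activation approximates classic back-propagation training
    net = MLPClassifier(hidden_layer_sizes=(64,), activation='logistic',
                        solver='sgd', max_iter=500)
    net.fit(train_features, train_labels)  # supervised training phase
    return net                             # use net.predict(...) for classification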


Fig.15. Recognition rate of the tested classifiers for each database.


Fig.16. Average recognition rate of each tested classifier.


The results were organized by testing and evaluating all the face recognition algorithms on each database. All techniques were implemented in MATLAB, and performance was determined in terms of recognition accuracy and elapsed time using the following system configuration: a 2.4 GHz PC with 1 GB of RAM running the Windows XP operating system. The quantitative measures used are the accuracy and the elapsed time of each classifier on all databases, as in Fig.15 and Fig.17. The average recognition rate and average elapsed time for each classifier are shown in Fig.16 and Fig.18.


Fig.17. Elapsed time of the tested classifiers for each database.

8 CONCLUSION

In this paper, an efficient method for personal identification and verification by means of human face patterns is presented.


To process face patterns in an efficient and effective way compared to existing methods, many simple and effective image-processing methods have been presented for face preprocessing, face detection, and feature extraction. Experiments were performed under different conditions by varying the input face image dataset. The study showed that, among the linear appearance-based classifiers, LDA performs better than ICA and PCA in terms of recognition accuracy. The computational overheads of LDA and PCA are almost similar, while ICA has a very long execution time. In addition, the neural network based on DWT features performs better than classifiers based on other features.

Fig.14. Architecture of the FFNN for classification.

9 REFERENCES

[1] A. Jain, A. Ross, and S. Prabhakar, "An Introduction to Biometric Recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, January 2004.
[2] K. Delac and M. Grgic, Face Recognition, I-Tech Education and Publishing, Vienna, Austria, June 2007.
[3] V. P. Nagaraja and A. J. Charapanamjeri, "Evaluation of Biometrics," IJCSNS International Journal of Computer Science and Network Security, vol. 9, no. 9, September 2009.
[4] R. Chellappa, C. L. Wilson, and S. Sirohey, "Human and Machine Recognition of Faces: A Survey," Proceedings of the IEEE, vol. 83, pp. 705-740, May 1995.
[5] W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips, "Face Recognition: A Literature Survey," Technical Report CAR-TR-948, University of Maryland, Aug. 2002.
[6] P. J. Phillips, "Human Identification Technical Challenges," IEEE International Conference on Image Processing, vol. 1, pp. 49-52, 2002.
[7] L. Spacek, Computer Vision Science Research Projects, Faces94 face database: cswww.essex.ac.uk/mv/allfaces/faces94.html.
[8] AT&T Laboratories Cambridge, ORL face image database: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.
[9] V. Jain and A. Mukherjee, Indian Face Database: http://vis-www.cs.umass.edu/~vidit/IndianFaceDatabase; last accessed May 2011.
[10] V. V. Starovoitov, D. I. Samal, and D. V. Briliuk, "Image Enhancement for Face Recognition," International Conference on Iconics, St. Petersburg, Russia, 2003, in press.
[11] S. M. Pizer, E. P. Amburn, R. Cromartie, and K. Zuiderveld, "Adaptive Histogram Equalization and Its Variations," Computer Vision, Graphics, and Image Processing, vol. 39, pp. 355-368, 1987.
[12] S. Asadi, H. Hassanpour, and A. Akbar Pouyan, "Texture Based Image Enhancement Using Gamma Correction," Middle East Journal of Scientific Research, vol. 6, no. 6, 2010.
[13] K. Raja and L. M. Patnaik, "Feature Extraction Based Face Recognition, Gender, and Age Classification," International Journal on Computer Science and Engineering, vol. 2, pp. 14-23, 2010.
[14] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, vol. 3, pp. 72-86, 1991.
[15] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 711-720, 1997.
[16] K. Delac, M. Grgic, and S. Grgic, "Independent Comparative Study of PCA, ICA, and LDA on the FERET Data Set," International Journal of Imaging Systems and Technology, vol. 15, no. 5, pp. 252-260.
[17] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face Recognition by Independent Component Analysis," IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1450-1464, November 2002.
[18] M. S. Nixon and A. S. Aguado, Feature Extraction and Image Processing, 2nd ed., 2008.
[19] J. Rodgers and W. Nicewander, "Thirteen Ways to Look at the Correlation Coefficient," The American Statistician, vol. 42, pp. 59-66, 1988.
[20] P. Latha, L. Ganesan, and S. Annadurai, "Face Recognition Using Neural Networks," Signal Processing: An International Journal (SPIJ), vol. 3.
