Three dimensional face recognition based on geodesic and Euclidean distances

Shalini Gupta (a), Mia K. Markey (b), J. K. Aggarwal (a), Alan C. Bovik (a)

(a) Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
(b) Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA

Further author information: (Send correspondence to Shalini Gupta)
Shalini Gupta: E-mail: [email protected], Telephone: 1 512 471 8660
Mia K. Markey: E-mail: [email protected], Telephone: 1 512 471 1711
J. K. Aggarwal: E-mail: [email protected], Telephone: 1 512 471 1369
Alan C. Bovik: E-mail: [email protected], Telephone: 1 512 471 5370

ABSTRACT

We propose a novel method to improve the performance of existing three dimensional (3D) human face recognition algorithms that employ Euclidean distances between facial fiducial points as features. We further investigate a novel 3D face recognition algorithm that employs geodesic and Euclidean distances between facial fiducial points. We demonstrate that this algorithm is robust to changes in facial expression. Geodesic and Euclidean distances were calculated between pairs of 25 facial fiducial points. For the proposed algorithm, geodesic distances and ‘global curvature’ characteristics, defined as the ratio of the geodesic to the Euclidean distance between a pair of points, were employed as features. The most discriminatory features were selected using stepwise linear discriminant analysis (LDA). These were projected onto 11 LDA directions, and face models were matched using the Euclidean distance metric. With a gallery set containing one image each of 105 subjects and a probe set containing 663 images of the same subjects, the algorithm produced EER=1.4% and a rank 1 RR=98.64%. It performed significantly better than existing algorithms based on principal component analysis and LDA applied to face range images. Its verification performance for expressive faces was also significantly better than that of an algorithm that employed Euclidean distances between facial fiducial points as features.

Keywords: 3D, face recognition, geodesic distances, range image, biometrics, anthropometric landmarks

1. INTRODUCTION

Three dimensional (3D) human face recognition is emerging as a significant biometric technology. Research interest in 3D face recognition has increased in recent years due to the availability of improved 3D acquisition devices and processing algorithms. Three dimensional face recognition also helps to resolve some of the issues associated with two dimensional (2D) face recognition. Since 2D systems employ intensity images, their performance is reported to degrade significantly with variations in facial pose and ambient illumination.1 Three dimensional face recognition systems, on the other hand, have been reported to be less sensitive to changes in ambient illumination during image capture than 2D systems.2 Three dimensional face models can also be rigidly rotated relatively easily to reduce variations in facial pose. Hence, considerable research attention is now being directed towards developing 3D and 2D+3D face recognition systems. Although numerous algorithms for 3D face recognition have been proposed that employ whole regions of the facial surface, algorithms that employ features from facial sub-regions remain largely unexplored. Furthermore, the few algorithms of this flavor that have been developed have met with limited success. Another open problem in 3D face recognition is to develop algorithms that are robust to changes in facial expression. We address both of these issues in this paper and present a novel algorithm that employs geodesic and Euclidean distances between facial fiducial points and is robust to changes in facial expression. The first contribution of this paper is that we propose a novel algorithm, which substantially improves the performance of existing 3D face recognition techniques that employ Euclidean distances between pairs of facial

fiducial points as features. Our technique employs the stepwise linear discriminant analysis (LDA) procedure for feature selection and a Fisher’s LDA classifier for final classification. The second contribution of this paper is that we investigate a 3D face recognition algorithm that employs geodesic and Euclidean distances between facial fiducial points. We compare the performance of the proposed technique against two existing state-of-the-art holistic (whole face) statistical 3D face recognition algorithms, based on (a) principal component analysis (PCA) and (b) LDA applied to the z values of facial range images, and against an algorithm that employs only Euclidean distances between anthropometric fiducial points as features. We study the sensitivity of the algorithms to changes in facial expression and demonstrate that our proposed technique, which employs geodesic distances between facial fiducial points, is robust to changes in facial expression. The motivation for studying geodesic distances between facial fiducial points for 3D face recognition is twofold. First, a number of anthropometric facial proportions that are employed to characterize the shape of human faces are based on on-the-surface facial distances.3 Second, a recent study suggested that it may be possible to model changes in facial expression as isometric deformations of the facial surface.4 Under this assumption, intrinsic properties of the facial surface, including the geodesic distances between pairs of points on the face, would not be altered when the facial expression changes.

2. REVIEW OF PREVIOUS WORK

The majority of 3D face recognition studies have focused on developing holistic statistical techniques based on the appearance of face range images or on techniques that employ 3D surface matching. Holistic statistical 3D face recognition techniques that employ PCA, LDA, or hidden Markov models have been proposed.5–9 These techniques are straightforward extensions of techniques that were fairly successful with 2D face images.10–12 The 3D PCA algorithm is also regarded as a baseline for evaluating the performance of other 3D face recognition algorithms.13 Although statistical holistic techniques have met with a degree of success for 3D face recognition, it is intuitively less obvious what discriminatory information about faces they encode. In order to match the surfaces of two 3D faces, they are iteratively aligned as closely to each other as possible and a suitable measure quantifies the dissimilarity between them.14 This technique has a high computational cost, which can become prohibitive for searching large facial databases. Studies that consider local geometric features of the 3D facial surface are fewer in number. Notable among them are those by Gordon15 and Moreno et al.16 In these studies, the authors employed 3D Euclidean distances and angles between face landmarks, and their local surface curvature and surface area properties, for face recognition. In both studies, facial landmarks were automatically segmented using the H-K segmentation algorithm.17 All features were ranked in decreasing order of their discriminability as measured by Fisher’s criterion.18 The top few features were employed in a Euclidean distance nearest neighbor classifier. Another face recognition algorithm employed ‘point signatures’ at fiducial points on the face surface as features.19, 20 A point signature is a measure of the curvature about a point. This technique was found to be robust to variations in pose and facial expression. In another study, this algorithm was found to be superior to methods based on PCA applied to 3D point clouds, fitting implicit polynomials of degree 4 and 6 to facial surfaces, and PCA applied to range images.21 Its performance was equivalent to an approach in which the positions of only 10 fiducial points were compared for face recognition after iteratively aligning the two face models. Two dimensional face recognition techniques based on local facial features have been considerably successful. The Face Recognition Vendor Test 2002 (FRVT 2002) was an independent technology evaluation of state-of-the-art 2D face recognition algorithms on a large common database of images. At FRVT 2002, two of the top three performing algorithms were based on local facial features.1 One among them was a technique called ‘local feature analysis’.22 It employed statistical kernels that captured variation in local regions of the face for recognition. The other top ranked algorithm that employed local facial features was called ‘elastic bunch graph matching’ (EBGM) and was developed by Wiskott et al.23 In this technique, a face was represented by a data structure called the elastic bunch graph, comprised of 2D Euclidean distances between a set of fiducial points on the face image and Gabor wavelet coefficients at these points. EBGM was reported to be robust to changes in ambient illumination and facial expression.


Three dimensional face recognition techniques that employ features from sub-regions of the face are advantageous in some respects relative to holistic statistical or surface matching techniques. First, the choice of sub-regions employed for face recognition can be guided by domain specific knowledge about shape characteristics of human faces that are known to be unique to each individual. Second, they may be affected less by global changes in the appearance of face range images such as changes in facial pose, expression, occlusions, holes, and the presence of noise. However, techniques for 3D face recognition that employ features from facial sub-regions remain largely unexplored. This is partly due to the fact that such algorithms require an additional step of reliably locating and segmenting the sub-regions. Furthermore, the overall performance of these algorithms may depend on accurate localization. Nonetheless, if sub-regions of the face can be reliably located, evidence from the literature on both 2D and 3D face recognition suggests that powerful techniques for 3D face recognition could be developed. Hence, there is a need to further explore the potential of 3D face recognition algorithms that employ features from facial sub-regions and fiducial points, and to combine them with robust techniques for facial feature detection.

3. MATERIALS AND METHODS

3.1. Data

Three dimensional face models for the study were acquired by an MU-2 stereo imaging system manufactured by 3Q Technologies Ltd. (Atlanta, GA). The system simultaneously acquires both shape and texture information. The 3D models had a resolution of 0.96 mm along the x and y dimensions and a resolution of 0.32 mm along the z dimension. The models were rigidly aligned to frontal orientation, and range and color images of the faces were constructed. Hence, at each (x, y) location of the image, a z value and facial color information were available. The facial range images were median filtered to remove impulse noise, interpolated to remove holes, and low pass filtered with a Gaussian kernel of size 7 × 7 and σ = 1. The data set contained 1128 3D facial models of 105 subjects. It was partitioned into disjoint training and test sets. The training set contained 360 range images of 12 subjects (30 images per subject) in neutral or expressive modes. The test set was further partitioned into a gallery set and a probe set. The gallery set contained one image each of the 105 subjects with a neutral expression. The probe set contained another 663 images of the gallery subjects with a neutral or an arbitrary expression. In the probe set, the number of range images per subject varied from a minimum of 1 to a maximum of 55.
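
As a rough illustration of the preprocessing chain just described, the following Python sketch applies a median filter, fills holes by interpolation, and low-pass filters with a 7 × 7 Gaussian kernel (σ = 1). This is not the authors' code; the 3 × 3 median window, the zero-depth hole-marking convention, and the SciPy routines are assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.interpolate import griddata

def preprocess_range_image(z, hole_value=0.0):
    """Median filter, interpolate over holes, then Gaussian low-pass filter,
    in the order described in the paper. `z` is a 2-D float range image."""
    holes = (z == hole_value)                  # assumed convention for missing depth

    # 1. Remove impulse noise with a median filter (3x3 window is an assumption).
    z = ndimage.median_filter(z, size=3)

    # 2. Fill the holes by interpolating from the surrounding valid pixels.
    if holes.any():
        z[holes] = griddata(np.argwhere(~holes), z[~holes],
                            np.argwhere(holes), method='nearest')

    # 3. Low-pass filter with a 7x7 Gaussian kernel, sigma = 1, as in the paper
    #    (truncate=3 gives a radius of 3 pixels, i.e. a 7x7 support).
    return ndimage.gaussian_filter(z, sigma=1.0, truncate=3.0)
```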

3.2. Feature Extraction

We identified 19 facial anthropometric proportions that could be reliably measured from the 3D face models and which were reported to be highly variable (standard deviation ≥ 5) among adult human populations.3 Twenty five fiducial points associated with these anthropometric proportions were manually located on the color images of all 1128 models (Figure 1(a)). This was done because some fiducial points (e.g. the corners of the eyes) were easier to locate on the color images. Since a pair of range and color images of a subject were perfectly aligned, the locations of the fiducial points on the range images were obtained automatically (Figure 1(b)). We calculated 300 Euclidean distances and 300 geodesic distances between all possible pairs of the 25 fiducial points. We employed Dijkstra’s shortest path algorithm24 for calculating geodesic distances. Software for Dijkstra’s algorithm is available as part of the IsomapR1 software package.25 Geodesic distance calculations were based on defining k = 8 connected nearest neighbors about each point. We also calculated 299 ‘global curvature’ features, each defined as the ratio of the geodesic distance to the Euclidean distance between a pair of fiducial points. Intuitively, this feature quantifies how much a surface bends or curves along a particular direction, at a much larger scale than what can be captured by local surface curvature measures such as mean and Gaussian curvature. We did not calculate this feature for the pair of fiducial points on the upper and lower lip, since the distance between them measures the extent to which the mouth is open and is not a distance along the facial surface.
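
The distance features described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation (which used the IsomapR1 package): geodesic distances are computed with Dijkstra's algorithm on a k = 8 nearest-neighbor graph built over the facial surface points, Euclidean distances are computed directly between the fiducial points, and the 'global curvature' feature is their ratio. The exclusion of the upper-lip/lower-lip pair is omitted for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def distance_features(points, fiducial_idx, k=8):
    """points: (N, 3) facial surface points; fiducial_idx: indices (within
    `points`) of the surface points nearest to the 25 fiducial points."""
    # Build the k = 8 nearest-neighbor graph used for geodesic distances.
    tree = cKDTree(points)
    dists, nbrs = tree.query(points, k=k + 1)      # first neighbor is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    graph = csr_matrix((dists[:, 1:].ravel(), (rows, nbrs[:, 1:].ravel())),
                       shape=(len(points), len(points)))

    # Geodesic distances: shortest graph paths between the fiducial points.
    geo = dijkstra(graph, directed=False, indices=fiducial_idx)[:, fiducial_idx]

    # Straight-line (Euclidean) distances between the same fiducial points.
    fid = points[fiducial_idx]
    euc = np.linalg.norm(fid[:, None, :] - fid[None, :, :], axis=-1)

    # 'Global curvature': ratio of geodesic to Euclidean distance per pair.
    curv = np.where(euc > 0, geo / np.maximum(euc, 1e-12), 0.0)
    return euc, geo, curv
```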



Figure 1. (a) The facial fiducial points on a color image; (b) the facial fiducial points on a range image.

3.3. 3D Face Recognition Algorithms

We employed geodesic distances and global curvature features between facial fiducial points for the proposed algorithm (GEO CURV LDA). We selected a subset of 117 discriminatory geodesic features by applying stepwise linear discriminant analysis (LDA) to the 360 images in the training set. Stepwise LDA is a procedure that iteratively adds and removes features so as to maximize the ratio of the between-class separation to the total within-class separation of the model.26 It is a wrapper method for feature selection that considers the overall discrimination ability of a subset of features and not just the discrimination ability of each individual feature. The procedure terminates when no more features can be added or removed based on the specified significance levels for entry and removal of features. In our experiments, both of these significance levels were set to 0.05. Stepwise LDA was implemented using the software package available from SAS Institute Inc., NC, USA. Using a similar procedure, we separately selected 131 discriminatory features from amongst the 299 global curvature features. The selected 117 geodesic and 131 global curvature features were then pooled together and reduced to a final combined set of 146 features using a third stepwise LDA procedure. Fisher’s linear discriminant analysis was applied to these 146 features and 11 projection directions were learned from the training data set. All images in the test data set were projected onto the LDA directions, and the final match score between two faces was established by means of the Euclidean distance metric in the 11 dimensional feature space. We also implemented four existing face recognition algorithms. Two of them were based on holistic statistical methods in which (a) PCA (Z PCA) and (b) LDA (Z LDA) was applied to the z values of range images. For these algorithms, a subsection of each range image of size 354 × 341 pixels, which enclosed the main facial features of all faces, was employed (Figure 2). For Z PCA, 42 eigendirections that accounted for 99% of the variance of the data were learned using the training data set. All images in the test data set were projected onto these directions and the L1 norm was employed for comparing faces in the 42 dimensional PCA space. For the second holistic algorithm, based on LDA applied to z values (Z LDA), the dimensionality of the data was first reduced to 348 using PCA. This was done to ensure that the within-class scatter matrix used in the computation of LDA was


Figure 2. The figure shows the region of the range images employed for the Z PCA and Z LDA 3D face recognition algorithms.

full rank and hence non-singular. Eleven LDA directions were then learned from the 348 dimensional training data. All images in the test data set were projected onto these 11 LDA directions before matching by means of the Euclidean distance metric. The two other algorithms that we implemented employed only 3D Euclidean distances between facial fiducial points as features. For the first (EUC FILTER), we employed the ‘filter’ method for feature selection that was proposed by Gordon15 and Moreno et al.16 Individual features were ranked in descending order of their Fisher’s ratio of the between-class variance to the total within-class variance. The top 106 features were selected and employed in a nearest neighbor classifier with the Euclidean distance norm in the 106 dimensional feature space. The fourth algorithm that we implemented also employed Euclidean distances between facial fiducial points as features (EUC LDA). However, for this algorithm, as for the GEO CURV LDA algorithm, 106 features from among the 300 Euclidean distance features were selected by applying stepwise LDA to the training data set. These were further reduced to 11 by learning a set of LDA directions, and images in the test data set were projected onto them before matching by means of the Euclidean distance metric.
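
For concreteness, the sketch below illustrates the two feature-selection and matching strategies described in this section: the Fisher-ratio 'filter' ranking used by EUC FILTER, and the LDA projection followed by Euclidean-distance matching shared by Z LDA, EUC LDA, and GEO CURV LDA. It is an assumed reconstruction using scikit-learn; the stepwise LDA selection, which was performed in SAS, is not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisher_ratio_rank(X, y, top_k=106):
    """EUC_FILTER-style 'filter' selection: rank each feature by Fisher's ratio
    of between-class variance to total within-class variance, keep the top_k."""
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - grand_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    ratio = between / np.maximum(within, 1e-12)
    return np.argsort(ratio)[::-1][:top_k]         # indices of the top-ranked features

def lda_match_scores(X_train, y_train, gallery, probes, n_dirs=11):
    """Learn LDA projection directions on the training set, project gallery and
    probe feature vectors, and score each probe against each gallery face by
    Euclidean distance (smaller score = better match)."""
    # With 12 training subjects, at most 11 discriminant directions exist,
    # consistent with the 11 LDA directions used in the paper.
    lda = LinearDiscriminantAnalysis(n_components=n_dirs).fit(X_train, y_train)
    g = lda.transform(gallery)                     # (n_gallery, n_dirs)
    p = lda.transform(probes)                      # (n_probes,  n_dirs)
    return np.linalg.norm(p[:, None, :] - g[None, :, :], axis=-1)
```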

3.4. Performance Evaluation

The performance of each 3D face recognition algorithm was evaluated separately for the entire probe set, for neutral probes only, and for expressive probes only. Verification performance of the face recognition algorithms was evaluated using the receiver operating characteristic (ROC) methodology.27 A ROC curve represents the tradeoff between the false acceptance rate (FAR) and the false rejection rate (FRR) as the classifier output threshold value is varied. Two quantitative measures of verification performance, the equal error rate (EER) and the area under the ROC curve (AUC), were calculated. Identification performance was evaluated by means of cumulative match characteristic (CMC) curves and the rank 1 recognition rate (RR). Statistical 95% confidence intervals for the EER, AUC, and rank 1 RR of each algorithm were obtained empirically by bootstrap sampling the outputs of each algorithm.28–30 The performances of the various algorithms were compared using their AUC values and rank 1 RR values. The criterion employed for establishing statistical differences in performance when comparing two algorithms was that if the observed quantity for either algorithm fell within the 95% confidence interval of the other, then the performances of the two algorithms were regarded as not statistically significantly different. Otherwise, they were regarded as significantly different. It should be noted that while the EER represents the performance of a classifier at only one operating threshold, the AUC represents the overall performance of the classifier over the entire range of thresholds. Hence, we employed the AUC and not the EER for comparing the verification performance of two classifiers.
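
A minimal sketch of the verification metrics and bootstrap confidence intervals is given below, assuming the match scores are distances (smaller is a better match) and that the reported AUC is the area under the FRR-versus-FAR tradeoff curve, which is consistent with the small AUC values reported in Table 2. The resampling scheme shown (independently resampling genuine and impostor scores) is one plausible reading of the bootstrap procedure, not necessarily the exact one used.

```python
import numpy as np

def verification_metrics(genuine, impostor, n_thresholds=1000):
    """EER and area under the FRR-vs-FAR curve from 1-D arrays of genuine and
    impostor match distances, by sweeping a decision threshold."""
    thr = np.linspace(min(genuine.min(), impostor.min()),
                      max(genuine.max(), impostor.max()), n_thresholds)
    far = np.array([(impostor <= t).mean() for t in thr])   # false accept rate
    frr = np.array([(genuine > t).mean() for t in thr])     # false reject rate
    i = np.argmin(np.abs(far - frr))
    eer = 0.5 * (far[i] + frr[i])                            # equal error rate
    order = np.argsort(far)
    auc = np.trapz(frr[order], far[order])                   # smaller is better
    return eer, auc

def bootstrap_ci(genuine, impostor, metric=verification_metrics,
                 n_boot=1000, alpha=0.05, seed=0):
    """Empirical (alpha/2, 1 - alpha/2) quantiles of the metric under
    resampling of the match scores with replacement."""
    rng = np.random.default_rng(seed)
    stats = [metric(rng.choice(genuine, len(genuine), replace=True),
                    rng.choice(impostor, len(impostor), replace=True))
             for _ in range(n_boot)]
    return np.quantile(np.asarray(stats), [alpha / 2, 1 - alpha / 2], axis=0)
```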


4. RESULTS

Table 1 and Table 2 present the equal error rates and the AUC values for the verification performance of the 3D face recognition algorithms that were implemented in this study. The corresponding rank 1 recognition rates are presented in Table 3. The 95% confidence intervals for each observed quantity, calculated empirically using a bootstrap procedure, are also presented in the tables.

Table 1. The observed EER values and their 0.025 and 0.975 quantiles for the verification performance of the 3D face recognition algorithms. N-N represents the performance of a system for the neutral probes only, N-E for the expressive probes only, and N-All for all probes.

Algorithm        N-N EER (%)   N-N CI        N-E EER (%)   N-E CI        N-All EER (%)   N-All CI
Z PCA            18.1          [15.8 20.3]   13.4          [11.1 17.3]   16.5            [13.7 20.8]
Z LDA            6.5           [5.2 8.1]     3.9           [2.2 6.2]     5.7             [4.6 6.6]
EUC FILTER       5.0           [4.1 6.1]     10.2          [7.8 13.8]    7.4             [6.4 8.3]
EUC LDA          0.9           [0.7 1.5]     2.9           [1.7 4.0]     1.6             [1.1 2.2]
GEO CURV LDA     1.2           [0.7 2.2]     1.2           [0.3 3.1]     1.4             [0.9 2.1]

Table 2. The observed AUC values and their 0.025 and 0.975 quantiles for verification performance of 3D face recognition algorithms. N-N represents performance of a system for the neutral probes only, N-E for the expressive probes only and N-All for all probes.

Algorithm        N-N AUC   N-N CI             N-E AUC   N-E CI             N-All AUC   N-All CI
Z PCA            0.1106    [0.0902 0.1365]    0.0694    [0.0480 0.1065]    0.0998      [0.0834 0.1227]
Z LDA            0.0086    [0.0059 0.0115]    0.0078    [0.0015 0.0199]    0.0082      [0.0054 0.0131]
EUC FILTER       0.0106    [0.0070 0.0148]    0.0377    [0.0259 0.0496]    0.0207      [0.0164 0.0258]
EUC LDA          0.0009    [0.0005 0.0018]    0.0032    [0.0013 0.0065]    0.0017      [0.0010 0.0024]
GEO CURV LDA     0.0009    [0.0005 0.0014]    0.0007    [0.0003 0.0014]    0.0009      [0.0006 0.0014]

Table 3. The observed rank 1 RR values and their 0.025 and 0.975 quantiles for identification performance of 3D face recognition algorithms. N-N represents performance of a system for the neutral probes only, N-E for the expressive probes only and N-All for all probes.

Algorithm        N-N RR (%)   N-N CI           N-E RR (%)   N-E CI           N-All RR (%)   N-All CI
Z PCA            70.21        [66.46 74.17]    68.31        [61.20 74.86]    69.68          [65.91 73.15]
Z LDA            91.25        [88.75 93.75]    95.10        [91.80 97.81]    92.31          [90.34 94.27]
EUC FILTER       85.42        [82.00 88.23]    70.49        [63.39 77.05]    81.30          [78.30 84.16]
EUC LDA          97.92        [96.46 99.17]    96.72        [93.99 98.91]    97.59          [96.38 98.64]
GEO CURV LDA     98.75        [97.60 99.58]    98.36        [96.17 100.0]    98.64          [97.74 99.47]

4.1. Effect of Classification Algorithm

Between the two face recognition algorithms that employed only 3D Euclidean distances between facial fiducial points as features, EUC FILTER and EUC LDA, the EUC LDA algorithm performed significantly better (Table


2, Table 3, Figure 3). This trend was observed for the verification as well as the recognition performance of the algorithms. Overall, EUC LDA had an AUC of 0.0017, CI=[0.0010 0.0024], and a rank 1 RR of 97.59%, CI=[96.38 98.64], for all probe images. The corresponding values for EUC FILTER were AUC=0.0207, CI=[0.0164 0.0258], and rank 1 RR=81.30%, CI=[78.30 84.16]. Thus, for Euclidean distance features, the algorithm that employed stepwise LDA feature selection as well as an LDA classifier was clearly superior, and all other classifiers were compared against it.


Figure 3. (a) ROC curves depicting verification performance and (b) CMC curves depicting identification performance of 3D face recognition algorithms that employed 3D Euclidean distances between facial fiducial points as features, but different feature selection and classification algorithms.

4.2. Holistic vs. Facial Sub-region Based Algorithms

Both of the algorithms based on distances between facial fiducial points, EUC LDA and GEO CURV LDA (for all probes, AUC=0.0009, CI=[0.0006 0.0014]; rank 1 RR=98.64%, CI=[97.74 99.47]), also performed significantly better than the two holistic statistical 3D face recognition algorithms, Z PCA (for all probes, AUC=0.0998, CI=[0.0834 0.1227]; rank 1 RR=69.83%, CI=[65.91 73.15]) and Z LDA (for all probes, AUC=0.0082, CI=[0.0054 0.0131]; rank 1 RR=92.31%, CI=[90.34 94.27]). Furthermore, the Z PCA algorithm displayed the poorest performance of all algorithms tested in this study. The Z LDA algorithm performed significantly better than the Z PCA algorithm (Table 2, Table 3, Figure 4).

4.3. Effect of Distance Features

Differences were also observed between the performance of the EUC LDA algorithm, which employed only the Euclidean distance features, and the GEO CURV LDA algorithm, which employed the geodesic and global curvature features. Although the overall performance of both of these algorithms on all probes was not statistically different, a closer examination of their verification performance for expressive probes revealed differences between the two. Two observations about the verification performance of EUC LDA and GEO CURV LDA can be made by examining Table 2. First, while the AUC value of GEO CURV LDA was not statistically different for neutral (0.0009, CI=[0.0005 0.0014]) and expressive probes (0.0007, CI=[0.0003 0.0014]), the performance of EUC LDA was significantly better for neutral probes (AUC=0.0009, CI=[0.0005 0.0018]) than for expressive probes (AUC=0.0032, CI=[0.0013 0.0065]). Second, the verification performance of GEO CURV LDA and EUC LDA for neutral probes was not statistically different; however, for expressive probes, GEO CURV LDA



Figure 4. (a) ROC curves depicting verification performance and (b) CMC curves depicting identification performance of the 3D face recognition algorithms based on distances between facial fiducial points and of the holistic 3D face recognition algorithms, for all probe images.

performed significantly better than EUC LDA. This is also evident from the ROC curves of these two classifiers for expressive probes only (Figure 5(a)). The improvement in verification performance for expressive probes also resulted in significantly better performance for all probes for the GEO CURV LDA algorithm relative to the EUC LDA algorithm (Table 2). A similar trend of GEO CURV LDA (rank 1 RR=98.36%, CI=[96.17% 100%]) performing better than EUC LDA (rank 1 RR=96.72%, CI=[93.99% 98.91%]) at identifying expressive probe faces was also observed. However, the difference between their recognition rates was not statistically significant. The CMC curves for these two algorithms for expressive probes only (Figure 5(b)) showed that the curve for GEO CURV LDA was consistently higher than that for EUC LDA. Furthermore, GEO CURV LDA attained a 100% recognition rate at nearly half the rank at which it was attained by EUC LDA.

5. DISCUSSION

In this paper we investigated 3D face recognition algorithms that employed geodesic distances and global curvature features between facial fiducial points. The performance of the proposed algorithm was compared with other existing state-of-the-art 3D face recognition algorithms. A number of interesting observations surfaced in the study. The EUC LDA algorithm that we developed in this study performed significantly better than the EUC FILTER algorithm that had been proposed previously in the literature. It can be inferred from this result that for 3D face recognition algorithms that employ Euclidean distances between fiducial points as features, the choice of appropriate feature selection and classification algorithms can critically affect performance. Specifically, when stepwise LDA feature selection followed by the LDA classifier was employed, a significant improvement in performance was achieved relative to when a filter method for feature selection and the nearest neighbor classifier were employed. Furthermore, the EUC LDA classifier achieved very high performance (for all probes, RR=97.59%, AUC=0.0017), which points towards the likelihood that faces might lie on a linear subspace of the feature space defined by the Euclidean distances between facial fiducial points.



Figure 5. (a) ROC curves depicting verification performance and (b) CMC curves depicting identification performance of the EUC LDA and GEO CURV LDA 3D face recognition algorithms for expressive probes only.

In this study, 3D face recognition algorithms that employed distances between facial fiducial points as features also performed significantly better than statistical holistic 3D face recognition algorithms in which PCA or LDA was applied to the z values of face range images. Other 3D face recognition algorithms based on local facial features have also been reported to perform better than the baseline PCA algorithm.21 These results suggest that with appropriate feature selection and classification algorithms, 3D face recognition algorithms that employ local facial features could potentially outperform holistic approaches. Recent studies to understand the mechanisms employed by humans to process faces also point towards the importance of facial parts processing. In the cognitive sciences community, processing of human faces for learning and recognition is largely regarded as a holistic process thought to be based on relational information among parts of the face.31 However, recent studies observing subjects’ eye movements during human face learning and recognition tasks refute this holistic view.32 It was found that eye movements were integral to human face processing. The authors suggested that eye movements may be employed to obtain details about specific facial features as well as to judge distances between them. Lastly, the algorithms that employed geodesic distances and global curvature features between facial fiducial points performed at least as well as those that employed Euclidean distance features. GEO CURV LDA was found to be significantly more robust than EUC LDA at verifying facial images with arbitrary facial expressions. Although a similar trend was observed for their identification performance, the differences were not observed to be statistically significant. The fact that the algorithm that employed geodesic distances was robust to changes in facial expression may be attributable to Bronstein et al.’s explanation of facial expressions as isometric deformations of the facial surface. It should be noted, however, that the performance of both of these algorithms on the data set employed in this study was very nearly perfect (Table 2 and Table 3). Under such conditions, it is sometimes difficult to identify significant differences between algorithms. This is also referred to as the three bears problem.33 Hence, as a follow up to this study, it would be instructive to compare the performance of the two classifiers on a larger database with greater facial expression variations, to validate the trends observed in this study. In conclusion, we presented an algorithm that substantially improves upon the performance of existing 3D face recognition techniques that employ Euclidean distances between facial fiducial points as features. Furthermore, we proposed a promising new 3D face recognition algorithm based on geodesic distances and global curvature


characteristics between facial fiducial points. The method was found to be superior to the existing state-of-the-art 3D face recognition algorithms. It also displayed potential for being robust to changes in facial expression. A number of future directions for extending this study can be identified. Firstly, we would like to study the sensitivity of the proposed technique to the choice of facial fiducial points, as well as its sensitivity to small perturbations in the positions of the fiducial points. This would help to quantify the robustness of the technique to errors in fiducial point localization. Secondly, we would like to investigate techniques to automatically detect facial fiducial points as a logical next step towards building a completely automated 3D face recognition system.

ACKNOWLEDGMENTS

The authors would like to gratefully acknowledge Advanced Digital Imaging Research, LLC (League City, TX) for providing funding and 3D face data for the study.

REFERENCES

1. P. J. Phillips, P. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi, and J. M. Bone, “FRVT 2002: Overview and summary,” available at www.frvt.org, March 2003.
2. E. P. Kukula, S. J. Elliott, R. Waupotitsch, and B. Pesenti, “Effects of illumination changes on the performance of Geometrix FaceVision® 3D FRS,” in Security Technology, 2004. 38th Annual 2004 International Carnahan Conference on, pp. 331–337, 2004.
3. L. Farkas, Anthropometric Facial Proportions in Medicine, Thomas Books, 1987.
4. A. M. Bronstein, M. M. Bronstein, and R. Kimmel, “Three-dimensional face recognition,” International Journal of Computer Vision 64(1), pp. 5–30, 2005.
5. K. I. Chang, K. W. Bowyer, and P. J. Flynn, “An evaluation of multimodal 2D+3D face biometrics,” Pattern Analysis and Machine Intelligence, IEEE Transactions on 27(4), pp. 619–624, 2005.
6. T. Heseltine, N. Pears, and J. Austin, “Three-dimensional face recognition: A fishersurface approach,” in Proc. of the International Conference on Image Analysis and Recognition, A. Campilho and M. Kamel, eds., ICIAR 2004, LNCS 3212, pp. 684–691, Springer-Verlag Berlin Heidelberg, 2004.
7. C. BenAbdelkader and P. A. Griffin, “Comparing and combining depth and texture cues for face recognition,” Image and Vision Computing 23(3), pp. 339–352, 2005.
8. S. Malassiotis and M. G. Strintzis, “Robust face recognition using 2D and 3D data: Pose and illumination compensation,” Pattern Recognition 38(12), pp. 2537–2548, 2005.
9. F. Tsalakanidou, S. Malassiotis, and M. G. Strintzis, “Face localization and authentication using color and depth images,” Image Processing, IEEE Transactions on 14(2), pp. 152–168, 2005.
10. M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience 3(1), pp. 71–86, 1991.
11. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. fisherfaces: recognition using class specific linear projection,” Pattern Analysis and Machine Intelligence, IEEE Transactions on 19(7), pp. 711–720, 1997.
12. F. Samaria and S. Young, “HMM-based architecture for face identification,” Image and Vision Computing 12(8), pp. 537–543, 1994.
13. P. Phillips, P. Flynn, T. Scruggs, K. Bowyer, and W. Worek, “Preliminary face recognition grand challenge results,” in Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on, pp. 15–24, 2006.
14. X. Lu, A. K. Jain, and D. Colbry, “Matching 2.5D face scans to 3D models,” Pattern Analysis and Machine Intelligence, IEEE Transactions on 28(1), pp. 31–43, 2006.
15. G. G. Gordon, “Face recognition based on depth and curvature features,” in Computer Vision and Pattern Recognition, 1992. Proceedings CVPR ’92., 1992 IEEE Computer Society Conference on, pp. 808–810, 1992.
16. A. B. Moreno, A. Sanchez, J. Fco, V. Fco, and J. Diaz, “Face recognition using 3D surface-extracted descriptors,” in Irish Machine Vision and Image Processing Conference (IMVIP 2003), September 2003.
17. P. Besl and R. Jain, “Segmentation through variable-order surface fitting,” Pattern Analysis and Machine Intelligence, IEEE Transactions on 10(2), pp. 167–192, 1988.


18. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, John Wiley and Sons, New York, 2nd ed., 2001.
19. C.-S. Chua, F. Han, and Y.-K. Ho, “3D human face recognition using point signature,” in Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on, pp. 233–238, 2000.
20. Y. Wang, C.-S. Chua, and Y.-K. Ho, “Facial feature detection and face recognition from 2D and 3D images,” Pattern Recognition Letters 23(10), pp. 1191–1202, 2002.
21. M. O. İrfanoğlu, B. Gökberk, and L. Akarun, “3D shape-based face recognition using automatically registered facial surfaces,” in Proceedings of the 17th International Conference on Pattern Recognition, Vol. 4, pp. 183–186, IEEE Computer Society, 2004.
22. P. S. Penev and J. J. Atick, “Local feature analysis: a general statistical theory for object representation,” Network: Computation in Neural Systems 7, pp. 477–500, 1996.
23. L. Wiskott, J.-M. Fellous, N. Krüger, and C. von der Malsburg, “Face recognition by elastic bunch graph matching,” Pattern Analysis and Machine Intelligence, IEEE Transactions on 19(7), pp. 775–779, 1997.
24. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik 1, pp. 269–271, 1959.
25. J. Tenenbaum, V. de Silva, and J. Langford, “A global geometric framework for nonlinear dimensionality reduction,” Science 290(5500), pp. 2319–2323, 2000.
26. S. Sharma, Applied Multivariate Techniques, John Wiley and Sons, Inc., New York, 1996.
27. J. Egan, Signal Detection Theory and ROC Analysis, Academic Press, New York, 1975.
28. B. Efron and G. Gong, “A leisurely look at the bootstrap, the jackknife and cross-validation,” The American Statistician 37, pp. 36–48, 1983.
29. R. M. Bolle, N. K. Ratha, and S. Pankanti, “Evaluating authentication systems using bootstrap confidence intervals,” in Proceedings of the 15th International Conference on Pattern Recognition, 2000.
30. R. Micheals and T. Boult, “Efficient evaluation of classification and recognition systems,” in Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, 1, pp. I-50–I-57, 2001.
31. J. Tanaka and M. Farah, Perception of Faces, Objects and Scenes: Analytical and Holistic Processes, ch. The Holistic Representation of Faces, pp. 53–74, Oxford University Press, New York, 2003.
32. J. M. Henderson, C. C. Williams, and R. J. Falk, “Eye movements are functional during face learning,” Memory and Cognition 33(1), pp. 98–106, 2005.
33. P. Phillips, H. Moon, S. Rizvi, and P. Rauss, “The FERET evaluation methodology for face-recognition algorithms,” Pattern Analysis and Machine Intelligence, IEEE Transactions on 22(10), pp. 1090–1104, 2000.

