IJRIT International Journal of Research in Information Technology, Volume 1, Issue 11, November, 2013, Pg. 282-289

International Journal of Research in Information Technology (IJRIT)


ISSN 2001-5569

Face Detection Methods: A Survey

Shrinivas Kadam1, Ashwini Barbadekar2 and Milind Rane3

1Student, Vishwakarma Institute of Technology, Pune University, Pune, Maharashtra, India
[email protected]
2Professor (Dr.), Vishwakarma Institute of Technology, Pune University, Pune, Maharashtra, India
[email protected]
3Professor, Vishwakarma Institute of Technology, Pune University, Pune, Maharashtra, India
[email protected]

Abstract

Biometric technology uses the biological characteristics of the human body or of behavior as identification or verification features, and has recently gained much attention in the security world. Face recognition is a personal identification system, and face detection is a required preliminary step for face recognition and expression analysis. Face detection also has several applications, such as security access control, model-based video coding, content-based video indexing, advanced human-computer interaction, video conferencing, intelligent robots, notebook and PC cameras, digital cameras, 3G cell phones, and crowd surveillance. However, the human face is a dynamic object, and many challenges are associated with detecting it. Researchers have proposed numerous methods to increase the performance of face detection systems. In this paper we present a survey of face detection methods in image processing; each method has its own advantages and disadvantages.

Keywords: Biometrics, face detection, feature extraction, detection methods, performance evaluation.

1. Introduction

Biometric technology utilizes the biological characteristics of the human body or of behavior as identification or verification features. The most frequently used biometric features are the face, fingerprint, voiceprint, and iris. Fingerprint recognition is the most widely adopted in daily life, but sweat and dust can reduce its accuracy. In a face recognition system, physical contact with the machine is unnecessary and the image can be captured naturally with a video camera, which makes face recognition a very convenient biometric identification approach. Early efforts in face detection date back to the beginning of the 1970s, when simple heuristic and anthropometric techniques [18] were used. Face detection is the first step of face recognition: it automatically detects a face in a complex background so that the recognition algorithm can be applied to it. Given an arbitrary image, the goal of face detection is to determine whether or not there are any faces in the image and, if present, to return the location and extent of each face [17]. Detection itself involves many complexities, such as pose, the presence or absence of structural components, facial expression, occlusion, image orientation, and imaging conditions.

Many novel methods have been proposed to address each of these variations. In template-matching methods [1], [2], a template cohering with human face features is used to perform a pattern-matching operation between the template and an input image; the shape template [1] and the active shape model [2] are common examples. Faces are localized and detected by computing the correlation of an input image with a standard face pattern. Feature invariant approaches rely on invariant features, unresponsive to changes in position, brightness, and viewpoint, to detect human faces; a statistical model is usually built to describe the relations among face features, such as the texture and skin color [3], [4] of the eyes, mouth, ears, and nose, and the presence of the detected faces. Appearance-based methods detect faces with eigenfaces [5], [6], [7], neural networks [8], [9], and information-theoretic approaches [10], [11].

In general, computerized face recognition comprises four steps [13]. First, the face image is enhanced and segmented. Second, the face boundary and facial features are detected. Third, the extracted features are matched against the features in the database. Fourth, classification into one or more persons is achieved. To detect faces and locate facial features correctly, researchers have proposed a variety of methods that can be divided into two categories [12]. The first category is based on geometric, feature-based matching, which uses the overall geometrical configuration of the facial features (e.g., eyes and eyebrows, nose and mouth) as the basis for discrimination. The second category is based on template matching, in which recognition is based on the image itself. In those algorithms, faces must be correctly aligned before recognition, which is usually done by detecting the eyes; proper face and eye detection are therefore vital to face recognition tasks.

Most existing face detection algorithms fit a two-stage framework. In the first stage, regions that may contain a face are marked, i.e., attention is focused on face candidates. In the second stage, these face candidates are sent to a face verifier, which decides whether each candidate is a real face.
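The two-stage framework just described can be sketched in a few lines. Both stages below are deliberately crude placeholders invented for illustration (a mean-intensity cue for stage one, a contrast check for stage two); they stand in for the real candidate generators and verifiers surveyed in the following sections.

```python
# Sketch of the generic two-stage face detection framework: a cheap
# candidate generator followed by an expensive verifier. Both decision
# rules here are hypothetical stand-ins, not a method from the survey.

def generate_candidates(image, window=4, step=2):
    """Stage 1: cheaply mark regions that might contain a face.
    Here we slide a window and keep regions whose mean intensity
    exceeds a threshold, a stand-in for a real cue such as skin color."""
    h, w = len(image), len(image[0])
    candidates = []
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            patch = [image[y + dy][x + dx]
                     for dy in range(window) for dx in range(window)]
            if sum(patch) / len(patch) > 0.5:
                candidates.append((x, y, window, window))
    return candidates

def verify(image, box):
    """Stage 2: decide face / non-face. A real system would run a
    trained classifier on the cropped patch; a contrast check stands in."""
    x, y, w, h = box
    patch = [image[y + dy][x + dx] for dy in range(h) for dx in range(w)]
    return max(patch) - min(patch) > 0.3

def detect_faces(image):
    return [box for box in generate_candidates(image) if verify(image, box)]
```

Keeping the stages separate lets an inexpensive cue discard most of the image before the costly verifier runs, which is the point of the framework.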

2. Face Detection Methods

In this section, we review existing methods for detecting faces in a single intensity or color image. The various methods are grouped into four categories [16], [17]: knowledge-based methods, feature invariant methods, template matching methods, and appearance-based methods.

2.1 Knowledge Based Methods

In this approach, face detection methods are developed from rules derived from the researcher's knowledge of human faces. It is easy to come up with simple rules describing the features of a face and their relationships: for example, a face often appears in an image with two eyes that are symmetric to each other, a nose, and a mouth. However, such rules struggle to find many faces in a complex image. Yang and Huang used a hierarchical knowledge-based method to detect faces [19]. Their system consists of three levels of rules. At the highest level, all possible face candidates are found by scanning a window over the input image and applying a set of rules at each location. The rules at the higher level are general descriptions of what a face looks like, while the rules at the lower levels rely on details of facial features. One problem with this method is the difficulty of translating human knowledge into well-defined rules: if the rules are detailed (i.e., strict), they may fail to detect faces that do not pass all of them; if the rules are too general, they may give many false positives.

2.2 Feature Invariant Methods

Feature invariant methods depend on feature derivation and analysis to gain the required knowledge about faces. Numerous methods have been proposed to first detect facial features and then infer the presence of a face. Facial features such as the eyebrows, eyes, nose, mouth, and hair-line are commonly extracted with edge detectors. Based on the extracted features, a statistical model is built to describe their relationships and to verify the existence of a face.
These methods aim to be invariant to changes in illumination, noise, and occlusion, but in practice feature boundaries can be weakened in face images, while shadows can cause numerous strong edges that together render perceptual grouping algorithms useless. These methods are designed mainly for face localization.
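As a concrete illustration of the edge-detection step used by these methods, the following sketch computes a 3×3 Sobel gradient magnitude map and thresholds it; a real system would go on to group the resulting edges into candidate facial features. The threshold value is an arbitrary assumption.

```python
import numpy as np

# Minimal edge-extraction sketch: facial features (eyes, mouth, hair-line)
# are typically found by running an edge detector such as the 3x3 Sobel
# operator and thresholding the gradient magnitude.

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(image, threshold=1.0):
    """Return a boolean edge map over the interior pixels of `image`."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    mag = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx = np.sum(patch * SOBEL_X)   # horizontal gradient
            gy = np.sum(patch * SOBEL_Y)   # vertical gradient
            mag[y, x] = np.hypot(gx, gy)
    return mag > threshold
```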


2.2.1 Facial Features

Facial feature methods depend on detecting features of the face. Some approaches use edges to detect the features of the face and then group the edges [52]. Graf et al. developed a method to locate facial features and faces in gray-scale images [20]. Leung et al. developed a probabilistic method to locate a face in a cluttered scene based on local feature detectors and random graph matching [3]. Their motivation is to formulate face localization as a search problem in which the goal is to find the arrangement of certain facial features that is most likely to be a face pattern. Five features (two eyes, two nostrils, and the nose/lip junction) are used to describe a typical face. For any pair of facial features of the same type (e.g., a left-eye, right-eye pair), their relative distance is computed, and over an ensemble of images the distances are modeled by a Gaussian distribution. A facial template is defined by averaging the responses to a set of multiorientation, multiscale Gaussian derivative filters (at the pixels inside the facial feature) over a number of faces in a data set.

2.2.2 Texture

The texture features considered in this line of work are a set of six statistical and three multiresolution features that capture the gradient, the directional variations, and the residual energies of a pattern [51]. Human faces have a special texture that can be used to separate them from other objects [21]. Dai and Nakano applied the SGLD model to face detection [27], incorporating color information into the face-texture model. Using this model, they designed a scanning scheme for face detection in color scenes in which the orange-like parts, including the face areas, are enhanced.

2.2.3 Skin Color

Color is a low-level cue for object detection that can be implemented in a computationally fast and effective way for locating objects. It also offers robustness against geometrical changes under a stable and uniform illumination field. Although different people have different skin colors, several studies have shown that the major difference lies largely in intensity rather than chrominance [20], [28], [29]. However, such skin color models are not effective where the spectrum of the light source varies significantly; in other words, color appearance is often unstable due to changes in both background and foreground lighting. Though the color constancy problem has been addressed through the formulation of physics-based models [30], several approaches have been proposed to use skin color under varying lighting conditions. McKenna et al. presented an adaptive color mixture model to track faces under varying illumination [31]. Moreover, color images contain more useful information than gray-level ones and can be processed more efficiently [14].

2.2.4 Multiple Features

Multiple-feature methods use several combined facial features to locate or detect faces: they first find face candidates using features like skin color, size, and shape, and then verify these candidates using detailed features such as the eyebrows, nose, and hair. Yachida et al. presented a method to detect faces in color images using fuzzy theory [32], [33], [34]. They used two fuzzy models to describe the distribution of skin and hair color in the CIE XYZ color space. Range and color have also been employed for face detection by Kim et al. [35]. Disparity maps are computed and objects are segmented from the background with a disparity histogram, using the assumption that background pixels have the same depth and outnumber the pixels in the foreground objects. Using a Gaussian distribution in normalized RGB color space, segmented regions with a skin-like color are classified as faces. A similar approach has been proposed by Darrell et al. for face detection and tracking [36].
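The skin-color cue used by several of the methods above can be sketched as follows: each pixel is mapped to normalized RGB, where chrominance varies less across people than intensity, and scored against a Gaussian skin model. The mean and standard deviation values below are illustrative assumptions, not parameters taken from any of the cited papers.

```python
import math

# Hedged sketch of a Gaussian skin-color model in normalized RGB.
# SKIN_MEAN and SKIN_STD are assumed values for illustration only;
# a real system would fit them (or a mixture, as in McKenna et al. [31])
# to labeled skin pixels.

SKIN_MEAN = (0.45, 0.31)   # assumed mean of (r, g)
SKIN_STD = (0.05, 0.04)    # assumed per-channel standard deviation

def normalized_rg(pixel):
    """Map an (R, G, B) pixel to chrominance (r, g) = (R, G) / (R+G+B)."""
    R, G, B = pixel
    s = R + G + B
    if s == 0:
        return 0.0, 0.0
    return R / s, G / s

def skin_likelihood(pixel):
    """Product of two 1-D Gaussians over (r, g); a full model would use
    a 2-D Gaussian with a covariance matrix."""
    r, g = normalized_rg(pixel)
    lr = math.exp(-0.5 * ((r - SKIN_MEAN[0]) / SKIN_STD[0]) ** 2)
    lg = math.exp(-0.5 * ((g - SKIN_MEAN[1]) / SKIN_STD[1]) ** 2)
    return lr * lg

def is_skin(pixel, threshold=0.5):
    return skin_likelihood(pixel) >= threshold
```

Because the model lives in chrominance space, a bright and a dark image of the same skin tone score similarly, which is exactly the intensity-invariance the section describes.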
2.3 Template Matching Methods

According to the template matching theory of perception, humans recognize an object by comparing it to images of similar objects already stored in memory; by comparing against a variety of stored candidates, the object is identified as the one it most closely resembles. In image processing, a very similar idea is used to detect objects: the correlations between an input image and stored face patterns are computed, and high correlation indicates a detection. These methods have been used for both face localization and detection.
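A minimal version of this correlation-based matching might look as follows, scoring every window of the input against a stored template with the normalized cross-correlation coefficient and returning the best-scoring location.

```python
import numpy as np

# Sketch of correlation-based template matching: slide a stored face
# template over the image and score each location with the normalized
# cross-correlation coefficient; peaks indicate face candidates.

def ncc(patch, template):
    """Normalized cross-correlation in [-1, 1]; 1 means a perfect match."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def match_template(image, template):
    """Return (best_score, (x, y)) over all template-sized windows."""
    ih, iw = image.shape
    th, tw = template.shape
    best = (-2.0, (0, 0))
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = ncc(image[y:y + th, x:x + tw], template)
            best = max(best, (score, (x, y)))
    return best
```

Mean subtraction and normalization make the score insensitive to uniform brightness and contrast changes, which is why correlation is preferred over a raw sum of squared differences here.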



2.3.1 Predefined Template

An early attempt to detect frontal faces in photographs was reported by Sakai et al. [37]. They used several subtemplates for the eyes, nose, mouth, and face contour to model a face, each subtemplate being defined in terms of line segments. The correlations between subimages and contour templates are computed first to detect candidate face locations; matching against the other subtemplates is then performed at the candidate positions. In other words, the first phase determines the focus of attention or region of interest, and the second phase examines the details to determine the existence of a face. This idea of focus of attention and subtemplates has been adopted by later work on face detection. Craw et al. presented a localization method based on a shape template of a frontal-view face (i.e., the outline shape of a face) [25], and later described a localization method using a set of 40 templates to search for facial features, together with a control strategy to guide and assess the results from the template-based feature detectors [1]. A hierarchical template matching method for face detection was proposed by Miao et al. [38]. Lanitis et al. described a face representation method with both shape and intensity information [2]. They start with sets of training images in which sampled contours, such as the eye boundary, nose, and chin/cheek, are manually labeled, and a vector of sample points is used to represent the shape. They used a point distribution model (PDM) to characterize the shape vectors over an ensemble of individuals, and an approach similar to Kirby and Sirovich [6] to represent the shape-normalized intensity appearance. A face-shape PDM can be used to locate faces in new images by using active shape model (ASM) search to estimate the face location and shape parameters; the face patch is then deformed to the average shape, intensity parameters are extracted, and the shape and intensity parameters are used together for classification.
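The PDM step used by Lanitis et al. can be sketched as PCA over flattened landmark coordinates: the model keeps the mean shape plus the dominant modes of variation, and new shapes are generated from a small vector of mode weights. A real model would be trained on many manually labeled contours rather than the tiny vectors implied here.

```python
import numpy as np

# Illustrative point distribution model (PDM): PCA over stacked landmark
# shape vectors yields a mean shape and the main modes of shape variation.
# This is a generic PCA sketch, not the exact procedure of Lanitis et al. [2].

def build_pdm(shapes, n_modes=1):
    """shapes: (n_samples, 2 * n_landmarks) array of flattened (x, y) points."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_modes]   # keep the largest modes
    return mean, vecs[:, order]

def reconstruct(mean, modes, b):
    """Generate a plausible shape from mode weights b (ASM-style)."""
    return mean + modes @ np.asarray(b, dtype=float)
```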
2.4 Appearance Based Methods

In general, appearance-based methods rely on techniques from statistical analysis and machine learning to find the relevant characteristics of face and nonface images. Due to the complexity of the mathematical models involved, these methods usually use dimension reduction to improve computational performance [15]. Another approach within appearance-based methods is to find a discriminant function between the face and nonface classes. Conventionally, image patterns are projected into a lower-dimensional space and a discriminant function (usually based on distance metrics) is formed for classification [5], or a nonlinear decision surface is formed using multilayer neural networks [23]. More recently, support vector machines and other kernel methods have been proposed; these implicitly project patterns into a higher-dimensional space and then form a decision surface between the projected face and nonface patterns [39].

2.4.1 Eigenfaces

Turk and Pentland applied principal component analysis to face recognition and detection [5]. Similar to [6], principal component analysis is performed on a training set of face images to generate the eigenpictures (here called eigenfaces), which span a subspace (called the face space) of the image space. Images of faces are projected onto this subspace and clustered; nonface training images are projected onto the same subspace and clustered as well. Images of faces do not change radically when projected onto the face space, while the projections of nonface images appear quite different. Therefore, to detect the presence of a face in a scene, the distance between an image region and the face space is computed for all locations in the image.

2.4.2 Distribution Based Methods

Sung and Poggio developed a distribution-based system for face detection [40], [22], which demonstrated how the distributions of image patterns from one object class can be learned from positive and negative examples (i.e., images) of that class.
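The eigenface computation in Section 2.4.1 reduces to a PCA plus a reconstruction-error test, roughly as follows. Real systems operate on flattened face images of thousands of pixels; the dimensions here are kept tiny for illustration.

```python
import numpy as np

# Minimal eigenface sketch: PCA on flattened face vectors gives a
# "face space"; a new patch is scored by its reconstruction error
# (distance from face space), which is low for face-like patterns.

def train_eigenfaces(faces, n_components):
    X = np.asarray(faces, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def distance_from_face_space(patch, mean, eigenfaces):
    v = np.asarray(patch, dtype=float) - mean
    proj = eigenfaces.T @ (eigenfaces @ v)   # projection onto face space
    return float(np.linalg.norm(v - proj))   # residual = reconstruction error
```

Scanning this distance over every image location, as Turk and Pentland describe, yields a "face map" whose minima mark face candidates.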
Their system consists of two components: distribution-based models for face/nonface patterns and a multilayer perceptron classifier. Each face and nonface example is first normalized and processed into a 19×19-pixel image and treated as a 361-dimensional vector or pattern.

2.4.3 Neural Networks

Neural networks have been applied successfully to many pattern recognition problems, such as optical character recognition, object recognition, and autonomous robot driving. Since face detection can be treated as a two-class pattern recognition problem, various neural network architectures have been proposed. The advantage of using neural networks for face detection is the feasibility of training a system to capture the complex class-conditional density of face patterns. An early method using hierarchical neural networks was proposed by Agui et al. [8]. The first stage consists of two parallel subnetworks whose inputs are intensity values from the original image and intensity values from an image filtered with a 3×3 Sobel filter. The inputs to the second-stage network consist of the outputs of the subnetworks and extracted feature values, such as the standard deviation of the pixel values in the input pattern, the ratio of the number of white pixels to the total number of binarized pixels in a window, and geometric moments. An output value at the second stage indicates the presence of a face in the input region. Experimental results show that this method can detect faces provided all faces in the test images have the same size. Simple arbitration schemes, such as logic operators (AND/OR) and voting, are used to improve performance. Rowley et al. [23] reported several systems with different arbitration schemes that are less computationally expensive than Sung and Poggio's system and have higher detection rates on a test set of 24 images containing 144 faces. One limitation of the methods of Rowley [23] and Sung [40] is that they can only detect upright, frontal faces. Rowley et al. [24] later extended the method to detect rotated faces using a router network that processes each input window to determine the probable face orientation and then rotates the window to a canonical orientation.

2.4.4 Support Vector Machine

The support vector machine (SVM), introduced by Vapnik [26], is based on statistical learning theory. SVMs have several advantages: the traditional minimization of empirical risk is replaced by minimization of structural risk, which gives good learning and generalization ability and helps to overcome overfitting. SVMs are efficient for small-sample problems and nonlinear classification, and are widely used for face verification and face recognition.
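As a rough sketch of the SVM idea (not Osuna et al.'s actual training procedure, which handles large-scale problems and kernels), a linear maximum-margin classifier can be trained on face (+1) and nonface (-1) feature vectors by sub-gradient descent on the regularized hinge loss:

```python
import numpy as np

# Hedged linear-SVM sketch: minimize the regularized hinge loss with
# per-example sub-gradient steps. The hyperparameters are arbitrary
# assumptions; real face detectors use far larger data and kernels.

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # example inside the margin
                w += lr * (yi * xi - lam * w)  # hinge + regularizer step
                b += lr * yi
            else:
                w -= lr * lam * w              # regularizer step only
    return w, b

def predict(w, b, x):
    return 1 if np.asarray(x, dtype=float) @ w + b >= 0 else -1
```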
Support vector machines were first applied to face detection by Osuna et al. [39]. SVMs can be considered a new paradigm for training polynomial, neural network, or radial basis function (RBF) classifiers. In [39], Osuna et al. developed an efficient method to train an SVM for large-scale problems and applied it to face detection.

2.4.5 Sparse Network of Winnows

Yang et al. proposed a method that uses the SNoW learning architecture [41], [42] to detect faces with different features and expressions, in different poses, and under different lighting conditions [43]. They also studied the effect of learning with primitive as well as multiscale features. SNoW (Sparse Network of Winnows) is a sparse network of linear functions that utilizes the Winnow update rule [44]. It is specifically tailored for learning in domains in which the potential number of features taking part in decisions is very large but may be unknown a priori.
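The Winnow update rule [44] at the heart of SNoW is simple enough to state directly: a linear unit over a large, sparse set of boolean features, with multiplicative promotion and demotion on mistakes, so the weights of irrelevant features decay quickly. The threshold choice below is one common convention, not necessarily the one used in [43].

```python
# Sketch of the Winnow update rule underlying SNoW. Only the indices of
# active (on) features are passed around, reflecting the sparse-feature
# setting the rule was designed for.

class Winnow:
    def __init__(self, n_features, alpha=2.0):
        self.w = [1.0] * n_features    # all weights start at 1
        self.alpha = alpha             # promotion/demotion factor
        self.theta = n_features / 2    # a common threshold choice (assumed)

    def predict(self, active):
        """active: indices of the features that are on in this example."""
        return sum(self.w[i] for i in active) >= self.theta

    def update(self, active, label):
        """On a mistake, multiplicatively promote (false negative) or
        demote (false positive) the weights of the active features."""
        if self.predict(active) == label:
            return
        factor = self.alpha if label else 1.0 / self.alpha
        for i in active:
            self.w[i] *= factor
```

Because updates are multiplicative, Winnow's mistake bound grows only logarithmically with the number of features, which is why it suits the very high-dimensional feature spaces SNoW targets.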

3. Performance Evaluation

In order to obtain a fair empirical evaluation of face detection methods, it is important to use a standard and representative test set for experiments. Although many face detection methods have been developed over the past decade, only a few of them have been tested on the same data set. Table 1 summarizes the reported performance of several appearance-based face detection methods on standard test sets. Although Table 1 shows the performance of these methods on the same test sets, such an evaluation may not characterize how well the methods will compare in the field; several factors complicate the assessment of these appearance-based methods.

Table 1: Experimental Results on Images
(cells lost in extraction are marked "-")

Method / Implementation / Dataset       | Faces (Images) | Detection Rate | False Detections
Distribution based [22]                 | 136 (23)       | 81.9%          | -
Inductive learning [46]                 | 483 (125)      | -              | -
SNoW with primitive features [43]       | 483 (125)      | -              | -
SNoW with multiscale features [43]      | 483 (125)      | -              | -
Neural network [23]                     | 483 (125)      | -              | -
Support vector machine [39]             | 136 (23)       | -              | -
Rob McCready / Proprietary [47]         | -              | -              | -
Aarabi et al. / Yale [48]               | -              | -              | -
Fang-Qiu / MIT+CMU [49]                 | 507 (130)      | 88.3%          | 1.26×10^-5
Traditional AdaBoosting / FERET [50]    | NA             | 98.709%        | 6.53×10^-8
Proposed method / FERET [50]            | NA             | 98.064%        | 2.94×10^-7
Neural network [23]                     | 136 (23)       | 90.3%          | 42
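Detection rates and false detection counts of the kind reported in Table 1 are typically computed by matching detector outputs against ground-truth boxes. The sketch below uses an intersection-over-union criterion with a 0.5 threshold, a common convention rather than one mandated by the surveyed papers.

```python
# Sketch of detection-rate / false-detection scoring: each detector output
# is matched to an unused ground-truth box by intersection-over-union (IoU);
# matched truths count toward the detection rate, unmatched outputs count
# as false detections.

def iou(a, b):
    """IoU of two boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def evaluate(detections, truths, thresh=0.5):
    matched = set()
    false_detections = 0
    for det in detections:
        hit = next((i for i, t in enumerate(truths)
                    if i not in matched and iou(det, t) >= thresh), None)
        if hit is None:
            false_detections += 1
        else:
            matched.add(hit)
    detection_rate = len(matched) / len(truths) if truths else 0.0
    return detection_rate, false_detections
```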

4. Conclusions

In this paper we have presented an extensive survey of face detection methods. Face detection is the first step of face recognition, as it automatically detects a face in a complex background so that the recognition algorithm can be applied to it. Face detection involves many complexities, such as pose, the presence or absence of structural components, facial expression, occlusion, image orientation, and imaging conditions, and many novel methods have been proposed to resolve these challenges. During the survey, we found some directions in which face detection methods can be further improved toward greater efficiency and accuracy; for example, a two-stage hybrid face detection scheme can significantly reduce training time and detection time and improve the feasibility of adapting features to a specific environment.

5. Acknowledgments

I would like to express my sincere thanks to Prof. (Dr.) Ashwini Barbadekar and Prof. Milind Rane, Department of Electronics and Telecommunication, Vishwakarma Institute of Technology, for their valuable suggestions.

6. References

[1] I. Craw, D. Tock, and A. Bennett, "Finding Face Features," Proceedings of the Second European Conference on Computer Vision, pp. 92-96, 1992.
[2] A. Lanitis, C. J. Taylor, and T. F. Cootes, "An Automatic Face Identification System Using Flexible Appearance Models," Image and Vision Computing, vol. 13, no. 5, pp. 393-401, 1995.
[3] T. K. Leung, M. C. Burl, and P. Perona, "Finding Faces in Cluttered Scenes Using Random Labeled Graph Matching," Proceedings of the Fifth IEEE International Conference on Computer Vision, pp. 637-644, 1995.
[4] B. Moghaddam and A. Pentland, "Probabilistic Visual Learning for Object Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 696-710, July 1997.
[5] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[6] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, Jan. 1990.
[7] I. T. Jolliffe, Principal Component Analysis, New York: Springer-Verlag, 1986.
[8] T. Agui, Y. Kokubo, H. Nagashi, and T. Nagao, "Extraction of Face Recognition from Monochromatic Photographs Using Neural Networks," Proceedings of the Second International Conference on Automation, Robotics, and Computer Vision, vol. 1, pp. 18.8.1-18.8.5, 1992.
[9] O. Bernier, M. Collobert, R. Feraud, V. Lemaried, J. E. Viallet, and D. Collobert, "MULTRAK: A System for Automatic Multiperson Localization and Tracking in Real Time," Proceedings of the IEEE International Conference on Image Processing, pp. 136-140, 1998.



[10] A. J. Colmenarez and T. S. Huang, "Face Detection with Information-Based Maximum Discrimination," Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, pp. 782-787, 1997.
[11] M. S. Lew, "Information Theoretic View-Based and Modular Face Detection," Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, pp. 198-203, 1996.
[12] R. Brunelli and T. Poggio, "Face Recognition: Features Versus Templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, 1993.
[13] K. Sobottka and I. Pitas, "A Novel Method for Automatic Face Segmentation, Facial Feature Extraction and Tracking," Signal Processing: Image Communication, vol. 12, no. 3, pp. 263-281, 1998.
[14] R. Kjeldsen and J. Kender, "Finding Skin in Color Images," Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, pp. 312-317, 1996.
[15] S. C. Yan, J. Z. Liu, X. Tang, and T. S. Huang, "Formulating Face Verification with Semidefinite Programming," IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2802-2810, Nov. 2007.
[16] C. Zhang and Z. Zhang, "A Survey of Recent Advances in Face Detection," Microsoft Corporation, June 2010.
[17] M.-H. Yang, D. J. Kriegman, and N. Ahuja, "Detecting Faces in Images: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34-58, 2002.
[18] T. Sakai, M. Nagao, and T. Kanade, "Computer Analysis and Classification of Photographs of Human Faces," Proceedings of the First USA-Japan Computer Conference, 1972.
[19] G. Yang and T. S. Huang, "Human Face Detection in Complex Background," Pattern Recognition, vol. 27, no. 1, pp. 53-63, 1994.
[20] H. P. Graf, T. Chen, E. Petajan, and E. Cosatto, "Locating Faces and Facial Parts," Proceedings of the First International Workshop on Automatic Face and Gesture Recognition, pp. 41-46, 1995.
[21] M. F. Augusteijn and T. L. Skujca, "Identification of Human Faces through Texture-Based Feature Recognition and Neural Network Technology," Proceedings of the IEEE Conference on Neural Networks, pp. 392-398, 1993.
[22] K.-K. Sung and T. Poggio, "Example-Based Learning for View-Based Human Face Detection," Technical Report AIM-1521, MIT AI Lab, 1994.
[23] H. Rowley, S. Baluja, and T. Kanade, "Neural Network-Based Face Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, 1998.
[24] H. Rowley, S. Baluja, and T. Kanade, "Rotation Invariant Neural Network-Based Face Detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1998.
[25] I. Craw, H. Ellis, and J. Lishman, "Automatic Extraction of Face Features," Pattern Recognition Letters, vol. 5, pp. 183-187, 1987.
[26] V. Vapnik, The Nature of Statistical Learning Theory, 2nd ed., New York: Springer-Verlag, 1998.
[27] Y. Dai and Y. Nakano, "Face-Texture Model Based on SGLD and Its Application in Face Detection in a Color Scene," Pattern Recognition, vol. 29, no. 6, pp. 1007-1017, 1996.
[28] H. P. Graf, E. Cosatto, D. Gibson, E. Petajan, and M. Kocheisen, "Multi-Modal System for Locating Heads and Faces," Proceedings of the Second IEEE International Conference on Automatic Face and Gesture Recognition, Vermont, Oct. 1996, pp. 277-282.
[29] J. Yang and A. Waibel, "A Real-Time Face Tracker," Proceedings of the Third IEEE Workshop on Applications of Computer Vision, Florida, 1996.
[30] D. Forsyth, "A Novel Approach to Color Constancy," International Journal of Computer Vision, vol. 5, no. 1, pp. 5-36, 1990.
[31] S. McKenna, Y. Raja, and S. Gong, "Tracking Color Objects Using Adaptive Mixture Models," Image and Vision Computing, vol. 17, nos. 3/4, pp. 223-229, 1998.
[32] Q. Chen, H. Wu, and M. Yachida, "Face Detection by Fuzzy Matching," Proceedings of the Fifth IEEE International Conference on Computer Vision, pp. 591-596, 1995.
[33] H. Wu, T. Yokoyama, D. Pramadihanto, and M. Yachida, "Face and Facial Feature Extraction from Color Image," Proceedings of the Second IEEE International Conference on Automatic Face and Gesture Recognition, Vermont, Oct. 1996, pp. 345-349.


[34] G. Holst, "Face Detection by Facets: Combined Bottom-Up and Top-Down Search Using Compound Templates," Proceedings of the 2000 International Conference on Image Processing, 2000, p. TA07.08.
[35] S. H. Kim, N. K. Kim, S. C. Ahn, and H.-G. Kim, "Object Oriented Face Detection Using Range and Color Information," Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 76-81, 1998.
[36] T. Darrell, G. Gordon, M. Harville, and J. Woodfill, "Integrated Person Tracking Using Stereo, Color, and Pattern Detection," International Journal of Computer Vision, vol. 37, no. 2, pp. 175-185, 2000.
[37] T. Sakai, M. Nagao, and S. Fujibayashi, "Line Extraction and Pattern Detection in a Photograph," Pattern Recognition, vol. 1, pp. 233-248, 1969.
[38] J. Miao, B. Yin, K. Wang, L. Shen, and X. Chen, "A Hierarchical Multiscale and Multiangle System for Human Face Detection in a Complex Background Using Gravity-Center Template," Pattern Recognition, vol. 32, no. 7, pp. 1237-1248, 1999.
[39] E. Osuna, R. Freund, and F. Girosi, "Training Support Vector Machines: An Application to Face Detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1997.
[40] K.-K. Sung, "Learning and Example Selection for Object and Pattern Detection," PhD thesis, Massachusetts Institute of Technology, 1996.
[41] D. Roth, "Learning to Resolve Natural Language Ambiguities: A Unified Approach," Proceedings of the Fifteenth National Conference on Artificial Intelligence, pp. 806-813, 1998.
[42] A. Carleson, C. Cumby, J. Rosen, and D. Roth, "The SNoW Learning Architecture," Technical Report UIUCDCS-R-99-2101, University of Illinois at Urbana-Champaign, Computer Science Department, 1999.
[43] M.-H. Yang, D. Roth, and N. Ahuja, "A SNoW-Based Face Detector," Advances in Neural Information Processing Systems 12, S. A. Solla, T. K. Leen, and K.-R. Muller, eds., pp. 855-861, MIT Press, 2000.
[44] N. Littlestone, "Learning Quickly when Irrelevant Attributes Abound: A New Linear-Threshold Algorithm," Machine Learning, vol. 2, pp. 285-318, 1988.
[45] S. McKenna, S. Gong, and Y. Raja, "Modelling Facial Color and Identity with Gaussian Mixtures," Pattern Recognition, vol. 31, no. 12, pp. 1883-1892, 1998.
[46] N. Duta and A. K. Jain, "Learning the Human Face Concept from Black and White Pictures," Proceedings of the International Conference on Pattern Recognition, pp. 1365-1367, 1998.
[47] D. Nguyen, D. Halupka, P. Aarabi, and A. Sheikholeslami, "Real-Time Face Detection and Lip Feature Extraction Using Field-Programmable Gate Arrays," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 36, no. 4, pp. 902-912, Aug. 2006.
[48] R. McCready, "Real-Time Face Detection on a Configurable Hardware System," Proceedings of the Tenth International Workshop on Field-Programmable Logic and Applications, London, U.K., 2000, pp. 157-162.
[49] J. Z. Fang and G. P. Qiu, "Learning Sample Subspace with Application to Face Detection," Proceedings of the International Conference on Pattern Recognition, 2004, pp. 423-426.
[50] J.-M. Guo, C.-C. Lin, M.-F. Wu, C.-H. Chang, and H. Lee, "Complexity Reduced Face Detection Using Probability-Based Face Mask Prefiltering and Pixel-Based Hierarchical-Feature Adaboosting," IEEE Signal Processing Letters, vol. 18, no. 8, August 2011.
[51] V. Manian and R. Vasquez, "Approaches to Color and Texture Based Image Classification," SPIE Journal of Optical Engineering, vol. 41, pp. 1480-1490, 2002.
[52] M. Hu, S. Worrall, A. H. Sadka, and A. M. Kondoz, "Face Feature Detection and Model Design for 2-D Scalable Model-Based Video Coding," International Conference on Visual Information Engineering (VIE), Guildford, London, UK, July 2003.


