
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 42, NO. 1, FEBRUARY 2012

Iris Recognition Using Possibilistic Fuzzy Matching on Local Features

Chung-Chih Tsai, Heng-Yi Lin, Jinshiuh Taur, and Chin-Wang Tao

Abstract—In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which can provide a robust and effective matching scheme for two sets of iris feature points. In addition, the nonlinear normalization model is adopted to provide more accurate position before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries to smooth curves. For feature extraction, the Gabor filters are adopted to detect the local feature points from the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. After that, the proposed matching algorithm is used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.

Index Terms—Gabor filter, iris recognition, possibilistic fuzzy matching (PFM).

I. INTRODUCTION

BIOMETRICS has been a popular research topic due to the growing need for human identification applications in recent years. Recognition systems based on biometric technologies have higher reliability and security than traditional systems. Popular biometric approaches based on physiological characteristics such as the face, fingerprint, palmprint, iris, retina, and voice have shown the advantages of reliability, convenience, and noninvasiveness. Among these approaches, the iris has some advantages over the others and has received a lot of attention in the last two decades. The human iris, an annular region located around the pupil and covered by the cornea, can provide independent and unique information about a person. Furthermore, the iris is highly stable with age, and it is difficult to fake the iris under the protection of the cornea. Iris recognition has become an active research area since the concept of an iris recognition system was first proposed by Flom and Safir [1] in 1987. In the following years, especially the last decade, a great deal of research on iris recognition with

Manuscript received January 19, 2010; revised April 29, 2011; accepted July 5, 2011. Date of current version December 7, 2011. This work was supported by the R.O.C. National Science Council through Grant NSC 98-2221-E-005-074-MY2. This paper was recommended by Associate Editor E. Santos Jr. C.-C. Tsai, H.-Y. Lin, and J. Taur are with the Department of Electrical Engineering, National Chung Hsing University, Taichung 402, Taiwan (e-mail: [email protected]). C.-W. Tao is with the Department of Electrical Engineering, National I-Lan University, I-Lan 260, Taiwan. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSMCB.2011.2163817

outstanding performance had been published. Daugman [2]–[4] proposed a typical and successful iris recognition system that used the first-order derivatives of the image intensity to locate circular edges of the iris and utilized multiscale quadrature 2-D Gabor wavelets to capture the information of the local phase. In Daugman’s system, the feature vectors containing 2048-b iris codes were generated by quantizing the local phase angle of the real and imaginary parts of the filtered images. A Hamming distance measure was then adopted to evaluate the difference between two feature vectors. The system proposed by Wildes et al. [5]–[8] used a four-level Laplacian pyramid to extract features of the iris. With the extracted features, iris images could be classified by using the normalized correlation and the Fisher classifier. In Boles and Boashash’s system [9], iris images were analyzed by a 1-D dyadic wavelet transform at different resolution levels. With a zero-crossing representation, the feature vector of the iris image was extracted from the wavelet results. The comparison of two feature vectors was carried out by evaluating the dissimilarity with two alternative measurements. Ma et al. [10] adopted a bank of kernel-based spatial filters to capture local details of the iris pattern. In their other works, a local intensity variation analysis method based on the Gaussian–Hermite moments [11] and a characterizing key local variation method using the 1-D wavelet transform [12] were developed for feature extraction. In the system developed by Sun et al. [13], a Gaussian filter was adopted to estimate the local direction of the iris image, and then, the angle scale of each local direction was quantized into six discrete values. Yu et al. [14] extracted some key points of the iris image by using the multichannel Gabor filters. After that, the related distances among these key points were used to represent the iris feature.
Chu and Chen [15] extracted two feature vectors for each iris image that were individually derived with the linear prediction cepstral coefficient and the linear discriminant analysis. The proposed iris recognition system was constructed from a probabilistic neural network optimized with particle swarm optimization. Sanchez-Avila and Sanchez-Reillo [16] used multiple zero-crossing-based iris signatures for recognition. Monro et al. [17] used 1-D DCT coefficients to provide a low-complexity feature extraction. Miyazawa et al. [18] performed phase-based image matching using 2-D discrete Fourier transforms. A thorough review of iris recognition systems was provided in [19]. Birgle and Kokare [20] reduced the time complexity by avoiding the time-consuming normalization procedure. Several iris segmentation methods were proposed in [21]–[25]. In our previous work, we adopted a bank of Gabor filters to extract the iris features and proposed an angular variation analysis method to generate a compact iris code [26].

1083-4419/$26.00 © 2011 IEEE



TABLE I
COMPARISONS OF DIFFERENT IRIS RECOGNITION SYSTEMS, WHERE “∗” DENOTES APPLICATIONS WITHOUT UNWRAPPING

We further developed the sequential particle swarm optimized Gabor filters [27] to replace the traditional Gabor filter bank in an iris recognition system. The existing iris recognition systems can achieve good performance. However, most of these systems adopt a similar framework to extract the iris features. More precisely, in the iris image preprocessing step, the annular iris pattern is transformed into a polar coordinate system or is unwrapped into a rectangular block. Then, feature extraction attempts to extract the iris information from the normalized iris image to generate a feature vector. For commercial iris recognition systems, it is sometimes essential to have a good alternative to avoid patent violation problems. In contrast, Zhu et al. [28] adopted the scale invariant feature transform (SIFT) to extract the local feature points in both the Cartesian and polar coordinate systems. Since it is very likely that many local patterns of the iris are similar, the recognition accuracy of the system based on SIFT is not as good as that of the traditional methods. Belcher and Du [29] proposed a region-based SIFT method that attempts to detect a feature point in each partitioned region of the rectangular coordinate system to improve the performance of Zhu’s system. Both systems use the Euclidean distance to measure the dissimilarity between a pair of feature points from different images. Table I compares the existing iris recognition systems related to this paper. According to our observation, the position information can also play an important role in the matching of local feature points. The main contribution of this paper is an effective matching algorithm with invariant properties for two sets of local feature points. In addition, the proposed scheme provides an alternative feature extraction method that avoids the unwrapping preprocessing by extracting features from

the iris image directly. A bank of Gabor filters is used to detect the local feature points and to generate a feature vector for each point. Then, the proposed matching algorithm, which is based on the possibilistic fuzzy matching (PFM) method, compares a pair of feature points by considering not only their local features but also their relative positions to all the other points. In addition, a novel curve detection method is proposed to extract the inner and outer boundaries of the iris from a gray-level image.

The remainder of this paper is organized as follows. Section II describes the proposed iris segmentation approach. The local feature extraction method is illustrated in Section III. Section IV presents the matching algorithm for a pair of feature point sets. Experimental results and comparisons are provided in Section V. Section VI concludes this paper.

II. IRIS SEGMENTATION

An eye image contains not only the iris texture but also some irrelevant parts. The pupillary and limbic boundaries should be detected to isolate the annular iris region. Several iris segmentation methods for nonideal iris images [25] have been proposed. Vatsa et al. [21] detected an approximate elliptical boundary and then used the modified Mumford–Shah functional [22] to obtain the accurate iris boundary. Shah and Ross [23] proposed a geodesic active contours method based on curve evolution without an elliptical hypothesis for the iris outer boundary. Roy et al. [24] used geometric active contours for the inner boundary and a regularized Mumford–Shah segmentation model for the outer boundary. In this section, we describe the fuzzy curve-tracing (FCT) algorithm [30], [31], which can be used to detect a curve in a binary image. Then, we extend the FCT algorithm to extract a smooth curve from a gray-level

152

IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 42, NO. 1, FEBRUARY 2012

image. The new FCT method can be applied to detect the inner and outer boundaries of an iris to segment the iris region.
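As a preview of the FGCT iteration derived in this section — intensity weights (4), FCM-style membership updates (5), and the smoothness-regularized closed-curve center update (6) — the following is a minimal NumPy sketch. The function name `fgct_step`, the array conventions, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fgct_step(xg, g, v, m=2.0, alpha=1.0):
    """One FGCT iteration: membership update (5) and closed-curve
    center update (6).  xg: (n, 2) pixel positions, g: (n,) gradient
    magnitudes, v: (c, 2) cluster centers ordered along the curve."""
    beta = (g - g.min()) / (g.max() - g.min() + 1e-12)         # intensity weights, eq. (4)
    d = np.linalg.norm(xg[:, None, :] - v[None, :, :], axis=2) + 1e-12
    w = d ** (-2.0 / (m - 1.0))
    mu = w / w.sum(axis=1, keepdims=True)                      # memberships, eq. (5)
    bm = beta[:, None] * mu ** m                               # beta_i * mu_ik^m, shape (n, c)
    # circular neighbours for the closed-curve smoothness term
    vm2, vm1 = np.roll(v, 2, axis=0), np.roll(v, 1, axis=0)
    vp1, vp2 = np.roll(v, -1, axis=0), np.roll(v, -2, axis=0)
    num = alpha * (-vm2 + 4*vm1 + 4*vp1 - vp2) + bm.T @ xg     # eq. (6) numerator
    den = 6.0 * alpha + bm.sum(axis=0)[:, None]                # eq. (6) denominator
    return num / den
```

Starting from centers sampled on the circle found by the traditional Hough-based localization, a few dozen iterations move the centers onto the high-gradient ring while the smoothness term keeps the curve regular.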

A. FCT Algorithm

The FCT algorithm based on the fuzzy c-means (FCM) with a smoothness constraint function was proposed to extract a smooth curve from a binary image in [30]. Assume that the set of input data is X = {x_1, x_2, . . . , x_n}, which denotes the set of positions of the foreground pixels in the binary image. Let V = {v_1, v_2, . . . , v_c} be the set of cluster centers used to represent a smooth curve for the foreground pixels X. The cost function of the original FCT algorithm for a closed curve is defined as follows:

J_FCT(U, V; X) = Σ_{i=1}^{n} Σ_{k=1}^{c} μ_ik^m ‖x_i − v_k‖² + α Σ_{k=1}^{c} ‖v_{k+1} − 2v_k + v_{k−1}‖²   (1)

where α and m are the Lagrange multiplier and the fuzzifier, respectively, and U = {μ_ik} is the set of membership values representing the degree of x_i belonging to the cluster with center v_k. Similar to the FCM algorithm, by setting the partial derivatives of the Lagrangian function J_FCT with respect to μ_ik and v_k to zero, both the membership values and the cluster centers can be iteratively computed. Then, a smooth curve V which represents the set X can be obtained.

B. FGCT Method

The original FCT method can only produce a smooth curve represented by the cluster centers to fit the foreground pixels in a binary image. In order to apply the FCT algorithm, the gray-level (gradient) image has to be converted to a binary image. However, for an iris recognition system, it is difficult to automatically select a threshold for both the inner and outer boundaries of the iris. Therefore, in our approach, the FCT method is modified to extract a smooth curve directly from the gray scale gradient image. Assume that G = {g_1, g_2, . . . , g_n} is the set of gray scale values in an image and X_G = {x_g1, x_g2, . . . , x_gn} is the set of corresponding positions of the elements in G. The cost function of the fuzzy gray scale curve-tracing (FGCT) algorithm is designed as follows:

J_FGCT(U, V; X_G) = Σ_{i=1}^{n} Σ_{k=1}^{c} β_i μ_ik^m ‖x_gi − v_k‖² + α Σ_{k=1}^{c} ‖v_{k+1} − 2v_k + v_{k−1}‖²   (2)

where α and m are the Lagrange multiplier and the fuzzifier, respectively, and μ_ik is the membership value with the following constraints:

μ_ik ∈ [0, 1]  for 1 ≤ i ≤ n and 1 ≤ k ≤ c,   Σ_{k=1}^{c} μ_ik = 1  for 1 ≤ i ≤ n.   (3)

The parameters β_i are the intensity weights of the ith foreground pixel and are defined as

β_i = (g_i − min(G)) / (max(G) − min(G)),   for i = 1, 2, . . . , n.   (4)

An input sample with a larger pixel value is more important than those with lower pixel values. In other words, the cluster centers will attempt to move toward image pixels with larger intensity. (Note that the FGCT will be applied to the gradient image of the iris image; therefore, the cluster centers will try to move toward image pixels with larger gradient values.) Similar to the FCM algorithm, μ_ik can be estimated by using the following equation at each iteration step:

μ_ik = ‖x_gi − v_k‖^{−2/(m−1)} / Σ_{j=1}^{c} ‖x_gi − v_j‖^{−2/(m−1)}.   (5)

To solve ∂J_FGCT/∂v_k = 0, we can obtain the update equation of each cluster center for a closed curve

v_k = [ α(−v_{k−2} + 4v_{k−1} + 4v_{k+1} − v_{k+2}) + Σ_{i=1}^{n} β_i μ_ik^m x_gi ] / [ 6α + Σ_{i=1}^{n} β_i μ_ik^m ].   (6)

Note that the cluster indices should be circularly arranged for a closed curve. In other words, v_{−1}, v_0, v_{c+1}, and v_{c+2} should be treated as v_{c−1}, v_c, v_1, and v_2, respectively.

C. Iris Segmentation

The proposed iris segmentation consists of two stages. First, the gradient image around the iris boundaries in the radial direction from the pupil to the sclera is generated. Generally, the pupil is darker than the iris, and the iris is darker than the sclera. Therefore, the pixel values around the iris boundaries in the gradient image are positive. Accordingly, if any pixel value in the gradient image is smaller than zero, it is set to zero. The FGCT method is then applied to extract a smooth curve of each boundary from the gradient images. The proposed iris segmentation method is shown in Fig. 1. In order to produce the gradient images around the iris boundaries, the traditional iris localization can be used to estimate the parameters of each iris boundary [26]. In the traditional iris localization method, the Canny operator and the Hough transform are used to estimate the circular boundaries. Then, the morphological top-hat filter is adopted to detect and compensate for the light reflection inside the pupil, and the Gaussian filter is used to smooth the iris image. Note that this filtered image is only used to produce the gradient image. Each radial gradient image, as shown in Fig. 1(b) and (c), can be obtained by using the estimated parameters with a reasonable range of radii. Finally, the FGCT method is used to trace a smooth curve in each gradient image. To initialize the cluster centers of the FGCT, the iris boundary detected by the traditional localization is uniformly sampled. In Fig. 1(a), the dotted and solid lines indicate the iris boundaries detected by the FGCT algorithm and the traditional



Fig. 2. Several results of the detected iris boundaries, where the dotted lines are produced by the FGCT method. (a) CASIA-IrisV3-Interval database. (b) UBIRIS.v1 database.

scheme, respectively. It can be observed that the proposed method not only fits the iris edges more precisely but also excludes the region occluded by the eyelid, eyelashes, and light reflection. The segmented iris image of Fig. 1(a) is shown in Fig. 1(d). In addition, several segmented iris samples are shown in Fig. 2.

III. LOCAL FEATURE EXTRACTION

In this paper, the Gabor filters are used to extract the local feature points from the segmented iris image in the Cartesian coordinate system and to estimate the dominant orientation of each detected feature point. After that, for each feature point, a rotation-invariant descriptor is generated by using its dominant orientation.

A. Gabor Filters

The multiresolution Gabor filters are directional bandpass filters, which have orientation- and frequency-selective properties and provide optimal joint resolution in both the spatial and frequency domains [32]. In our system, the model of Gabor filters proposed in [33] is adopted to detect the iris feature points and to generate a feature vector for each feature point. The 2-D frequency domain is partitioned into m frequency and n orientation bands. The impulse response function of the filter with the jth radial frequency (ω_rj) and the kth orientation (θ_k) is given by

g_{j,k}(x, y) = exp( −(1/2)(σ_rj² x̄² + σ_θk² ȳ²) ) × exp( i 2π ω_rj x̄ )   (7)

[x̄, ȳ]ᵀ = [cos θ_k, sin θ_k; −sin θ_k, cos θ_k] [x, y]ᵀ   (8)

where 1 ≤ j ≤ m and 1 ≤ k ≤ n; 1/σ_rj and 1/σ_θk are the standard deviations of the Gaussian envelopes along the x̄- and ȳ-axes, respectively. σ_rj, σ_θk, and ω_rj are defined in the



following equations. The angular bandwidth is chosen to be π/n, which results in

σ_θk = σ_θ = π/(2n).   (9)

By choosing 0 < ω_rmin < ω_rmax < 1/2, the radial centers and bandwidths in the frequency domain can be obtained as follows:

ω_rj = ω_rmin + σ_0 (1 + 3(2^{j−1} − 1)),   σ_rj = σ_0 · 2^{j−1}   (10)

where

σ_0 = (ω_rmax − ω_rmin) / (2(2^m − 1)).   (11)

In this paper, 12 Gabor filters corresponding to two radial frequencies and six orientations (m = 2 and n = 6) are utilized to decompose the iris image. After the Gabor decomposition, 12 component images can be obtained by computing the magnitude of each filtered image.

Fig. 1. Example of the iris localization method. (a) Detected iris boundaries, where the dotted and solid lines are produced by the FGCT method and the traditional scheme, respectively. (b)–(c) Gradient images of the inner and outer iris boundaries in the radial direction. (d) Segmented iris image.

B. Local Feature Point Detection

The position with the largest magnitude value in a filtered image is the place that is most similar to the corresponding Gabor filter [14]. Accordingly, we use the Gabor filters to extract the local feature points. In our system, the segmented iris image is decomposed into 12 component images by using the Gabor filters (m = 2 and n = 6) described in Section III-A. Let O_{j,k} denote the component image obtained from the filter with the jth radial frequency and the kth orientation. The maximum image of each radial band is defined as follows:

Õ_j(x, y) = max_{1≤k≤6} (O_{j,k}(x, y)),   for j = 1, 2   (12)

where (x, y) indicates the position in the component image. Then, for each maximum image Õ_j(x, y), the positions of the local maxima can be detected by comparing each pixel value to its neighbors within 3 × 3 pixels. These detected points are regarded as the local feature points. As mentioned previously, the local maximum point in a filtered image can exhibit the characteristics of the corresponding Gabor filter. Moreover, if the absolute value of a point of the kth orientation filter is larger than those of the other filters at the same scale, θ_k can be treated as the dominant orientation at this position. Then, a rotation-invariant feature vector can be generated for each local feature point with respect to its dominant orientation. The steps of producing the rotation-invariant Gabor representation are illustrated in Section III-C.
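The filter construction (7)–(11) and the feature-point detection of (12) can be sketched as follows. This assumes SciPy is available; the bandwidth limits `wr_min`, `wr_max` and the kernel size are illustrative choices, not values reported in the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import fftconvolve

def gabor_bank(m=2, n=6, wr_min=0.1, wr_max=0.4, size=31):
    """Complex Gabor kernels following (7)-(11); 1/sigma_r and
    1/sigma_theta are the Gaussian envelope standard deviations."""
    sigma0 = (wr_max - wr_min) / (2 * (2**m - 1))          # eq. (11)
    sigma_t = np.pi / (2 * n)                              # eq. (9)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for j in range(1, m + 1):
        wr = wr_min + sigma0 * (1 + 3 * (2**(j - 1) - 1))  # eq. (10)
        sigma_r = sigma0 * 2**(j - 1)
        row = []
        for k in range(n):
            th = k * np.pi / n
            xb = x * np.cos(th) + y * np.sin(th)           # rotated axes, eq. (8)
            yb = -x * np.sin(th) + y * np.cos(th)
            g = np.exp(-0.5 * ((sigma_r * xb)**2 + (sigma_t * yb)**2)) \
                * np.exp(2j * np.pi * wr * xb)             # eq. (7)
            row.append(g)
        bank.append(row)
    return bank

def feature_points(img, bank):
    """Component magnitudes, per-band maximum image (12), and 3x3
    local maxima as feature-point positions."""
    pts = []
    for row in bank:
        comps = [np.abs(fftconvolve(img, k, mode='same')) for k in row]
        omax = np.max(comps, axis=0)                       # eq. (12)
        local = (omax == maximum_filter(omax, size=3)) & (omax > 0)
        pts.append(np.argwhere(local))
    return pts
```

The orientation index achieving the maximum in (12) at each detected position would serve as the dominant orientation used in Section III-C.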

C. Local Feature Descriptor

After feature point detection, the radius of a local circular region centered at each feature point is determined to generate a local descriptor. Assume that a feature point is extracted at (x̄, ȳ). The radius for this point can be estimated as follows:

R = arg min_r { [ (1/N_r) Σ_{(x−x̄)²+(y−ȳ)²≤r²} O_{j,k}(x, y) ] < O_{j,k}(x̄, ȳ) × p }   (13)

where N_r denotes the number of pixels within a circular region with radius r and 0 < p < 1 is a proportion factor. According to the estimated radius R and the center of mass of the detected pupillary boundary (Cp), we can define four circular regions for each feature point to generate a feature vector. As shown in Fig. 3, the radius of each region is equal to 1.5R, the distance from the feature point to the center of the other region is 3R, and Rp indicates the distance between the feature point and Cp. For each feature point, the mean value and the standard deviation of each component image within each circular region are computed. In other words, there are 48 (= 4 × 12) mean values and 48 standard deviations for each point. Assume that a feature point is detected in Õ_j′, and its dominant orientation is θ_k. For each region of this feature point, the order of the mean values (M_{j,1}, . . . , M_{j,6}) and the standard deviations (S_{j,1}, . . . , S_{j,6}) computed from the filters with the jth radial frequency is rearranged as (M_{j,k}, . . . , M_{j,6}, M_{j,1}, . . . , M_{j,k−1}) and (S_{j,k}, . . . , S_{j,6}, S_{j,1}, . . . , S_{j,k−1}). That is, it starts with the feature with orientation θ_k and then circularly shifts the index to the first one. Then, the feature vector of this point can be defined as follows:

V = [M¹, S¹, M², S², M³, S³, M⁴, S⁴]ᵀ

Mᵅ = [M^α_{1,k}, . . . , M^α_{1,6}, M^α_{1,1}, . . . , M^α_{1,k−1}, M^α_{2,k}, . . . , M^α_{2,6}, M^α_{2,1}, . . . , M^α_{2,k−1}] = [M^α_1, . . . , M^α_12]

Sᵅ = [S^α_{1,k}, . . . , S^α_{1,6}, S^α_{1,1}, . . . , S^α_{1,k−1}, S^α_{2,k}, . . . , S^α_{2,6}, S^α_{2,1}, . . . , S^α_{2,k−1}] = [S^α_1, . . . , S^α_12],   for α = 1, 2, 3, 4   (14)

where the mean value M^α_{j′,k} and the standard deviation S^α_{j′,k} of the αth region in the component image O_{j′,k} are, respectively, defined as

M^α_{j′,k} = (1/(π(1.5R)²)) Σ_{x,y∈Region α} O_{j′,k}(x, y),  if j′ = 1
M^α_{j′,k} = −(1/(π(1.5R)²)) Σ_{x,y∈Region α} O_{j′,k}(x, y),  if j′ = 2   (15)

S^α_{j′,k} = [ (1/(π(1.5R)²)) Σ_{x,y∈Region α} (O_{j′,k}(x, y) − M^α_{j′,k})² ]^{1/2},  if j′ = 1
S^α_{j′,k} = −[ (1/(π(1.5R)²)) Σ_{x,y∈Region α} (O_{j′,k}(x, y) + M^α_{j′,k})² ]^{1/2},  if j′ = 2.   (16)

The order of the elements in the feature vector is arranged according to the dominant orientation. This strategy can approximately generate a rotation-invariant feature vector. Moreover, the different signs of the two feature vectors in (14)–(16) can distinguish the information from the different radial frequencies of the Gabor filters.

IV. MATCHING

After feature extraction, a matching algorithm is essential to compare two sets of points extracted from different iris images. In this section, we propose an effective matching algorithm with scale-invariant, translation-invariant, and rotation semi-invariant properties for two sets of local feature points. First, we review the dissimilarity measure for a pair of Gabor feature vectors, the fuzzy alignment algorithm, and the possibilistic FCM algorithm. Then, a novel matching method is proposed to compare two feature point sets by taking into consideration the information comprising the Gabor features and the position of each point. Finally, a matching score for two different iris images can be computed. The main contribution of this paper is the proposed matching algorithm, which can provide a novel, robust, and effective matching scheme for the feature points obtained from the Gabor filters. In addition, the nonlinear normalization model is adopted to provide more accurate positions before matching.

Fig. 3. Illustration of the regions for generating a feature descriptor.

A. Point-to-Point Matching

For a pair of iris images, I¹ and I², let V¹ = {V¹_1, V¹_2, . . . , V¹_n1} and V² = {V²_1, V²_2, . . . , V²_n2} denote the corresponding sets of the local feature vectors, respectively. The distance between the ith feature point of I¹ and the jth point of I² can be computed as follows [34]:

D_L(V¹_i, V²_j) = Σ_{α=1}^{4} Σ_{l=1}^{12} ( |M^{1,α}_{i,l} − M^{2,α}_{j,l}| / Ψ(M_l) + |S^{1,α}_{i,l} − S^{2,α}_{j,l}| / Ψ(S_l) )   (17)

where Ψ is a function similar to the standard deviation and Ψ(M_l) is defined as

Ψ(M_l) = [ (1/(4(n₁ + n₂))) Σ_{α=1}^{4} ( Σ_{i=1}^{n₁} (|M^{1,α}_{i,l}| − m_l)² + Σ_{j=1}^{n₂} (|M^{2,α}_{j,l}| − m_l)² ) ]^{1/2}   (18)

m_l = (1/(4(n₁ + n₂))) Σ_{α=1}^{4} ( Σ_{i=1}^{n₁} |M^{1,α}_{i,l}| + Σ_{j=1}^{n₂} |M^{2,α}_{j,l}| )

where n₁ and n₂ are the numbers of local feature points of I¹ and I², respectively. Similarly, Ψ(S_l) can also be computed by using (18) with the parameter M_l replaced by S_l.
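The point-to-point distance (17) with the normalizers (18) vectorizes directly if the 48 region means and standard deviations of each point are stored as (n, 4, 12) arrays — an assumed layout; `dl_matrix` is a hypothetical helper name, not code from the paper.

```python
import numpy as np

def dl_matrix(M1, S1, M2, S2):
    """Pairwise local-feature distance (17) with the spread
    normalizers (18).  M*, S*: (n, 4, 12) arrays of region means and
    standard deviations over the 12 component images."""
    def psi(A1, A2):
        a = np.abs(np.concatenate([A1, A2], axis=0))   # (n1+n2, 4, 12)
        ml = a.mean(axis=(0, 1))                       # per-l mean of |values|
        return np.sqrt(((a - ml)**2).mean(axis=(0, 1))) + 1e-12  # eq. (18)
    psi_m, psi_s = psi(M1, M2), psi(S1, S2)
    dm = np.abs(M1[:, None] - M2[None, :]) / psi_m     # (n1, n2, 4, 12)
    ds = np.abs(S1[:, None] - S2[None, :]) / psi_s
    return (dm + ds).sum(axis=(2, 3))                  # eq. (17), (n1, n2)
```

Note that the normalizers are recomputed per image pair, since (18) pools both feature sets; comparing a set against itself therefore yields a zero diagonal.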



B. Fuzzy Alignment Algorithm

Finding a pose transformation between two given sets of points is well known as the absolute orientation problem. The pose transformation parameters (R for rotation, t for translation, and c for scaling) can be estimated by minimizing a least squares energy function [35]

E(R, t, c) = (1/n) Σ_{i=1}^{n} ‖y_i − (cRx_i + t)‖²   (19)

where X = {x_i : i = 1, . . . , n} and Y = {y_i : i = 1, . . . , n} are two point sets in the m-dimensional space. The least squares method needs prior information about the point correspondence between the two point sets, which is difficult to obtain in practice. To overcome this problem, Marques [36] proposed a fuzzy algorithm to estimate both the pose transformation and the point correspondence in a recursive way. The energy function is formulated as

E_fuzzy(R, t, c) = Σ_{i,j} w_ij^p ‖y_i − (cRx_j + t)‖²   (20)

where p (> 1) is a fuzzifier which controls the fuzziness of the solution and w_ij is the membership value with the following constraint:

Σ_j w_ij = 1   ∀i.   (21)

If w_ij and c are known, this energy function can be solved in an optimal way by using singular value decomposition (SVD) [35].

C. PFM

The fuzzy algorithm mentioned previously may suffer from a problem of outliers because of the constraint in (21). Unfortunately, in pose transformation estimation, it is inevitable to have outliers when two point sets are partially matched, which occurs frequently in our application. Moreover, noise will cause some erroneous estimation and will generate undesirable outliers in practice. A solution to the outlier problem in clustering is to use a possibilistic model [37] that relaxes the column sum constraint. This relaxation increases the freedom of weighting and makes it possible to reduce the effect of outliers by ignoring them. However, the possibilistic model is very sensitive to initialization and may produce coincident clusters. Timm et al. [38] proposed a possibilistic fuzzy clustering approach to avoid the problem of coincident clusters. Pal et al. [39] proposed a new possibilistic FCM model which combines membership and typicality. This new model is less sensitive to outliers and prevents coincident clusters. The possibilistic fuzzy algorithm can be written as follows:

J_{p,q} = Σ_{i,j} (a w_ij^p + b t_ij^q) d(v_i, θ_j) + Σ_j γ_j Σ_i (1 − t_ij)^q   (22)

subject to the constraints

Σ_j w_ij = 1  ∀i,   0 ≤ w_ij, t_ij ≤ 1.   (23)

In (22), a and b are the constants that decide the relative importance of the fuzzy membership (w_ij) and typicality (t_ij) values in the energy function; p > 1, q > 1, and γ_j > 0 are user-defined constants; and d(v_i, θ_j) is the distance measured between the ith unlabeled point and the jth center.

The iris pattern is often occluded by the eyelid and eyelashes, and the area of the occlusion can vary a lot. In this situation, the number of detected feature points for an iris image will differ from those of the other images in the same class, which causes the problem of outliers in the matching stage. In order to deal with the outlier problem, we propose a PFM algorithm for a pair of point sets by combining the fuzzy alignment algorithm with the possibilistic FCM model. Moreover, both the position and the local feature vector of each point are used to estimate the pose transformation and the point correspondence. Let the local feature set and the corresponding position set of an iris template I¹ be denoted as V¹ = {V¹_i : i = 1, . . . , n₁} and Y = {y_i : i = 1, . . . , n₁}, and let those of a test image I² be denoted as V² = {V²_j : j = 1, . . . , n₂} and X = {x_j : j = 1, . . . , n₂}, respectively. The energy function of the PFM is defined as follows:

E_PFM(I¹, I²) = Σ_{i=1}^{n₁} Σ_{j=1}^{n₂} (a w_ij^p + b t_ij^q) D_S(i, j) + Σ_{j=1}^{n₂} γ_j Σ_{i=1}^{n₁} (1 − t_ij)^q   (24)

with D_S(i, j) = D_G²(y_i, x_j) + f · D_L²(V¹_i, V²_j), where f is the distance weight that controls the importance of the distance D_L between a pair of local feature vectors, defined in (17), and D_G is defined in the following equation:

D_G(y_i, x_j) = ‖y_i − (cRx_j + t)‖.   (25)

By minimizing the energy function of the PFM, we can obtain the membership and the typicality

w_ij = [ Σ_{k=1}^{n₂} ( D_S(i, j) / D_S(i, k) )^{1/(p−1)} ]^{−1}   (26)

t_ij = [ 1 + ( b · D_S(i, j) / γ_j )^{1/(q−1)} ]^{−1}.   (27)

Let H denote the cross covariance matrix

H = Σ_{ij} (a w_ij^p + b t_ij^q) (y_i − ȳ)(x_j − x̄)ᵀ = UDVᵀ   (28)

where UDVᵀ is an SVD of H. Then, the transformation parameters can be defined as follows [35], [36]:

R = USVᵀ   (29)

t = ȳ − cRx̄   (30)


where

x̄ = (1/C) Σ_{ij} (a w_ij^p + b t_ij^q) x_j,   ȳ = (1/C) Σ_{ij} (a w_ij^p + b t_ij^q) y_i,   C = Σ_{ij} (a w_ij^p + b t_ij^q)   (31)

S = I, if det(U) det(V) = 1;   S = diag(1, 1, . . . , −1), if det(U) det(V) = −1.   (32)

Note that the rotation matrix R is constrained by S to avoid a reflection result. Moreover, if a = 1 and b = 0, the solutions of R and t shown in (29)–(32) are similar to those in Marques’ algorithm [36]. To obtain c, we solve ∂E_PFM/∂c = 0, which yields the following updating equation for the scaling:

c = [ Σ_{ij} (a w_ij^p + b t_ij^q) (x_j − x̄)ᵀ Rᵀ (y_i − ȳ) ] / [ Σ_{ij} (a w_ij^p + b t_ij^q) (x_j − x̄)ᵀ Rᵀ R (x_j − x̄) ].   (33)
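One inner iteration of the PFM — the weight updates (26)–(27) followed by the SVD pose step (28)–(33) — can be sketched as follows for 2-D point sets. The function names are hypothetical, and the default γ_j is a heuristic stand-in for the user-defined constants of (22).

```python
import numpy as np

def pfm_weights(DS, p=2.0, q=2.0, b=1.0, gamma=None):
    """Membership (26) and typicality (27) from the combined
    distance DS of shape (n1, n2)."""
    DS = DS + 1e-12
    if gamma is None:                     # heuristic scale per test point
        gamma = DS.mean(axis=0)
    r = (DS[:, :, None] / DS[:, None, :]) ** (1.0 / (p - 1.0))
    w = 1.0 / r.sum(axis=2)                                    # eq. (26)
    t = 1.0 / (1.0 + (b * DS / gamma) ** (1.0 / (q - 1.0)))    # eq. (27)
    return w, t

def pose_step(X, Y, w, t, a=1.0, b=1.0, p=2.0, q=2.0):
    """Weighted absolute-orientation step: cross covariance (28),
    rotation (29) with the reflection guard (32), translation (30),
    and scale (33).  Y: (n1, 2) template points, X: (n2, 2) test
    points, w/t: (n1, n2) membership and typicality."""
    W = a * w**p + b * t**q               # pair weights a*w^p + b*t^q
    C = W.sum()
    xbar = (W.sum(axis=0) @ X) / C        # eq. (31)
    ybar = (W.sum(axis=1) @ Y) / C
    Xc, Yc = X - xbar, Y - ybar
    H = Yc.T @ W @ Xc                     # eq. (28)
    U, _, Vt = np.linalg.svd(H)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt.T) < 0:
        S[-1, -1] = -1.0                  # eq. (32): avoid reflection
    R = U @ S @ Vt                        # eq. (29)
    num = (W * (Yc @ R @ Xc.T)).sum()     # eq. (33) numerator
    den = (W.sum(axis=0) * (Xc**2).sum(axis=1)).sum()  # R^T R = I
    c = num / den
    trans = ybar - c * (R @ xbar)         # eq. (30)
    return R, trans, c
```

With hard one-to-one correspondences (an identity membership matrix, b = 0), `pose_step` reduces to the classical SVD solution of the absolute orientation problem and recovers a known similarity transform exactly.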

The definitions of a and b in (24) are the same as those in (22). The recursive estimation of the parameters in (24) is implemented in two steps: the estimation of the transformation parameters and the updating of the fuzzy weights. Given two point sets X and Y, the recursive algorithm can be briefly described as follows.

1) Initialize w_ij^[0] and t_ij^[0], and set an energy threshold ε, where [·] indicates the iteration number.
2) Calculate R^[s], t^[s], and c^[s] according to (29), (30), and (33) with w_ij^[s] and t_ij^[s], respectively.
3) Estimate w_ij^[s+1] and t_ij^[s+1] according to (26) and (27), respectively, with R^[s], t^[s], and c^[s] from step 2.
4) Calculate the energy E_PFM^[s] according to (24) with R^[s], t^[s], c^[s], w_ij^[s+1], and t_ij^[s+1].
5) If |E_PFM^[s] − E_PFM^[s−1]| < ε, s > 0, stop the algorithm; else, set s = s + 1, and go to step 2.

The distance weight f defines the relative importance of D_L and D_G in the energy function. A larger f indicates that the dissimilarity measurement of the Gabor feature is more important than the error of the estimation of the pose parameters. According to this property, the weight f can be used to avoid converging to a local minimum that may result from the initial positions of the two point sets. In practice, the weight f is designed to be a monotonically decreasing function of the iteration number, and it remains constant after a specific number of steps. In the initial iteration steps, the PFM algorithm with a larger value of the distance weight is able to perform a global exploration of the point correspondence. As the distance weight decreases, the PFM can search the pose parameters locally.

The proposed PFM algorithm attempts to estimate the pose parameters, such as translation, rotation, and overall scaling, to achieve a linear alignment for iris matching. However, the nonlinear scaling in the angular direction is embedded in pupil dilation. Yuan et al. [40] proposed a nonlinear normalization model, inspired by the minimum-wear-and-tear meshwork model [41], to unwrap the annular iris region to a rectangular block of a fixed size. This normalization model can deal with the nonlinear scaling by using an iterative algorithm. Li [42] found a closed-form solution of Yuan's model by using the law of cosines. In our system, the nonlinear model is also adopted to compensate for the deformation caused by the nonlinear scaling. The position of each feature point of a test image is first nonlinearly transformed to a reference annulus. Then, the transformed position and the Gabor feature of each feature point in the test image are used as the test data in the PFM algorithm.

In the following, the nonlinear normalization is derived. Assume that the distance between a feature point and the center of mass of the detected pupil Cp is Rp and that the radii of the detected pupil, reference pupil, and iris root (estimated limbus) are r_d, r_ref, and r_root, respectively. According to Yuan's model, as shown in Fig. 4, the distance Rp (= |CpA|) should be mapped to Rp′ (= |CpA′|). Let the centers and the radii of the two virtual arcs (arc(PI′) and arc(P′I′)) be denoted as O1, O2, r1, and r2, respectively, where the radii r1 and r2 are defined as r1 = (r_root² + r_d²)/(2 r_root) and r2 = (r_root² + r_ref²)/(2 r_root). Based on Li's idea [42], Rp′ can be computed as follows:

Rp′ = (r_root − r2) cos θ_p + r2 cos( sin⁻¹( ((r_root − r2)/r2) sin θ_p ) )    (34)

where

θ_p = cos⁻¹( (r_root² − 2 r_root r1 + Rp²) / (2 Rp (r_root − r1)) ).    (35)
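Li's closed-form mapping (34)–(35) can be sketched directly in code. This is a minimal illustration assuming the notation above (r_d, r_ref, r_root); the function and variable names are our own, not the paper's.

```python
import numpy as np

def normalize_radius(Rp, r_d, r_ref, r_root):
    """Closed-form radial mapping of (34)-(35): maps the distance Rp of a
    feature point from the pupil center to its reference-annulus value Rp'.
    r_d, r_ref, r_root: detected-pupil, reference-pupil, and iris-root radii."""
    # radii of the two virtual arcs
    r1 = (r_root**2 + r_d**2) / (2.0 * r_root)
    r2 = (r_root**2 + r_ref**2) / (2.0 * r_root)
    # (35): law-of-cosines angle at the pupil center
    theta_p = np.arccos((r_root**2 - 2.0 * r_root * r1 + Rp**2)
                        / (2.0 * Rp * (r_root - r1)))
    # (34): mapped radius on the reference annulus
    return ((r_root - r2) * np.cos(theta_p)
            + r2 * np.cos(np.arcsin((r_root - r2) / r2 * np.sin(theta_p))))
```

As a sanity check, a point on the detected pupil boundary (Rp = r_d) maps to r_ref, and a point at the iris root (Rp = r_root) is left fixed.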

In our system, the radius of the reference pupil r_ref is defined as follows:

r_ref = (r̂_p × r̂_l) / r_root    (36)

where r̂_p and r̂_l are the radii of the pupil and the limbus of the template iris, respectively. After the nonlinear transformation, the new position set X′ = {x′_j : j = 1, . . . , n2} of the feature points of a test image can be computed as follows:

x′_j = Rp′_j [cos θ̂_j  sin θ̂_j]^T, for j = 1, . . . , n2    (37)

and the original position set X in (24) is then replaced by X′. In the following, the matching strategy combining the PFM with the nonlinear model will be referred to as the PFM-NM.

D. Matching Score

After the termination of the PFM, the point correspondence between the ith feature point of an iris template and the jth feature point of a test image can be estimated as follows:

κ_ij = (a w_ij + b t_ij) / (a + b).    (38)
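The position update of (36)–(37) can be sketched as follows. The radial mapping of (34)–(35) is passed in as a callable so that the sketch stays self-contained; all names here are illustrative rather than the paper's.

```python
import numpy as np

def normalize_points(points, center, radius_map, r_hat_p, r_hat_l, r_root):
    """Map test-image feature points to the reference annulus, (36)-(37).

    points     : (n2, 2) Cartesian feature-point positions.
    center     : pupil center of mass Cp.
    radius_map : callable (Rp, r_ref) -> Rp' realizing the closed form
                 of (34)-(35)."""
    r_ref = r_hat_p * r_hat_l / r_root        # reference pupil radius, (36)
    d = points - center
    Rp = np.hypot(d[:, 0], d[:, 1])           # radial coordinate of each point
    theta = np.arctan2(d[:, 1], d[:, 0])      # direction theta-hat of each point
    Rp_new = radius_map(Rp, r_ref)            # nonlinear radial mapping
    # (37): rebuild Cartesian coordinates at the mapped radius
    return np.column_stack([Rp_new * np.cos(theta), Rp_new * np.sin(theta)])
```

With the identity mapping `lambda Rp, r_ref: Rp`, the function simply re-centers the points, which makes the decomposition into radius and direction easy to verify.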

Fig. 5. Histogram of the number of detected feature points of each iris image in the CASIA-IrisV3-Interval database.

Fig. 4. Illustration of the nonlinear normalization model. Here, the direction of the feature point is θ̂, and the distance between the feature point and Cp is Rp (= |CpA|).

If κ_ij is larger than max_{1≤m≤n2, m≠j} κ_im, larger than max_{1≤l≤n1, l≠i} κ_lj, and larger than a specific threshold, these two points can be added to the list of matching pairs, and the dissimilarity of their Gabor feature vectors (D_{L,k}(I^1, I^2) = D_L(V_i^1, V_j^2)) can be computed by using (17). Let T_5 = 0 < T_4 < T_3 < T_2 < T_1 < T_0 = ∞ be six threshold values. The similarity score of the kth matching pair is then defined as follows:

ζ_k(I^1, I^2) = i,  if T_{i+1} ≤ D_{L,k}(I^1, I^2) < T_i.    (39)

Assume that there are N matching pairs for a pair of iris images. Then, the average matching score can be computed as follows:

ζ(I^1, I^2) = (1 / min(n_1, n_2)) Σ_{k=1}^{N} ζ_k(I^1, I^2)    (40)

where n_1 and n_2 are the numbers of feature points of the template and test images, respectively. The best average matching score is equal to four, which is attained when N is equal to min(n_1, n_2) and every pair falls in the highest-similarity interval [T_5, T_4). Note that, according to the asymmetric formulation of (24), E_PFM(I^1, I^2) may not be equal to E_PFM(I^2, I^1). Consequently, the matching scores may be unequal when the roles of I^1 and I^2 are interchanged, i.e., ζ(I^1, I^2) ≠ ζ(I^2, I^1). Therefore, to remove the asymmetry and to achieve a higher performance, the proposed PFM algorithm is performed twice to calculate two matching scores (ζ(I^1, I^2) and ζ(I^2, I^1)) in the matching procedure. Then, max(ζ(I^1, I^2), ζ(I^2, I^1)) is adopted as the final matching score of this pair of iris images.

V. EXPERIMENTAL RESULTS

In the experiments, we carried out verification and identification tests to evaluate the proposed iris recognition system.
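The pair selection of (38) and the scoring of (39)–(40) above can be sketched as follows. The threshold values T_1–T_4 and the κ threshold used here are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def match_score(kappa, DL, thresholds=(np.inf, 4.0, 3.0, 2.0, 1.0, 0.0),
                kappa_min=0.5):
    """Similarity score of (38)-(40).

    kappa      : (n1, n2) correspondence matrix from (38).
    DL         : (n1, n2) Gabor-feature dissimilarities D_L(V_i^1, V_j^2).
    thresholds : T0 > T1 > ... > T5 as in (39); placeholder values.
    kappa_min  : the 'specific threshold' on kappa (also a placeholder)."""
    n1, n2 = kappa.shape
    scores = []
    for i in range(n1):
        for j in range(n2):
            k = kappa[i, j]
            # (i, j) is a matching pair only if kappa[i, j] beats every
            # other entry in its row and column plus the fixed threshold
            row_rest = np.delete(kappa[i, :], j).max() if n2 > 1 else -np.inf
            col_rest = np.delete(kappa[:, j], i).max() if n1 > 1 else -np.inf
            if k > row_rest and k > col_rest and k > kappa_min:
                d = DL[i, j]
                # (39): quantize the dissimilarity to a level in {0, ..., 4}
                for level in range(5):       # T_{level+1} <= d < T_{level}
                    if thresholds[level + 1] <= d < thresholds[level]:
                        scores.append(level)
                        break
    # (40): average over min(n1, n2); the best possible value is 4
    return sum(scores) / min(n1, n2)
```

The symmetric final score would then be the maximum of the two directional scores, mirroring max(ζ(I^1, I^2), ζ(I^2, I^1)) above.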

Moreover, the performance of our system was compared with those of existing systems, including the typical methods and the systems based on local features. The comparative results show that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.

A. Database

The proposed iris recognition system was evaluated on the CASIA-IrisV3-Interval database [43] and on the UBIRIS.v1 database [44]. The CASIA database is a popular iris database and is widely adopted to evaluate iris recognition systems. There are several identical images and mislabeled iris samples in this database [26], [43]. In our experiments, the identical images and the mislabeled samples were discarded to simplify the analysis. Moreover, we selected 2553 iris images from 349 classes in which there are at least four samples in each class to evaluate the system performance. For each iris image, the number of extracted local feature points is different. The histogram of the number of feature points of each selected iris image in the CASIA-IrisV3-Interval database is shown in Fig. 5.

For the UBIRIS.v1 database, we selected 800 samples corresponding to 80 iris classes to evaluate the performance of our system. Each class contains ten color iris images captured under natural illumination in two distinct sessions. In order to simulate less constrained image acquisition environments, noise factors such as surrounding reflections and defocusing were introduced to the iris images in the second session. In our experiments, each iris image was converted to gray scale and then used to evaluate the system performance.

B. Iris Verification

The verification system is used to verify or reject the identity that a user claims. In this paper, the receiver operating characteristic (ROC) curve, the equal error rate (EER), and the area under the ROC curve (AUC) are adopted to assess the performance in the verification scenario [11].
For the CASIA-IrisV3-Interval

Fig. 6. Distributions of the intraclass and interclass matching scores for the CASIA database. (a) Original PFM method. (b) PFM-NM scheme.

TABLE II
VERIFICATION RESULTS ON THE CASIA DATABASE: FRRs OF THE PROPOSED METHODS AT THREE DIFFERENT FAR OPERATING STATES

database, the distributions of the intraclass and interclass distances of the PFM-NM and original PFM methods are shown in Fig. 6. The area of overlap between the two distributions of the PFM-NM algorithm is smaller than that of the original PFM. In other words, the performance of the proposed iris recognition system can be improved by using the nonlinear normalization model. Table II summarizes the false rejection rates (FRRs) of the proposed methods at three different operating states.

The iris pattern is usually occluded by the eyelid. A large occlusion area can affect the detection of the feature points and can result in information insufficiency. To evaluate the adverse effects of the occlusion, an additional experimental condition was used to reject the iris images occluded by the eyelid. Let the ratio of the occluded area to the area of the complete iris pattern be defined as the occluded ratio. If the occluded ratio of an iris sample is larger than the rejection threshold To, this sample is discarded. Several severely occluded iris images are shown in Fig. 7. As summarized in Table III, the EERs of the original PFM and the PFM-NM without discarding any occluded sample are 0.3960% and 0.1482%, respectively. The EERs of the original PFM and PFM-NM methods are significantly reduced to 0.2713% and 0.0591%, respectively, when the iris images with an occluded ratio above 35% are rejected. The ROC curves of both methods are shown in Fig. 8, which also indicates that the performance of the proposed system can be improved by discarding iris images with large occlusions.

For the UBIRIS.v1 database, the iris images in the distinct sessions were taken under different environments. Since the images in the second session were acquired under the condition of natural illumination, the image quality of each iris sample in the first session is better than that in the second session.
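The EER reported throughout this section is the operating point at which the FRR and the FAR are equal. A generic way to estimate it from lists of genuine and impostor matching scores is sketched below; this is a standard computation, not the paper's evaluation code.

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate from genuine/impostor similarity scores.
    Sweeps the decision threshold over all observed scores and returns
    the rate at the point where FRR and FAR are closest."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best = (1.0, 0.0)                        # (frr, far) with the largest gap
    for th in thresholds:
        frr = np.mean(genuine < th)          # genuine scores rejected
        far = np.mean(impostor >= th)        # impostor scores accepted
        if abs(frr - far) < abs(best[0] - best[1]):
            best = (frr, far)
    return 0.5 * (best[0] + best[1])
```

For perfectly separated score distributions the EER is zero; overlapping distributions yield a value between 0 and 0.5.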

The verification performance is evaluated by using the iris images in both sessions, those in the first session, and those in the second session. The ROC curves of the PFM-NM system are shown in Fig. 9. As shown in Fig. 9 and Table IV, the performance evaluated on the first session is similar to that evaluated on the second session. However, the verification result from comparing all iris images is not as good as those from comparing the iris images acquired in the same session. This is because the characteristics of the local features of noisy and clear images in the same class might be different. Iris image quality assessment and enhancement methods [10], [21] can be applied to improve the system performance.

C. Identification Performance

For iris identification, the system attempts to determine the identity by comparing an input sample with all enrolled templates in a database. The correct recognition rate (CRR) is widely adopted to evaluate the performance of an identification system. In our experiments, three iris samples of each class are randomly selected to construct a template set, and the remaining images are used as the test samples. Assume that there are Φ iris classes in the template set. Let ζ^{φ,h} denote the matching score of a test sample and the hth template of the φth class. The test iris sample is assigned to class C, which contains the template with the largest matching score to this test sample, i.e., C = arg max_{1≤φ≤Φ, 1≤h≤3} ζ^{φ,h}. Then, the CRR can be computed by counting the correctly matched data. This scheme is repeated 1000 times, and the average CRR is computed. For the CASIA-IrisV3-Interval database, the template and test sets contain 1047 and 1506 samples, respectively. The average CRRs of the original PFM and PFM-NM methods are 99.968% and 99.974%, respectively. For the UBIRIS database, the average CRR of the PFM-NM system is 97.196%.

D. Comparison and Discussion

There are two major factors that decrease the performance of our iris recognition system. First, the significant deformation

Fig. 7. Some examples of iris images with large occluded ratios.

TABLE III
VERIFICATION RESULTS OF THE CASIA DATABASE FOR ORIGINAL PFM AND PFM-NM USING VARIED OCCLUSION THRESHOLDS

Fig. 8. ROC curves with different occlusion thresholds for the CASIA database. (a) Original PFM. (b) PFM-NM.

TABLE IV
VERIFICATION RESULTS OF THE PFM-NM METHOD EVALUATED ON THE DIFFERENT SESSIONS OF THE UBIRIS DATABASE

Fig. 9. ROC curves of the PFM-NM for the different subsets of the UBIRIS database.

of iris texture around the pupil can result in a low genuine matching score. The second adverse factor is that part of the iris is occluded by the eyelid and eyelashes. In this situation,

the number of detected local feature points is not sufficient to discriminate iris images from different classes.

For the purpose of comparison, the performance of our iris recognition system is compared with those of a typical framework and a system based on local features. Therefore, we implemented Zhu et al.'s iris feature extraction [28], which is based on the SIFT method, together with our iris segmentation method. The performance of the reimplemented system was evaluated on the CASIA-IrisV3-Interval database. In addition, the publicly available source code of Masek's iris recognition system [45] was also evaluated on the CASIA-IrisV3-Interval database. Since the parameters of Masek's source code were optimized for the CASIA-V1 database, which is a subset of the

TABLE V
COMPARISON OF VERIFICATION RESULTS

VI. CONCLUSION

In this paper, we have introduced a novel matching algorithm with invariant properties for an iris recognition system based on the local feature points extracted by a bank of Gabor filters. The proposed matching algorithm, which is based on the PFM method, is used to compare two sets of feature points by using information comprising the local features and the position of each point. Moreover, a fuzzy curve detection method has been proposed to extract the inner and outer boundaries of the iris from a gray-level image. The experimental results have shown that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.

REFERENCES

Fig. 10. Comparative ROC curves for the CASIA-V3-Interval database.

TABLE VI
REPORTED VERIFICATION RESULTS OF THE WELL-KNOWN SYSTEMS FROM [11]

CASIA-IrisV3-Interval database, it is possible for the system to incorrectly detect the iris boundaries when the CASIA-IrisV3-Interval database is used. For a fair comparison, each wrongly segmented sample was corrected manually. Moreover, Belcher and Du proposed a region-based SIFT method to construct an iris recognition system [29]. Their system was evaluated on the CASIA-V1 database. The comparative results are summarized in Table V. Accordingly, the proposed system using the PFM-NM method can achieve a higher performance in terms of the EER and the AUC. Moreover, the comparative ROC curves of Zhu et al.'s scheme, Masek's method, and our systems are shown in Fig. 10.

Ma et al. [11] implemented the state-of-the-art iris recognition systems proposed by Daugman [2]–[4], Wildes et al. [6], and Boles et al. [9]. The performances of these reimplemented systems were evaluated on a CASIA database in which the number of iris samples is slightly different from that in the public CASIA-IrisV3-Interval database. However, the reported performances can be adopted for a reasonable comparison. According to the reported performances shown in Table VI, the performance of our system is comparable to those of the well-known systems.

[1] L. Flom and A. Safir, "Iris recognition system," U.S. Patent 4 641 349, Feb. 3, 1987.
[2] J. G. Daugman, "High confidence personal identification by rapid video analysis of iris texture," in Proc. IEEE Int. Carnahan Conf. Security Technol., 1992, pp. 1–11.
[3] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148–1161, Nov. 1993.
[4] J. G. Daugman, "Biometric personal identification system based on iris analysis," U.S. Patent 5 291 560, Mar. 1, 1994.
[5] R. P. Wildes, J. C. Asmuth, G. L. Green, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride, "A system for automated iris recognition," in Proc. IEEE Workshop Appl. Comput. Vis., 1994, pp. 121–128.
[6] R. P. Wildes, J. C. Asmuth, G. L. Green, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride, "A machine-vision system for iris recognition," Mach. Vis. Appl., vol. 9, no. 1, pp. 1–8, Jan. 1996.
[7] R. P. Wildes, J. C. Asmuth, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride, "Automated, noninvasive iris recognition system and method," U.S. Patent 5 572 596, Nov. 5, 1996.
[8] R. P. Wildes, "Iris recognition: An emerging biometric technology," Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.
[9] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Trans. Signal Process., vol. 46, no. 4, pp. 1185–1188, Apr. 1998.
[10] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Personal identification based on iris texture analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 12, pp. 1519–1533, Dec. 2003.
[11] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Local intensity variation analysis for iris recognition," Pattern Recognit., vol. 37, no. 6, pp. 1287–1298, Jun. 2004.
[12] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, Jun. 2004.
[13] Z. Sun, T. Tan, and Y. Wang, "Robust encoding of local ordinal measures: A general framework of iris recognition," in Proc. ECCV Workshop BioAW, 2004, pp. 270–282.
[14] L. Yu, D. Zhang, and K. Wang, "The relative distance of key point based iris recognition," Pattern Recognit., vol. 40, no. 2, pp. 423–430, Feb. 2007.
[15] C. T. Chu and C.-H. Chen, "High performance iris recognition based on LDA and LPCC," in Proc. IEEE Int. Conf. Tools With Artif. Intell., 2005, pp. 417–421.
[16] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation," Pattern Recognit., vol. 38, no. 2, pp. 231–240, Feb. 2005.
[17] D. M. Monro, S. Rakshit, and Z. Dexin, "DCT-based iris recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 586–595, Apr. 2007.
[18] K. Miyazawa, K. Ito, T. Aoki, K. Kobayashi, and H. Nakajima, "An effective approach for iris recognition using phase-based image matching," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 10, pp. 1741–1756, Oct. 2008.
[19] K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, "Image understanding for iris biometrics: A survey," Comput. Vis. Image Understand., vol. 110, no. 2, pp. 281–307, May 2008.
[20] L. Birgale and M. Kokare, "Iris recognition without iris normalization," J. Comput. Sci., vol. 6, no. 9, pp. 1042–1047, 2010.

[21] M. Vatsa, R. Singh, and A. Noore, "Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 4, pp. 1021–1035, Aug. 2008.
[22] A. Tsai, A. Yezzi, Jr., and A. S. Willsky, "Curve evolution implementation of the Mumford–Shah functional for image segmentation, denoising, interpolation, and magnification," IEEE Trans. Image Process., vol. 10, no. 8, pp. 1169–1186, Aug. 2001.
[23] S. Shah and A. Ross, "Iris segmentation using geodesic active contours," IEEE Trans. Inf. Forensics Security, vol. 4, no. 4, pp. 824–836, Dec. 2009.
[24] K. Roy, P. Bhattacharya, and C. Y. Suen, "Towards nonideal iris recognition based on level set method, genetic algorithms and adaptive asymmetrical SVMs," Eng. Appl. Artif. Intell., vol. 24, no. 3, pp. 458–475, Apr. 2011.
[25] H. Proença and L. A. Alexandre, "Introduction to the special issue on the segmentation of visible wavelength iris images captured at-a-distance and on-the-move," Image Vis. Comput., vol. 28, no. 2, pp. 213–214, Feb. 2010.
[26] C. C. Tsai, J. S. Taur, and C. W. Tao, "Iris recognition based on relative variation analysis with feature selection," Opt. Eng., vol. 47, no. 9, p. 097202, Sep. 2008.
[27] C. C. Tsai, J. S. Taur, and C. W. Tao, "Iris recognition using Gabor filters optimized by the particle swarm algorithm," J. Electron. Imaging, vol. 18, no. 2, p. 023009, May 2009.
[28] R. Zhu, J. Yang, and R. Wu, "Iris recognition based on local feature point matching," in Proc. Int. Symp. Commun. Inf. Technol., 2006, pp. 451–454.
[29] C. Belcher and Y. Du, "Region-based SIFT approach to iris recognition," Opt. Lasers Eng., vol. 47, no. 1, pp. 139–147, Jan. 2009.
[30] H. Yan, "Fuzzy curve-tracing algorithm," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 31, no. 5, pp. 768–780, Oct. 2001.
[31] H. Yan, "Convergence condition and efficient implementation of the fuzzy curve-tracing (FCT) algorithm," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 34, no. 1, pp. 210–221, Feb. 2004.
[32] J. G. Daugman, "Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 2, no. 7, pp. 1160–1169, Jul. 1985.
[33] B. Duc, S. Fischer, and J. Bigün, "Face authentication with Gabor information on deformable graphs," IEEE Trans. Image Process., vol. 8, no. 4, pp. 504–516, Apr. 1999.
[34] B. S. Manjunath and W. Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 8, pp. 837–842, Aug. 1996.
[35] S. Umeyama, "Least-squares estimation of transformation parameters between two point patterns," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 4, pp. 376–380, Apr. 1991.
[36] J. S. Marques, "A fuzzy algorithm for curve and surface alignment," Pattern Recognit. Lett., vol. 19, no. 9, pp. 797–803, Apr. 1998.
[37] R. Krishnapuram and J. Keller, "A possibilistic approach to clustering," IEEE Trans. Fuzzy Syst., vol. 1, no. 2, pp. 98–110, Apr. 1993.
[38] H. Timm, C. Borgelt, C. Döring, and R. Kruse, "An extension to possibilistic fuzzy cluster analysis," Fuzzy Sets Syst., vol. 147, no. 1, pp. 3–16, Oct. 2004.
[39] N. R. Pal, K. Pal, J. M. Keller, and J. C. Bezdek, "A possibilistic fuzzy c-means clustering algorithm," IEEE Trans. Fuzzy Syst., vol. 13, no. 4, pp. 517–530, Aug. 2005.
[40] X. Yuan and P. Shi, "A non-linear normalization model for iris recognition," in Proc. Adv. Biometric Person Authentication, 2005, pp. 135–141.
[41] H. J. Wyatt, "A 'minimum-wear-and-tear' meshwork for the iris," Vis. Res., vol. 40, no. 16, pp. 2167–2176, Jul. 2000.
[42] J. C. Li, "Fast computation for iris normalization," Thesis, Graduate Inst. Commun. Eng., Nat. Chi Nan Univ., Puli, Taiwan, 2009.
[43] CASIA-IrisV3. [Online]. Available: http://www.cbsr.ia.ac.cn/IrisDatabase.htm
[44] H. Proença and L. A. Alexandre, "UBIRIS: A noisy iris image database," in Proc. Int. Conf. Image Anal. Process., 2005, vol. 1, pp. 970–977.
[45] L. Masek and P. Kovesi, MATLAB Source Code for a Biometric Identification System Based on Iris Patterns. Perth, Australia: School Comput. Sci. Softw. Eng., Univ. Western Australia, 2003.

Chung-Chih Tsai received the B.S., M.S., and Ph.D. degrees in electrical engineering from the National Chung Hsing University, Taichung, Taiwan, in 2001, 2003, and 2009, respectively. He served as a Postdoctoral Fellow with the Graduate Institute of Communication Engineering, National Chung Hsing University, from 2009 to 2010. His research interests include biometrics, image processing, pattern recognition, neural networks, and machine learning.

Heng-Yi Lin received the B.S. and M.S. degrees in electrical engineering from the National Chung Hsing University, Taichung, Taiwan, in 2003 and 2005, respectively, where he is currently working toward the Ph.D. degree in the Department of Electrical Engineering. His research interests include simultaneous localization and mapping, and path planning.

Jinshiuh Taur received the B.S. and M.S. degrees in electrical engineering from the National Taiwan University, Taipei, Taiwan, in 1987 and 1989, respectively, and the Ph.D. degree in electrical engineering from Princeton University, Princeton, NJ, in 1993. He was a member of the Technical Staff of Siemens Corporate Research, Inc. He is currently a Professor with the National Chung Hsing University, Taichung, Taiwan. His primary research interests include neural networks, pattern recognition, computer vision, and fuzzy logic systems. Dr. Taur received the 1996 IEEE Signal Processing Society’s Best Paper Award.

Chin-Wang Tao received the B.S. degree in electrical engineering from the National Tsing Hua University, Hsinchu, Taiwan, in 1984 and the M.S. and Ph.D. degrees in electrical engineering from New Mexico State University, Las Cruces, in 1989 and 1992, respectively. He is currently a Professor with the Department of Electrical Engineering, National I-Lan University, I-Lan, Taiwan. His research interests are in fuzzy neural systems, including fuzzy control systems and fuzzy neural image processing. Dr. Tao is an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS.
