3rd International Conference on Electrical & Computer Engineering ICECE 2004, 28-30 December 2004, Dhaka, Bangladesh

PERSON IDENTIFICATION BY RETINA PATTERN MATCHING

S M Raiyan Kabir(1), Rezwanur Rahman(2), Mursalin Habib(1) and M Rezwan Khan(3)

1 Department of Electrical and Electronic Engineering, BUET, Dhaka-1000, Bangladesh
2 Department of Mechanical Engineering, BUET, Dhaka-1000, Bangladesh
3 School of Science and Engineering, United International University, Dhanmondi, Dhaka-1215
Email: [email protected], [email protected], [email protected], [email protected]

ABSTRACT

A new way of person identification based on the retina pattern is presented in this paper. It consists of retina image acquisition, image preprocessing, feature extraction and, finally, matching of the patterns. The algorithm is based on color centroid calculation and its variation over a polar grid. The method presented in this paper is translation and rotation invariant. The correlation coefficient is used to quantify the degree of matching.

1. INTRODUCTION

The recent upswing in technology and increasing concern related to security have caused a boost in intelligent person identification systems based on biometrics. Biometrics employs physical and behavioral characteristics to identify a person. Common biometric features are fingerprint, voice, gait, facial thermogram, signature, face, palm print, hand geometry, iris and retina [1]. Fingerprint is the oldest and most popular, and is widely used by law enforcers. Iris and retina recognition are relatively new approaches compared to other biometric features [1]. The retina pattern is more stable and reliable for identification, which makes retina recognition a prominent solution to security in the near future [1]. A substantial amount of work has recently been reported on iris [4-5], fingerprint [6-7], face [8], etc. Iris recognition, as an emerging biometric recognition approach, is turning out to be a very interesting topic for both research and practical application [1]. Although first proposed in the 1980s, retina pattern matching is one of the least deployed techniques [1]. In this paper, retina image acquisition and preprocessing of the captured image are discussed along with a method of retina pattern matching. Some interesting results indicating the reliability of the method are also presented.

ISBN 984-32-1804-4


2. RETINA IMAGE ACQUISITION AND PREPROCESSING

A vital part of the retina verification system is image acquisition. As the retina is a very small internal part of the eye, special arrangements are needed. A FUNDUS camera (a medical device) with a built-in photo CCD camera and the required software and hardware was used to capture the images [2]. All retina images were taken at constant illumination with a 50° coverage angle. Images captured with this setup are of very high resolution, 1000×1504. Pictures of the right eye are used for the analysis. As captured images always contain translational and rotational displacement, as well as some uneven color variation due to distance, position of the light source and other physical reasons, image preprocessing is necessary to obtain a reasonably noiseless image. Image preprocessing is done in three steps: i) locating reference points, ii) filtration and iii) compensation for translational and rotational displacement.

2.1 Locating Reference Points

In this step two reference points have to be located to compensate for all translational and rotational shifts. Every retina contains an optic nerve (the brightest portion of the retina image) and a fovea (a gradually darker portion at the center of the retina image). The centers of the optic nerve and the fovea are detected and taken as the reference points.

Locating Fovea Center

The 24-bit color image of the retina was separated into three color layers: red, green and blue (Fig. 1).

Fig. 1 Red layer (top), Green layer (bottom left) and Blue layer (bottom right) of the retina printed in gray scale (fovea and optic nerve are shown in the red layer)

To locate the fovea center only one layer is taken. At first the contrast level of the layer is stretched [3] using equation (1),

J(x, y) = (Jmax − Jmin) × ((I(x, y) − Imin) / (Imax − Imin))^γ + Jmin   (1)

where I(x,y) and J(x,y) are the gray scale color values before and after contrast stretching, the subscripts 'max' and 'min' denote the maximum and minimum color levels of the corresponding image, and γ is a constant that defines the shape of the stretching curve. Jmax = 255, Jmin = 0 and γ = 1 are taken, as a linear curve is used to stretch the contrast limits of the image. The point (x,y) is measured from the top left corner of the image. The output of contrast stretching is found to be better in the blue layer than in any other layer for fovea detection. After contrast stretching, the fovea center was located by gray level slicing [3] and centroid calculation with equation (2),

Xc = Σ M(x,y)·x / Σ M(x,y),   Yc = Σ M(x,y)·y / Σ M(x,y)   (2)

where

M(x,y) = 1 if Imin < I(x,y) ≤ Imax and xmin ≤ x ≤ xmax, ymin ≤ y ≤ ymax; 0 otherwise

xmax = W/2 + 0.1×W,  xmin = W/2 − 0.1×W
ymax = H/2 + 0.1×H,  ymin = H/2 − 0.1×H

W is the width and H is the height of the image, Imax and Imin are the upper and lower limits of the gray level slice, xmax and xmin are the upper and lower limits in the x-axis, and ymax and ymin are the upper and lower limits in the y-axis. For the captured images W = 1504 and H = 1000; Imax is 80 and Imin is 31 for fovea center location. It is observed that the fovea remains within the limits mentioned above. As the blue layer gives the best output, the fovea center is located from the blue layer. After this processing the image of the fovea looks like Fig. 2 (left).

Locating Optic Nerve Center

As for the fovea center, only one layer is taken for locating the optic nerve center. The contrast level of the layer is stretched with equation (1), taking Jmin = 0, Jmax = 255 and γ = 1. The output is best in the red layer, as the optic nerve is most prominent there. The distance between the fovea center and the optic nerve center does not vary by a large amount, so a circular portion of the image around the fovea center is excluded from the processing. As all the images are of the right eye, the optic nerve cannot lie on the left side of the fovea center; hence, the portion of the image on the left side of the fovea center has also been omitted. This leaves a much smaller portion of the image around the optic nerve for further processing. The optic nerve center is located by equation (2) with the following conditions,

M(x,y) = 1 if Imin < I(x,y) ≤ Imax and x > xfc, R(x,y) > Rref; 0 otherwise

R(x,y) = √((x − xfc)² + (y − yfc)²)

Here Rref is the radius of the circular blocked area, R(x,y) is the radial distance of the point (x,y) from the fovea center, and xfc and yfc are the coordinates of the fovea center. Imax = 255, Imin = 254 and Rref = 250 are taken. The red layer output is the most prominent and the located optic nerve center shows almost no deviation, so the optic nerve center is located from the red layer. After processing, the optic nerve looks like Fig. 2 (right).

Fig. 2 Fovea of the retina (inverted) (left), Optic nerve of the retina (inverted) (right)
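As a concrete illustration, the contrast stretching and centroid steps of equations (1) and (2) can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation: the function names are illustrative, and a random array stands in for the blue layer of a real retina image.

```python
import numpy as np

def contrast_stretch(I, I_min, I_max, J_min=0.0, J_max=255.0, gamma=1.0):
    # Equation (1): map gray levels [I_min, I_max] onto [J_min, J_max].
    t = np.clip((I.astype(float) - I_min) / (I_max - I_min), 0.0, 1.0)
    return (J_max - J_min) * t ** gamma + J_min

def centroid(I, I_min, I_max, x_lo, x_hi, y_lo, y_hi):
    # Equation (2): centroid of the pixels selected by the gray level
    # slice I_min < I <= I_max inside the rectangular search window.
    y, x = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    M = ((I > I_min) & (I <= I_max) &
         (x >= x_lo) & (x <= x_hi) & (y >= y_lo) & (y <= y_hi))
    return (M * x).sum() / M.sum(), (M * y).sum() / M.sum()

# Fovea search window: the central +/-10% of a W x H image (W = 1504, H = 1000).
np.random.seed(0)
H, W = 1000, 1504
blue = np.random.randint(0, 256, (H, W)).astype(np.uint8)  # stand-in blue layer
J = contrast_stretch(blue, blue.min(), blue.max())
xc, yc = centroid(J, 31, 80, W/2 - 0.1*W, W/2 + 0.1*W, H/2 - 0.1*H, H/2 + 0.1*H)
```

Because the mask restricts the centroid sum to the central window, the located center always falls within the ±10% limits the paper imposes.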


2.2 Filtration

Filtration of the image to enhance the blood vessel pattern is done in two steps.

High Pass Filtration

A 2-D frequency domain Gaussian high pass filter is used to reduce the effect of low frequency variation of color [3]. The transfer function of the high pass filter is given in equation (3) [3],

T(ω) = 1 − e^(−ω² / 2β²)   (3)

ω(x, y) = √((x − W/2)² + (y − H/2)²)

where ω is the radial distance in the frequency domain and β is the roll-off length of the filter. For blood vessel pattern enhancement, β = 1.5 was chosen empirically. The contrast was then stretched with Imin = 0, Imax = 255 and γ = 1 using equation (1). After high pass filtration the blood vessel pattern of the green layer becomes the most prominent. The output is like Fig. 3 (left).
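The filter of equation (3) can be sketched in NumPy as below. This is an illustrative implementation, not the authors' code; it assumes the transfer function is applied to an fftshift-centered spectrum so that ω is measured from the center (W/2, H/2).

```python
import numpy as np

def gaussian_highpass(img, beta=1.5):
    # Equation (3): T(w) = 1 - exp(-w^2 / (2 beta^2)), with w the radial
    # distance from the center of the (shifted) frequency plane.
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W]
    w2 = (x - W / 2) ** 2 + (y - H / 2) ** 2
    T = 1.0 - np.exp(-w2 / (2.0 * beta ** 2))
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))  # center the spectrum
    return np.fft.ifft2(np.fft.ifftshift(F * T)).real
```

Since T(0) = 0, the DC component is removed entirely, which is what suppresses the slow, large-scale color variation while leaving the fine vessel edges.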

Single Dimensional Differentiation in X and Y Axis

The output of the high pass filter is differentiated first along the x-axis and then along the y-axis. The absolute values of the two derivatives are added and then divided by two. Mathematically,

Jd(n, m) = (1/2) ( |dI(x, m)/dx| + |dI(n, y)/dy| ),  n = 1, …, W;  m = 1, …, H   (4)

This process increases the contrast of the image (Fig. 3 (right)).

Fig. 3 Green layer after high pass filtration and contrast stretching (left), Green layer after differentiation (color level stretched and inverted) (right)

2.3 Compensation for Translational and Rotational Shift

The distance between the fovea center and the optic nerve center is calculated with equation (5),

Dfc−oc = √((xoc − xfc)² + (yoc − yfc)²)   (5)

where xoc and yoc are the coordinates of the optic nerve center, xfc and yfc are the coordinates of the fovea center, and Dfc−oc is the distance between the fovea center and the optic nerve center. The pattern radius Rpatt is taken as 1.2 times Dfc−oc.

The filtered image is taken and the pattern is cropped circularly with radius Rpatt, taking the fovea center as the center of the pattern. As the fovea and optic nerve centers are known, the fovea center is moved to the center of the image matrix to compensate for translational displacement, and the optic nerve center is then placed at an angle of zero degrees, so that there is no or very small rotational displacement. To further increase the contrast of the image, contrast stretching is performed using equation (1). This completes the preprocessing, and the image is ready for feature extraction and matching (Fig. 4).

Fig. 4 Preprocessed image of retina (inverted)

3. FEATURE EXTRACTION AND MATCHING

To extract features from the preprocessed image, the image is divided into 12 equal angular sections and several radial segments of 10 pixels radial width. The polar grid formed in this process is like the one shown in Fig. 5.

Fig. 5 Polar grid for feature extraction

The color centroid of each segment is calculated using equation (2), where XC = Xcolor_cg(m,n), YC = Ycolor_cg(m,n), M(x,y) = I(x,y), 'm' is the radial segment number and 'n' is the angular section number. Xcolor_cg(m,n) and Ycolor_cg(m,n) are the coordinates of the color centroid of segment (m,n). The quantities rX(m,n) and rY(m,n) are calculated using equation (6),

rX(m,n) = (Xcolor_cg(m,n) − Xmin(m,n)) / (Xmax(m,n) − Xmin(m,n))
rY(m,n) = (Ycolor_cg(m,n) − Ymin(m,n)) / (Ymax(m,n) − Ymin(m,n))   (6)

where Xmax(m,n) and Xmin(m,n) are respectively the x-coordinates of the rightmost and leftmost points of segment (m,n), and Ymax(m,n) and Ymin(m,n) are respectively the y-coordinates of the topmost and bottommost points of segment (m,n). The matching is done by calculating the correlation coefficient for rX(m,n) and rY(m,n) using the data of all segments in the same angular section.
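The feature extraction and matching described above can be sketched as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: angular sections are taken as equal wedges starting at angle zero, and the per-section correlation is the Pearson coefficient over the radial segments.

```python
import numpy as np

def polar_features(img, cx, cy, r_patt, n_ang=12, dr=10):
    # Equation (6): centroid position ratios rX, rY for each segment of a
    # polar grid with n_ang angular sections and radial width dr pixels,
    # using pixel intensity I(x, y) as the centroid weight M(x, y).
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    r = np.hypot(x - cx, y - cy)
    theta = np.mod(np.arctan2(y - cy, x - cx), 2 * np.pi)
    n_rad = int(r_patt // dr)
    rX = np.zeros((n_rad, n_ang))
    rY = np.zeros((n_rad, n_ang))
    for m in range(n_rad):
        for n in range(n_ang):
            seg = ((r >= m * dr) & (r < (m + 1) * dr) &
                   (theta >= n * 2 * np.pi / n_ang) &
                   (theta < (n + 1) * 2 * np.pi / n_ang))
            w, xs, ys = img[seg], x[seg], y[seg]
            xc = (w * xs).sum() / w.sum()
            yc = (w * ys).sum() / w.sum()
            rX[m, n] = (xc - xs.min()) / (xs.max() - xs.min())
            rY[m, n] = (yc - ys.min()) / (ys.max() - ys.min())
    return rX, rY

def section_correlations(fa, fb):
    # One correlation coefficient per angular section, computed over
    # all radial segments of that section.
    return np.array([np.corrcoef(fa[:, n], fb[:, n])[0, 1]
                     for n in range(fa.shape[1])])
```

Comparing a feature matrix against itself yields a correlation of 1.0 in every section, which is the upper bound the matching scores in Section 4 approach for genuine matches.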

4. RESULTS

The mean and standard deviation of the correlation coefficients for all 12 angular sections, for both rX(m,n) and rY(m,n), are given in Tables 1 and 2 below, where M = mean of the correlation coefficients, S = standard deviation of the correlation coefficients, and P(n,m) = mth sample of the nth person.

Table 1: Matching results for rX(m,n)

             P(1,2)       P(2,2)       P(3,2)       P(4,2)       P(5,2)
             M     S      M     S      M     S      M     S      M     S
P(1,1)     .92   .03    .18   .28    .14   .19    .18   .25    .11   .27
P(2,1)     .14   .28    .89   .04    .10   .19    .20   .24   -.01   .17
P(3,1)     .12   .20    .05   .19    .91   .06    .04   .24    .10   .21
P(4,1)     .23   .26    .25   .20    .09   .28    .85   .09    .04   .22
P(5,1)     .14   .23    .04   .25    .11   .20   -.02   .25    .77   .12

Table 2: Matching results for rY(m,n)

             P(1,2)       P(2,2)       P(3,2)       P(4,2)       P(5,2)
             M     S      M     S      M     S      M     S      M     S
P(1,1)     .91   .04    .08   .23    .03   .26    .07   .24    .06   .22
P(2,1)     .05   .25    .88   .03   -.02   .15    .14   .22   -.05   .15
P(3,1)     .07   .21   -.07   .11    .90   .05   -.03   .23    .03   .26
P(4,1)     .17   .22    .18   .13    .01   .26    .86   .06    .01   .22
P(5,1)     .14   .22   -.01   .20    .05   .24   -.07   .25    .76   .11

It is clear from Tables 1 and 2 that the mean of the correlation coefficients is more than 0.7 for every matching case and the standard deviation is less than 0.15. On the other hand, for a mismatch the mean of the correlation coefficients is less than 0.3 (sometimes negative).
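The separation reported above suggests a simple accept/reject rule on the mean section correlation. The sketch below is illustrative only: the 0.5 threshold is an assumption chosen to sit between the reported match (> 0.7) and mismatch (< 0.3) means, not a value given in the paper.

```python
import numpy as np

def is_same_person(corrs_x, corrs_y, threshold=0.5):
    # Accept only if the mean correlation coefficient over the 12 angular
    # sections is above the threshold for both the rX and rY feature sets.
    # The 0.5 threshold is illustrative: the paper reports means above 0.7
    # for genuine matches and below 0.3 for mismatches.
    return bool(np.mean(corrs_x) > threshold and np.mean(corrs_y) > threshold)

# Example with values in the range reported in Tables 1 and 2:
match = is_same_person(np.full(12, 0.92), np.full(12, 0.91))      # genuine pair
mismatch = is_same_person(np.full(12, 0.18), np.full(12, 0.08))   # impostor pair
```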


5. CONCLUSION

An effort has been made to identify a person by retina pattern matching. The results are found to be encouraging. Although the number of images matched is not very high, the results are still indicative of the robustness of the method within tolerable limits. The method is also quite insensitive to translational and rotational displacement.

ACKNOWLEDGMENTS

The authors would like to thank Dr. Niaz Rahman (M.D.), Dr. Farzanur Rehman (M.D.) and Dr. Sarwar Alam (M.D.) for their help with their FUNDUS camera. Thanks to Mr. Ayon Quayum, Mr. Ehsanul Hannan, Mr. Farrukh Imtiaz and Ms. Noor-E-Nazia for providing their retinal images.

REFERENCES

[1] S. Nanavati, M. Thieme and R. Nanavati, Biometrics: Identity Verification in a Networked World, John Wiley & Sons, Inc., 2002.
[2] TOPCON TRC-50EX FUNDUS camera. http://www.topcon.com/med/trc50exframe.html
[3] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson Education Inc., 2003.
[4] P. W. Hallinan, "Recognizing Human Eyes", Geometric Methods in Computer Vision, vol. 1570, pp. 214-226, 1991.
[5] Y. Zhu, T. Tan and Y. Wang, "Iris Image Acquisition System", Chinese Patent Application, No. 99217063.X, 1999.
[6] A. K. Jain, L. Hong, S. Pankanti and R. Bolle, "An Identity-Authentication System Using Fingerprint", Proc. of the IEEE, vol. 85, no. 9, 1997.
[7] A. Jain, A. Ross and S. Prabhakar, "Fingerprint Matching Using Minutiae and Texture Features", Proc. of ICIP, pp. 282-285, 2001.
[8] M. Turk and A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, Mar. 1991.
