A New Approach of Iris Detection and Recognition

Received Feb 26, 2017 Revised Jun 17, 2017 Accepted Sep 11, 2017 This paper proposes an iris detection and recognition model for e-security. The proposed model consists of the following blocks: segmentation and normalization, feature encoding and feature extraction, and classification. In the first phase, histogram equalization and Canny edge detection are used for object detection, and the Hough transform is then utilized to detect the center of the pupil of the iris. In the second phase, Daugman's rubber sheet model and a log-Gabor filter are used for normalization and encoding; a GNS (Global Neighborhood Structure) map is used as the feature extraction method, and the extracted GNS features are finally fed to an SVM (Support Vector Machine) for training and testing. On our test dataset, experimental results demonstrate 92% accuracy on the real part and 86% accuracy on the imaginary part for both eyes. In addition, our proposed model outperforms two other conventional methods, exhibiting higher accuracy.


INTRODUCTION
Biometric recognition is a reliable way to authenticate the identity of an individual. Several stable physical characteristics are used for biometric authentication, such as fingerprints, voice, hand geometry, handwriting, the retina, and the iris. Most of these characteristics require physical contact with a sensing device or some special action from the user [1]. To overcome barriers such as physical contact, automated recognition techniques offer a less invasive alternative; iris recognition is one of them [13]. This noninvasive verification technique for identifying individuals is practical because the pattern of the human iris is unique, distinctive, and stable throughout adult life [1]. Figure 1 illustrates a generalized overview of the human eye, showing the position of each part of the eye.
IJECE ISSN: 2088-8708
From the human eye we can obtain the iris pattern, which is identical over time and reliable for human identification. A number of researchers have previously worked on iris recognition. Most conventional methods address either iris localization or iris pattern recognition. For iris localization, most researchers use Daugman's iris recognition model [2], in which a 256-byte code is generated by quantizing the local phase angle of the filtered image. With a quite different approach, the Wildes system combined normalized correlation goodness-of-match values with Fisher's linear discriminant for pattern identification [3]. Jiali Cui et al. [4] proposed an iris detection algorithm based on principal component analysis, and HyungGu Lee et al. utilized binary features for iris detection [5]. Across these varied methods, the feature vector is not the key element of the detection technique, except in Daugman's model.
A feature-vector approach is more reliable for numerically representing the characteristics of an object, for use in statistical procedures, and for defining those characteristics as vectors of some dimension. Besides this, selecting the optimal features is also a vital criterion.
In our proposed model, we utilize Daugman's model. After feature encoding, DNS and GNS maps [6,7] are used to reduce the size of the feature vector and to extract the feature vector from our tested iris dataset. For machine learning in the recognition phase, a single-class support vector machine is used.
In addition, we have compared the performance of our proposed approach under the same environment with two other approaches: one proposed by Li M. et al. [8] and the other by K. Sathiyaraja et al. [9]. Recent performance comparisons in the area of iris recognition and detection depend on how far accuracy, efficiency, and scalability can be improved [15].
Iris detection and recognition is an important preprocessing step in automatic recognition systems, and a well-designed technique can improve the accuracy of collecting clear iris images and marking noise areas [16]. The paper is organized as follows: Section 2 describes our proposed model in depth; the results of the experiments carried out and their analysis are presented in Section 3; and Section 4 concludes the paper. Figure 2 presents a detailed implementation of our research model, in which we highlight the major portions as separate blocks.

Segmentation
At the beginning of processing of the input image, some steps are required to ensure better performance of the system. We used the histogram equalization technique to adjust the image intensities and enhance contrast, and we applied image adjustment to improve edge detection. For the edge detection step, we used the Canny Edge Detection Algorithm (CEDA) [10], a multi-stage algorithm that detects a wide range of edges in an image: smoothing the image with a Gaussian filter removes noise, the algorithm then finds the intensity gradients of the image, and non-maximum suppression is applied to remove spurious edge responses. As Figure 2 shows, in the segmentation portion (A) the Hough transform is used for pupil and boundary selection, and the midpoint algorithm is used for sclera/iris detection.
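The contrast-enhancement step above can be sketched as follows. This is a minimal pure-NumPy illustration of histogram equalization, not the paper's implementation; the function name `equalize_hist` and the synthetic two-level image are our own, and the subsequent edge detection would typically be delegated to a library routine such as OpenCV's `cv2.Canny`.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image: map each grey
    level through the normalized cumulative histogram so the output
    intensities spread over the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero bin of the CDF
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast two-level image gets stretched to the full intensity range.
img = np.full((8, 8), 100, dtype=np.uint8)
img[4:, :] = 110
out = equalize_hist(img)
```

After equalization, the two grey levels 100 and 110 map to 0 and 255, which is why faint iris boundaries become easier for the edge detector to pick up.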
For iris detection, our system implements an automatic segmentation process using two algorithms. Initially we check for corneal reflection: we take the complement of the given image and remove the dark points (reflection points). We then calculate the first derivatives of the intensity values of the given image and generate an edge map by thresholding the result. The parameters of the circles (center coordinates and radius) are evaluated by voting in Hough space. To detect the pupil, we bias the first derivative in the vertical direction. Finally, we draw a circle by doubling the pupil radius.
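The circular Hough voting step can be sketched as below. This is a simplified pure-NumPy illustration rather than the paper's implementation: the function `hough_circle`, the candidate radii, and the synthetic edge map are all assumptions for the example.

```python
import numpy as np

def hough_circle(edge_map, radii):
    """Circular Hough voting: every edge pixel votes for all candidate
    centres lying at distance r from it, for each candidate radius r.
    The best (radius, centre) is the accumulator cell with the most votes."""
    h, w = edge_map.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0, 2 * np.pi, 120, endpoint=False)
    for ri, r in enumerate(radii):
        cy = np.round(ys[:, None] - r * np.sin(thetas)).astype(int)
        cx = np.round(xs[:, None] - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)   # accumulate votes in place
    ri, y, x = np.unravel_index(acc.argmax(), acc.shape)
    return radii[ri], y, x

# Synthetic edge map: a circle of radius 10 centred at (20, 20).
edges = np.zeros((40, 40), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(20 + 10 * np.sin(t)).astype(int),
      np.round(20 + 10 * np.cos(t)).astype(int)] = True
best_r, best_y, best_x = hough_circle(edges, radii=[8, 10, 12])
```

In practice a library routine such as OpenCV's `cv2.HoughCircles` would replace this loop, but the voting principle is the same.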

Normalization
Daugman's model [2] is used for the normalization of our segmented iris regions. We ensure that all normalized images have the same resolution. Taking the center of the pupil as the reference point, we pass radial vectors through the iris region. The number of data points selected along each radial line defines the radial resolution, while the number of radial lines going around the iris region defines the angular resolution [14].
Because the pupil is not concentric with the iris, a remapping formula based on the angle around the circle is needed to rescale points:

r′ = √α·β ± √(α·β² − α + r1²), with α = Ox² + Oy² and β = cos(π − arctan(Oy/Ox) − θ)

Here, (Ox, Oy) represents the displacement of the center of the pupil relative to the iris center, r′ is the distance between the edges of the pupil and the iris at angle θ, and r1 is the radius of the iris [14]. The mapping first gives the iris region a 'doughnut' form based on the angle; from this 'doughnut'-shaped iris region we construct a 2D array whose horizontal dimension is the angular resolution and whose vertical dimension is the radial resolution [1]. Figure 3 shows the main structural visualization of producing the rectangular area from the circular region in Daugman's rubber sheet model.
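A minimal sketch of the rubber-sheet remapping is given below, simplified to the case where pupil and iris share the same center (the non-concentric case uses the remapping formula discussed above). The function `rubber_sheet` and its default resolutions are illustrative choices, not the authors' code.

```python
import numpy as np

def rubber_sheet(img, center, pupil_r, iris_r, radial_res=20, angular_res=240):
    """Sample the annulus between the pupil and iris boundaries onto a fixed
    radial_res x angular_res grid (Daugman's rubber sheet, simplified to
    concentric pupil and iris circles)."""
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    rs = np.linspace(0, 1, radial_res)[:, None]
    radius = pupil_r + rs * (iris_r - pupil_r)   # interpolate pupil -> iris edge
    xs = np.clip(np.round(cx + radius * np.cos(thetas)).astype(int),
                 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(thetas)).astype(int),
                 0, img.shape[0] - 1)
    return img[ys, xs]

# Demo: a radial-gradient "eye" whose pixel value equals distance from centre,
# so each row of the unwrapped strip should be roughly constant.
yy, xx = np.mgrid[0:80, 0:80]
img = np.hypot(xx - 40, yy - 40)
strip = rubber_sheet(img, (40, 40), pupil_r=10, iris_r=30)
```

The unwrapped strip is the 2D array described above: rows index radial position between the two boundaries, columns index the angle.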

Feature Encoding
The template matrix size is set by doubling the number of columns of the normalized iris image while keeping the number of rows the same; the reason is that both the real and the imaginary values participate in our template. The normalized iris pattern is then convolved with 1D log-Gabor wavelets: first, 1D signals are generated from the 2D normalized iris pattern, and the filter is then applied to those 1D signals. In the log-Gabor equation we used the following values: f0 is set to 18, which represents a scale-4 Gabor wavelet, and from experiment we set the ratio of sigma over frequency to 0.5.

G(f) = exp(−(log(f/f0))² / (2(log(σ/f0))²)) (4)
where G is the log-Gabor filter function, and f0 and σ are the parameters of the filter; f0 gives the center frequency of the filter [12]. The log-Gabor filter returns a matrix of complex-valued elements the size of the normalized iris image. Two new matrices are then created from the real part and the imaginary part of the returned matrix, after which the raw data are converted to a pseudo-polar coordinate system. The values of the real-part matrix and the imaginary-part matrix are then converted into binary values. Finally, by merging these two matrices we obtain the template of a person; the data are arranged so that the odd columns contain the real-part values and the even columns hold the imaginary-part values. We also calculated absolute values. Figure 4 summarizes the phase quantization process, visualizing how the real and imaginary responses of the image are collected after applying the log-Gabor filter.
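The encoding of one row can be sketched as follows. This is an illustrative NumPy version, not the paper's code: we interpret the stated f0 = 18 as a wavelength of 18 pixels (so the center frequency is 1/18), and the helper name `log_gabor_encode` is our own.

```python
import numpy as np

def log_gabor_encode(row, wavelength=18, sigma_on_f=0.5):
    """Filter one 1-D signal with a log-Gabor filter built in the frequency
    domain, then quantize the phase of the complex response: one bit for the
    sign of the real part (odd columns) and one for the sign of the imaginary
    part (even columns), matching the interleaved template layout."""
    n = len(row)
    freqs = np.fft.fftfreq(n)
    f0 = 1.0 / wavelength                      # centre frequency of the filter
    with np.errstate(divide='ignore'):
        gabor = np.exp(-np.log(np.abs(freqs) / f0) ** 2
                       / (2 * np.log(sigma_on_f) ** 2))
    gabor[freqs <= 0] = 0                      # no DC; keep positive side only
    response = np.fft.ifft(np.fft.fft(row - row.mean()) * gabor)
    bits = np.empty(2 * n, dtype=np.uint8)
    bits[0::2] = response.real > 0             # real-part bits
    bits[1::2] = response.imag > 0             # imaginary-part bits
    return bits

# A sinusoid whose period matches the filter wavelength (5 full cycles).
row = np.sin(2 * np.pi * np.arange(90) / 18)
bits = log_gabor_encode(row)
```

Zeroing the negative frequencies makes the filtered result complex (analytic), which is what supplies the separate real and imaginary responses that the template interleaves.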

Texture Feature Extraction
For feature extraction, we construct DNS [6] maps from the Gabor-filtered image obtained in the feature encoding phase, according to the statistical parameters of Table 1, which lists the exact parameters of the GNS and DNS maps. We then calculate GNS feature vectors by averaging the DNS [6,7] values of the filtered image. To select the most significant features from the GNS map, which exhibits spatial textures, we take only the features lying on concentric circles of various radii around the center of the map. In our experiment, 8 and 16 features from the first two innermost circles and 24 uniformly spaced angular features from each of the other 8 circles are used to construct the feature vector; therefore, the feature vector has 216 (= 8 + 16 + 8 × 24) dimensions. Figure 6 exhibits the steps of GNS and DNS map extraction from the Gabor-filtered image.
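The concentric-circle sampling that produces the 216-dimensional vector can be sketched as below. This is a geometric illustration only: `ring_features`, the radii spacing, and the random stand-in for a GNS map are our assumptions, not the construction of [6,7].

```python
import numpy as np

def ring_features(feat_map, counts=(8, 16) + (24,) * 8):
    """Sample a feature vector on concentric circles around the map centre:
    8 points on the innermost circle, 16 on the next, and 24 uniformly spaced
    angular points on each of the remaining 8 circles -> 8 + 16 + 8*24 = 216."""
    h, w = feat_map.shape
    cy, cx = h // 2, w // 2
    max_r = min(cy, cx) - 1
    radii = np.linspace(max_r / len(counts), max_r, len(counts))
    feats = []
    for r, n in zip(radii, counts):
        ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
        ys = np.clip(np.round(cy + r * np.sin(ang)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(ang)).astype(int), 0, w - 1)
        feats.append(feat_map[ys, xs])
    return np.concatenate(feats)

vec = ring_features(np.random.rand(64, 64))   # 216-dimensional feature vector
```

Sampling on rings rather than the full map is what reduces the feature dimensionality while still capturing the spatial texture around the center.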

Training
For training and testing we utilize a single-class SVM with a Gaussian radial basis kernel function [6,7]. The Gaussian radial basis kernel function is represented as

K(x, y) = exp(−‖x − y‖² / (2σ²))

where K(x, y) is the kernel function, x and y are the input feature vectors, and σ is a parameter set by the user to determine the effective width of the basis kernel function [14].
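The kernel itself is straightforward to compute; a minimal NumPy sketch is shown below (the function name and demo vectors are our own). For the actual single-class training one would typically use a library implementation such as scikit-learn's `OneClassSVM` with an RBF kernel.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian radial basis kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    sigma controls the effective width of the kernel: larger sigma means
    distant feature vectors still receive a similarity close to 1."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

# Identical vectors give similarity 1; distance decreases the similarity.
a, b = np.zeros(3), np.ones(3)
k_same, k_diff = rbf_kernel(a, a), rbf_kernel(a, b)
```

In this setting x and y would be the 216-dimensional GNS feature vectors described in the previous section.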

RESULTS AND ANALYSIS
To validate the proposed model, the CASIA-Iris-Interval V3.0 dataset [11] has been used; the same dataset was used for the comparison. This dataset consists of six subsets, from which we considered 200 images in total, using 70% of the data for training and 30% for testing for each eye of each individual.
Figure 5. The output blocks of the whole process to get the GNS for the right eye
Figure 5 demonstrates the output blocks for each step of our proposed model for the right eye, from detection to GNS. After detecting the eye, we eliminate the reflection and then detect the inner circle. From the rubber-sheet image of the eye we obtain the imaginary and real responses, from which the GNS map is calculated. According to Table 2, the experimental results show that our proposed model gives 92% accuracy for combined real-part recognition and 86% accuracy for the combined imaginary part. In each case, the real part always exhibits accuracy above 86% while the imaginary part always exceeds 78%, which amounts to a good level of accuracy. We have compared our proposed model with two other conventional approaches: one by Li M. et al. [17] and the other by K. Sathiyaraja et al.
[18]. In Figure 6 we can see that for both the combined real and imaginary parts, our proposed model performs better than the two other stated models: Li M. et al. [17] as algorithm 1 and K. Sathiyaraja et al. [18] as algorithm 2. It is clearly visible that while algorithms 1 and 2 give 86% and 84% on the real part, ours gives 92%, which is significant. Besides this, our imaginary part gives 86% recognition accuracy while algorithms 1 and 2 give 82% and 81%.

CONCLUSION
In our proposed model, we used the Hough transform for segmentation and Daugman's rubber sheet model for normalization. To reduce the dimensionality of the feature vector we used GNS and DNS mapping, which yields a significant improvement in the experimental results. Overall, we obtained 92% accuracy on the real part and 86% accuracy on the imaginary part for both eyes, along with a reduced feature vector of 216 dimensions.