EV-SIFT: An Extended Scale-Invariant Feature for Plastic Surgery Face Recognition

Received Aug 11, 2016; Revised May 15, 2017; Accepted May 30, 2017

This paper presents a new technique called Entropy-based SIFT (EV-SIFT) for accurate face recognition after plastic surgery. The feature extracts the key points and the volume of the scale-space structure, for which the information rate is determined. Because entropy is a higher-order statistical feature, the descriptor is least affected by uncertain variations in the face. The EV-SIFT features are applied to a Support Vector Machine (SVM) for classification. The normal SIFT feature extracts key points based on the contrast of the image, and the V-SIFT feature extracts key points based on the volume of the structure; the EV-SIFT method captures both contrast and volume information. EV-SIFT therefore performs better than PCA-, SIFT- and V-SIFT-based feature extraction.


INTRODUCTION
Human faces are multi-dimensional and complex visual stimuli that carry abundant information about the uniqueness of a person. Face recognition has taken a new turn in the current era of image analysis and computer vision, with applications in surveillance, image retrieval, human-computer interaction and biometric authentication. A face recognition system normally requires no touch or interaction from the person being recognized, which is one of its advantages over other recognition methods. Face recognition operates in either a verification phase [1] or an identification phase [2]; in the verification phase, a match between two faces is resolved. Many methods are available for face recognition [3][4][5][6][7][8], but recognition accuracy is not always high, owing to varying degrees of illumination, facial expression, pose, aging, low-resolution input images or facial marks [9], [10]. Various researchers have implemented methodologies to tackle the effects of pose [11], illumination [12], low resolution [13], aging [14] or a combination of these [15]. While such uncertainties can be fairly well handled, recognition becomes considerably harder for faces that have undergone plastic surgery. The failure of recognition in plastic surgery faces is due to the loss of, or variation in, face components, skin texture, global face appearance and the geometric relationships among the facial features [16][17][18]. Cost-efficient and advanced means of plastic surgery have attracted people all around the globe, yet only a few research contributions have been reported in the literature to address the problem of recognizing plastic surgery faces.
These include recognition using local region analysis [20]. A review has also been done in [21] to illustrate the use of multimodal features in recognizing plastic surgery faces. This paper makes three main contributions: the first is the detection of the exact plastic surgery face using the novel entropy-based SIFT feature; the second is the accurate recognition of the true surgery portions; the third is an analysis of face recognition performance before and after plastic surgery.

LITERATURE REVIEW

2.1. Related Works
Maria De Marsico et al. [22] have achieved accurate recognition of faces that have undergone plastic surgery by applying region-based approaches on a multimodal supervised collaborative architecture, termed Split Face Architecture (SFA). They validated the superiority of their methodology by comparing the supervised SFA with traditional PCA and FDA, with LBP in the multiscale, rotation-invariant version with uniform patterns, with FARO (Face Recognition against Occlusions and Expression Variations), and with FACE (Face Analysis for Commercial Entities).
Naman Kohli et al. [23] have put forth the Multiple Projective Dictionary Learning framework (MPDL), which avoids the need to compute costly norms when recognizing normal faces, even after they have been modified by plastic surgery. Multiple projective dictionaries together with compact binary face descriptors are used to learn local and global plastic surgery face representations, facilitating the discrimination of plastic surgery faces from the original ones. Testing on the plastic surgery database yielded an accuracy of about 97.96%.
Chollette C Chude-Olisah et al. [24] have addressed the degradation in face recognition performance, finding that their approach outperformed previously available plastic surgery face recognition approaches, irrespective of changes in illumination, expression and the facial modifications resulting from plastic surgery. Hamid Ouanan [25] has introduced a Gabor-HOG feature-based face recognition scheme, which uses HOG instead of DOG in SIFT. M. I. Ouloul [26] introduced an efficient face recognition method using the SIFT descriptor on RGB-D images produced by the Kinect; such cameras cost little and can be used in any environment and under any circumstances. Himanshu S. Bhatt et al. [27] have introduced a multi-objective evolutionary granular algorithm that supports the matching of images taken before and after plastic surgery; this algorithm first generates overlapping face granules at three levels of granularity. Plastic surgery face recognition has undergone various developments in the recent past, and the research contributions reported in the literature lie in the feature extraction phase, in the classification phase, or in both.

FEATURE EXTRACTION USING EV-SIFT
The scale space of an image I(x, y) is obtained by convolving it with a Gaussian kernel G(x, y, σ), as in Equation (1):

L(x, y, σ) = G(x, y, σ) * I(x, y)    (1)

Candidate key points are detected as extrema of the Difference-of-Gaussian (DOG) function D(x, y, σ) = L(x, y, kσ) − L(x, y, σ). Each sample point of the image is compared with its 26 neighbours in the DOG scale space and 2D image space in order to find all the extreme points: the target point is compared with 8 neighbours in the current image and with 18 neighbours in the scales above and below. True scale invariance arises because the output of the convolution of the image with G(x, y, σ) is at an extremum when the scale of the image structure is close to the scale of the normalized Laplacian function. Points that are extrema in both the spatial and the scale spaces are selected, so that the blob structure is detected at its optimum scale. The key points are then determined at the scale-space extrema of the difference-of-Gaussian function, through the following steps.

a. Allocate the Orientation and Gradient Modulus to each Key Point
The parameters of a key point depend on the distribution of the gradient orientations of the image around the key point. The orientation and gradient modulus of each key point are therefore calculated, which registers invariance to image rotation; the gradient magnitude and orientation are calculated for each image sample.

b. Volume-based Feature Descriptor
The V-SIFT feature is the same as SIFT except that it uses the volume of the structure instead of the contrast used in the normal SIFT feature. In the normal SIFT feature, the scale-space extrema are detected in the first step; since the Laplacian is a second-order derivative, it is very sensitive to noise. V-SIFT [28], [29] was introduced to overcome this limitation and to remove unreliable key points: the structure has a unique maximum over scales, the value in Equation (4) is calculated at each key-point location, and if it falls below a threshold the corresponding key point is removed.
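The 26-neighbour extremum test described above can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the parameter names (`sigma`, `k`, `n_scales`) and their default values are assumptions, and `scipy.ndimage.gaussian_filter` stands in for the Gaussian convolution of Equation (1).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigma=1.6, k=2 ** 0.5, n_scales=4):
    """Find candidate key points as 26-neighbour extrema in DoG scale space.

    A hedged sketch of the detection step: build the Gaussian scale space
    L(x, y, sigma), take successive differences to get D, and keep points
    that are extrema over their full 3x3x3 neighbourhood."""
    # Gaussian scale space L(x, y, sigma), Equation (1).
    L = [gaussian_filter(image.astype(float), sigma * k ** i)
         for i in range(n_scales)]
    # Difference-of-Gaussian: D = L(k*sigma) - L(sigma).
    D = np.stack([L[i + 1] - L[i] for i in range(n_scales - 1)])
    extrema = []
    for s in range(1, D.shape[0] - 1):              # interior scales only
        for y in range(1, D.shape[1] - 1):
            for x in range(1, D.shape[2] - 1):
                patch = D[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = D[s, y, x]
                # Extremum over 26 neighbours: 8 in-plane + 9 above + 9 below.
                if (v == patch.max() or v == patch.min()) and abs(v) > 1e-6:
                    extrema.append((s, y, x))
    return extrema
```

In a full pipeline these candidates would then be refined and filtered; here the sketch stops at raw extremum detection.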
In the normal SIFT algorithm, key points are removed based on the contrast value at the key-point location. In V-SIFT, removal is instead based on the volume of the structure, which is estimated as in Equation (5), where σ is the scale of the corresponding key point. Deriving this expression as in Equations (6) and (7) gives the extremum value, which obeys the rule stated in Equation (14). Entropy is a measure of the unpredictability of the information content of an image; it is a statistical measure of randomness that can be used to characterize the texture of the input image. Since entropy is a higher-order statistical feature, it is least affected by uncertain variations in the face. The entropy-based feature descriptor is computed in the following steps.
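The volume-based key-point filtering can be sketched as below. The volume formula used here, |response| · σ², is only an assumed stand-in for the paper's Equation (5), which is not reproduced in this text; the function name and the (response, sigma) key-point representation are likewise illustrative.

```python
def filter_keypoints_by_volume(keypoints, threshold):
    """Remove unreliable key points whose structure volume falls below a
    threshold, as V-SIFT does.

    `keypoints` is a list of (response, sigma) pairs, where `response` is
    the DoG value at the key point and `sigma` its scale. The volume
    proxy |response| * sigma**2 is an assumption standing in for the
    paper's Equation (5)."""
    kept = []
    for response, sigma in keypoints:
        volume = abs(response) * sigma ** 2   # assumed proxy for Eq. (5)
        if volume >= threshold:
            kept.append((response, sigma))
    return kept
```

A strong, large-scale response survives, while a weak small-scale one is discarded: `filter_keypoints_by_volume([(0.5, 2.0), (0.01, 1.0)], 0.1)` keeps only the first pair.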
Step 1: The volume of the image is calculated as in the V-SIFT formulation; it takes the form of the matrix given in Equation (9).
Step 2: The information source is assumed stationary and memoryless. The volume of the structure in the EV-SIFT analysis, which serves as the probability function, is given in Equation (10).
Step 3: The entropy is calculated from the volume of the structure, as given in Equation (11). If E(V) is high, the volume comes from a uniform distribution; if E(V) is low, the volume comes from a varied distribution. The final EV-SIFT descriptor of the whole database is denoted F_i^D. The gradient magnitudes and orientations, together with the volume and entropy descriptors, are sampled around the corresponding key-point location, using the scale of the key point to select the level of Gaussian blur. An 8*8 neighbour window centred on the key point is first sampled and then divided into 4*4 child windows. For each child window, a gradient-orientation histogram with eight bins is calculated. Each descriptor therefore contains a 4*4 array of histograms around the key point, each with 8 bins, giving a 4*4*8 = 128-dimensional feature vector. The structure of the proposed recognition system is explained in Figure 1.
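The entropy computation of Steps 2 and 3 can be sketched as follows, under the assumption that the structure volumes are normalized into a probability distribution (Equation 10) before the Shannon entropy (Equation 11) is taken; the function name is illustrative.

```python
import numpy as np

def volume_entropy(volumes):
    """Shannon entropy E(V) of the normalized structure volumes.

    Sketch of Equations (10)-(11): the volumes are normalized to a
    probability function p, and E(V) = -sum(p * log2(p))."""
    v = np.asarray(volumes, dtype=float)
    p = v / v.sum()                         # Equation (10): probability function
    p = p[p > 0]                            # ignore zero-probability entries
    return float(-np.sum(p * np.log2(p)))   # Equation (11)
```

As the text states, a uniform set of volumes gives high entropy (for four equal volumes, E(V) = log2 4 = 2 bits), while a skewed set gives lower entropy.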

RESULT AND DISCUSSION

4.1. Experimental Setup
The face recognition experiment is conducted using pre-surgery and post-surgery faces. The database of pre-surgery and post-surgery faces is downloaded from the URL: http://www.locateadoc.com/pictures/. It contains the pre-surgery and post-surgery faces of 515 persons. Sample images from the database are shown in Figure 2.

Recognizing Surgery Portions
The plastic surgery portions are recognized by calculating the SIFT features for the pre-surgery and post-surgery faces, which yields many key points. The pre-surgery and post-surgery faces are then matched through a stitching process, and the points that fail to match are taken as the plastic surgery portions of the face. The recognized portion of a plastic surgery face is shown in Figure 3.
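The idea of flagging unmatched key points can be sketched as below. This is a hedged stand-in for the paper's stitching step: plain nearest-neighbour descriptor matching with a Lowe-style ratio test replaces it, and the function name and `ratio` default are assumptions.

```python
import numpy as np

def unmatched_keypoints(desc_pre, desc_post, ratio=0.8):
    """Return indices of pre-surgery key points with no reliable match
    in the post-surgery descriptor set.

    A match is accepted only when the nearest post-surgery descriptor is
    clearly closer than the second nearest (ratio test); key points that
    fail the test mark candidate plastic-surgery regions."""
    unmatched = []
    for i, d in enumerate(desc_pre):
        dists = np.linalg.norm(desc_post - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best > ratio * second:       # no clear winner: unmatched
            unmatched.append(i)
    return unmatched
```

For example, a pre-surgery descriptor equidistant from all post-surgery descriptors fails the ratio test and is reported as unmatched, while one with an almost exact counterpart passes.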

Statistical Analysis
The statistical analysis of plastic surgery face recognition compares features such as Principal Component Analysis (PCA), SIFT, Volume SIFT (V-SIFT) and the proposed EV-SIFT. The performance measures are analysed for all these features. The analysis of the classifiers, namely linear SVM, quadratic SVM, RBF SVM and MLP SVM, before and after plastic surgery is illustrated in Table 1, Table 2, Table 3 and Table 4; the ranking of each measure is given in brackets. Sensitivity measures the ability of the method to correctly identify positive samples, while specificity measures its ability to correctly identify negative samples; precision gives the ratio of true positives to all positive results. In Table 1 and Table 2 (linear SVM and quadratic SVM), accuracy is better for PCA, while sensitivity and specificity are better for the EV-SIFT feature on plastic surgery faces; overall, most measures favour EV-SIFT. Computing the rank over all measures, the final rank is best for the EV-SIFT feature compared with the other feature extraction methods under linear SVM and quadratic SVM. Table 3 describes the analysis for RBF SVM: here all the measures are better for PCA, SIFT and V-SIFT, while the proposed EV-SIFT feature shows lower performance, and by rank PCA is the best method under RBF SVM. In Table 4, all the measures show better performance with the EV-SIFT feature. Examining the overall analysis, it is clear that EV-SIFT feature extraction is the better choice for plastic surgery face recognition.
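The measures reported in Tables 1-4 follow the standard confusion-matrix definitions, which can be written out as a small sketch (the function name is illustrative):

```python
def classification_measures(tp, tn, fp, fn):
    """Measures used in Tables 1-4, from confusion-matrix counts:
    sensitivity = TP / (TP + FN)   (true-positive rate)
    specificity = TN / (TN + FP)   (true-negative rate)
    precision   = TP / (TP + FP)
    accuracy    = (TP + TN) / total"""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```

This makes the trade-off visible: with 40 true positives, 50 true negatives, 10 false positives and no false negatives, sensitivity is perfect (1.0) while precision is only 0.8, so a method can lead on one measure and trail on another, as observed across the tables.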

Impact of SIFT dimension
The SVM classifier has two main parameters, radius and Enlarge Factor (EF); these are varied and the performance analysed. The impact of the SIFT feature is shown in Figure 4. The radius takes the values 2.5, 5, 10, 15 and 20, while the corresponding EF takes the values 0.5, 1, 1.3, 1.5 and 1.7.
In Figure 4(a), accuracy is best for radius 2.5 and EF 0.5. Specificity is best for radius 2.5 and EF 1, as shown in Figure 4(b). In Figure 4(c), the specificity range is best for radius 10 and EF 1.7, while the f1_score is highest for radius 10.

Sensitivity to Plastic Surgery Faces
The graphical representation of the ranking of the different SVM kernels (linear, quadratic, RBF and MLP) with and without plastic surgery is shown in Figure 5. In general, pre-surgery and post-surgery faces may differ greatly in their features. In Figure 5, the rank for pre-surgery faces is high, whereas the rank for post-surgery faces is low. It is therefore clear that the proposed EV-SIFT feature is highly sensitive to plastic surgery faces.

CONCLUSION
This paper has presented a face recognition technique that uses features derived with the EV-SIFT approach. The system was evaluated on a plastic surgery image database of 515 subjects, each with a pre-surgery and a post-surgery face image. The proposed EV-SIFT approach captures both the volume of the structure and the contrast of the image, and it removes unreliable key points effectively. The extracted features were applied to an SVM classifier for recognition. The performance measures were analysed across different SVM kernels and compared with existing features, and the EV-SIFT feature produced the best performance. The SVM classifier parameters, radius and Enlarge Factor, were also varied; the analysis showed that the best-performing values of radius and EF varied rather than being fixed, so proper tuning is needed to obtain fixed values. In future work, this tuning process will be analysed to achieve accurate recognition of plastic surgery faces.