=Paper=
{{Paper
|id=None
|storemode=property
|title=Comparison of Genetic-based Feature Extraction Methods for Facial Recognition
|pdfUrl=https://ceur-ws.org/Vol-710/paper17.pdf
|volume=Vol-710
|dblpUrl=https://dblp.org/rec/conf/maics/SheltonDBSAPAWR11
}}
==Comparison of Genetic-based Feature Extraction Methods for Facial Recognition==
#Joseph Shelton1, #Gerry Dozier2, #Kelvin Bryant3, #Lasanio Smalls4, #Joshua Adams5,
#Khary Popplewell6, #Tamirat Abegaz7, ^Damon L. Woodard8, ~Karl Ricanek9

#Computer Science Department, North Carolina Agricultural and Technical State University,
1601 East Market St., Greensboro, NC 27411, United States of America
^314 McAdams Hall, Clemson University, Clemson, SC 29634-0974, United States of America
~CIS Building, Room 2010, 601 South College Road, Wilmington, NC 28403

jashelt1@ncat.edu1, gvdozier@ncat.edu2, ksbryant@ncat.edu3, lrsmall@ncat.edu4, jcadams2@ncat.edu5,
ktpopple@ncat.edu6, tamirat@programmer.net7, woodard@clemson.edu8, ricanekk@uncw.edu9
Abstract

In previous research, Shelton et al. presented a genetic-based method for evolving feature extractors for facial recognition. The technique evolved feature extractors that consisted of non-uniform, overlapping patches and did not cover the entire image. In this paper, we compare the use of non-uniform, overlapping patches with uniform, overlapping patches. Our results show a statistically significant performance improvement over the technique presented in Shelton's previous paper.

Introduction

Biometric recognition is the science of identifying an individual or group of individuals based on physical/behavioral characteristics or traits (Ross, 2007). One of the most popular biometric modalities is the face (Li and Jain, 2005; Ahonen, Hadid and Pietikainen, 2006; Matas et al., 2002), and perhaps one of the more widely used techniques for extracting features from facial images for the purpose of biometric recognition is the Local Binary Pattern (LBP) method (Ojala and Pietikainen, 2002). Shelton et al. introduced a genetic-based method, GEFE (Genetic & Evolutionary Feature Extraction), for evolving LBP feature extractors that consist of non-uniform, unevenly distributed patches that do not cover the entire image. The proposed method proved superior to the traditional LBP, which uses uniform, evenly distributed, non-overlapping patches that cover the entire image. In this paper, we introduce an alternative GEFE approach that is similar to the original GEFE approach, with the exception that the unevenly distributed, overlapping patches are of uniform size.

The original GEFE method was theorized to have a bias due to the selection process of the coordinates for a patch. The coordinates represented the left corner of a patch, which increased the possibility of the patch dimensions exceeding the boundaries of a facial image. Any patch that exceeded the bounds was shifted until the whole patch was within the image space.

Because the possible coordinates could be anywhere on an image, the probability of selecting a patch that would end up in the lower right-hand corner was greater than for any other location on the image. The method used in this research seeks to eliminate any potential bias by representing the coordinates of a patch as its center. This creates a greater probability of a patch being placed on all corners of an image.

The remainder of this paper is as follows: we introduce the concept of GEFE for evolving LBP feature extractors composed of non-uniform and uniform patches, and describe the genetic algorithm used in this research. We then discuss our experimental setup, show our results, and finally present our conclusions and future work.

GEFE using Non-Uniform and Uniform Patch Sizes

LBP is a texture operator that can be used to extract texture information in the form of image features. The images used in this work are gray-scale, and all are facial images. For the standard LBP technique (Ojala and Pietikainen, 2002), a number of uniform, non-overlapping, and evenly distributed patches are used to cover an image. Texture features are then extracted from each patch area of the image.
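As a minimal sketch of this evenly distributed layout (an illustration, not the authors' code), the snippet below splits a gray-scale image into a grid of equally sized, non-overlapping patches; the 5 x 5 grid is an assumed value chosen only for the example.

import numpy as np

def uniform_patch_grid(image, rows, cols):
    # Split a 2-D gray-scale image into rows x cols equally sized, non-overlapping patches.
    # Pixels left over when the dimensions do not divide evenly are simply ignored here.
    h, w = image.shape
    ph, pw = h // rows, w // cols
    return [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(rows) for c in range(cols)]

# Example: a blank 100 x 127 image (the image size used later in the paper), 5 x 5 grid.
patches = uniform_patch_grid(np.zeros((100, 127), dtype=np.uint8), 5, 5)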
Shelton et al. developed a genetic-based method for evolving LBP feature extractors that were composed of patches that were non-uniform, overlapping, unevenly distributed, and did not cover the entire image. Figure 1 provides an example of the patch layout of the standard LBP method (Figure 1a) and the original GEFE method (Figure 1b).

Figure 1: Gray Scale Images fitted with patches (Figure 1a: Standard LBP; Figure 1b: GEFE)
Given a layout of patches for an LBP feature extractor, the LBP method is applied to each interior pixel within the patch. Each pixel has a value between 0 and 255 that represents the intensity of its gray level. When LBP is applied to the pixels of a patch, a histogram is created that represents the unique texture pattern for that particular patch. The histograms of every patch on the image are then concatenated to form a unique set of features that represents the image.

Figure 2 provides an example of the LBP method being applied to a particular center pixel with an intensity value of 120. The center pixel is surrounded by 8 neighboring pixels, shown in the 1st sample pattern in Figure 2. The differences are calculated and shown in the 2nd pattern.

Upon inspection of the 3rd pattern, one can see a series of zeros and ones. This pattern is created by taking the difference between each neighbor pixel and the center pixel. If the difference is negative, then the conversion value will be zero. If the difference is zero or greater, then the conversion value for that neighbor will be one. The third pattern is then 'unwrapped' to form a binary string, and the string is converted to an integer number, which is the LBP value for that center pixel. For the center value in Figure 2, the LBP value is 14 due to the sequence 00001110. Where the binary string starts its unwrapping depends on the user, but this research starts the unwrapping process at the leftmost corner.

Figure 2: The LBP Method
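A minimal sketch of this per-pixel computation (not the authors' implementation): the 8 neighbors are thresholded against the center pixel and the resulting bits are read off as a binary string. The clockwise ordering starting at the upper-left neighbor is an assumption made for illustration; the paper only states that unwrapping begins at the leftmost corner.

def lbp_code(image, y, x):
    # LBP value for the interior pixel at (y, x); image is indexable as image[row][col].
    center = image[y][x]
    neighbors = [image[y - 1][x - 1], image[y - 1][x], image[y - 1][x + 1],
                 image[y][x + 1], image[y + 1][x + 1], image[y + 1][x],
                 image[y + 1][x - 1], image[y][x - 1]]                # assumed clockwise order
    bits = ''.join('1' if n >= center else '0' for n in neighbors)   # 1 when difference >= 0
    return int(bits, 2)                                               # e.g. '00001110' -> 14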
The number of possible binary patterns using 8 neighbors is 256. Each binary pattern is classified as either uniform or non-uniform. A uniform pattern is a bit string that has two or fewer bit changes (including the wrap-around from the last bit to the first bit). A non-uniform pattern is a bit string that has more than two bit changes (once again, including the wrap-around). As shown in Figure 3, the uniform pattern has one change between the fourth and fifth bits, and one change between the eighth and first bits. Since the string wraps around, the last and first bits in the string must be compared. The non-uniform pattern has changes between the second and third bits, the third and fourth bits, the fifth and sixth bits, and the seventh and eighth bits. Out of the total 256 possible patterns, 58 are uniform. Two of the 58 patterns are 00000000 and 11111111.

Figure 3: Uniform VS Non-Uniform

A subhistogram is created for every patch of an image, and is composed of 59 bins. Bins 1 to 58 correspond to the 58 possible uniform patterns using 8 neighbors. The 59th bin holds the count of all non-uniform patterns found in the patch. The work of Ojala and Pietikainen (2002) suggests that the most discriminating features of a facial image contain predominantly uniform patterns. The subhistograms associated with each patch are then concatenated to form a histogram representing the features extracted by the LBP method.
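A hedged sketch of the pattern classification and the 59-bin subhistogram (not the authors' code): a code is uniform when it has at most two bit changes including the wrap-around, and each patch histogram reserves one bin per uniform code plus a final catch-all bin. Mapping uniform codes to bins in ascending numeric order is an assumption, since the paper does not specify the bin ordering.

def bit_changes(code):
    # Number of 0/1 transitions in an 8-bit pattern, including the wrap-around.
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

UNIFORM_CODES = [c for c in range(256) if bit_changes(c) <= 2]
assert len(UNIFORM_CODES) == 58                       # 58 uniform patterns, as stated above
BIN_OF = {c: i for i, c in enumerate(UNIFORM_CODES)}  # assumed ordering of bins 1-58

def patch_subhistogram(lbp_codes):
    # 59-bin subhistogram for one patch: one bin per uniform code, last bin for non-uniform codes.
    hist = [0] * 59
    for code in lbp_codes:
        hist[BIN_OF.get(code, 58)] += 1
    return hist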
As in Shelton et al.'s previous research, this paper uses a steady-state GA (SSGA) to evolve a population of feature extractors (FEs) (Davis, 1991; Fogel, 2000). In previous research, Shelton et al. evolved FEs consisting of non-uniform patches (not to be confused with the uniformity/non-uniformity of LBP patterns presented earlier).

Non-Uniform GEFE

A candidate FE, fe_i, is a 6-tuple, <X_i, Y_i, W_i, H_i, M_i, fit_i>, where X_i = {x_i,0, x_i,1, ..., x_i,n-1} represents the x-coordinates of the center pixel of the n possible patches, Y_i = {y_i,0, y_i,1, ..., y_i,n-1} represents the y-coordinates of the center pixel of the possible patches, W_i = {w_i,0, w_i,1, ..., w_i,n-1} represents the widths of the n possible patches, H_i = {h_i,0, h_i,1, ..., h_i,n-1} represents the heights, M_i = {m_i,0, m_i,1, ..., m_i,n-1} represents the mask for each patch, and fit_i represents the fitness of fe_i. The mask is a vector that determines which patches are used when building the feature vector for an image. The purpose of masking out patches is to reduce the number of features that need to be compared when measuring similarity between images. The FE can create patches with non-uniform sizes, meaning that the widths and heights of each of the n patches can be unique.

Given a probe set and a gallery set, the fitness is the number of errors made when comparing each probe to the gallery, multiplied by 10, plus the fraction of the n patches from which features were extracted, as shown in Equation (1):

fit_i = NumErrors \times 10 + (\sum_{k=0}^{n-1} m_{i,k}) / n     (1)
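Read literally, Equation (1) can be computed as in the short sketch below (a minimal illustration, not the authors' code; the error count is assumed to have been obtained elsewhere by matching each probe against the gallery).

def gefe_fitness(num_errors, mask):
    # Equation (1): fitness = NumErrors * 10 + (fraction of the n candidate patches that are unmasked).
    # num_errors -- recognition errors made when comparing each probe to the gallery
    # mask       -- list of 0/1 values m_{i,k}, one per candidate patch
    n = len(mask)
    return num_errors * 10 + sum(mask) / n

# Example: 2 errors with 12 of 30 patches active -> 2 * 10 + 12/30 = 20.4
print(gefe_fitness(2, [1] * 12 + [0] * 18))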
Uniform GEFE

Candidate FEs consisting of patches with a uniform patch size are similar, with the exception that for any FE, fe_k, W_k = {w_k,0, w_k,1, ..., w_k,n-1} is of the form w_k,0 = w_k,1 = ... = w_k,n-2 = w_k,n-1, meaning that the widths of every patch are the same. Similarly, H_k = {h_k,0, h_k,1, ..., h_k,n-1} is of the form h_k,0 = h_k,1 = ... = h_k,n-2 = h_k,n-1, meaning that the heights of every patch are the same.
Steady State Genetic Algorithm

The SSGA used to evolve candidate FEs works as follows. First, a population of candidate FEs is randomly generated. Each candidate FE is then evaluated and assigned a fitness. After the initial population has been created, two parents are selected via binary tournament selection (Fogel, 2000; Abraham, Nedjah and Mourelle, 2006) and are used to create one offspring via uniform crossover and Gaussian mutation (Davis, 1991; Fogel, 2000; Kennedy and Eberhart, 2001; Abraham, Nedjah and Mourelle, 2006). The offspring is then evaluated, assigned a fitness, and replaces the worst-fit candidate FE in the population. The evolutionary process of selecting parents, creating an offspring, and replacing the worst-fit FE in the population is repeated a user-specified number of times. Figure 4 provides a pseudocode version of an SSGA.

compute SSGA {
  t = 0;
  initialize pop(t)
  evaluate pop(t)
  While (Not done) {
    Parent1 = Select_From_Pop(t)
    Parent2 = Select_From_Pop(t)
    Child = Procreate(Parent1, Parent2)
    Evaluate(Child)
    Replace(Worst(Pop(t)), Child)
    t = t + 1;
  }
}

Figure 4: Pseudo-code for the GEFE SSGA
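A minimal Python sketch of this steady-state loop, written under assumptions rather than as the authors' implementation: candidates are plain real-valued vectors, parents come from binary tournament selection, procreation is uniform crossover followed by Gaussian mutation, and the offspring unconditionally replaces the worst-fit member, as in Figure 4. The defaults (population of 20, sigma = 0.1, 1000 iterations) loosely mirror the settings reported in the Results section; the toy fitness is a placeholder.

import random

def binary_tournament(pop, fits):
    a, b = random.sample(range(len(pop)), 2)
    return pop[a] if fits[a] < fits[b] else pop[b]          # lower fitness is better here

def procreate(p1, p2, sigma=0.1):
    # Uniform crossover (each gene taken from either parent) followed by Gaussian mutation.
    return [random.choice(pair) + random.gauss(0.0, sigma) for pair in zip(p1, p2)]

def ssga(fitness, genes=10, pop_size=20, iterations=1000):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    fits = [fitness(ind) for ind in pop]
    for _ in range(iterations):
        child = procreate(binary_tournament(pop, fits), binary_tournament(pop, fits))
        worst = max(range(pop_size), key=fits.__getitem__)
        pop[worst], fits[worst] = child, fitness(child)     # offspring replaces the worst-fit FE
    best = min(range(pop_size), key=fits.__getitem__)
    return pop[best], fits[best]

# Toy usage with a placeholder fitness (minimize the sum of squares).
best, best_fit = ssga(lambda v: sum(x * x for x in v))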
Experiment

We performed our experiment on a subset of 105 subjects taken from the Facial Recognition Grand Challenge (FRGC) dataset (Phillips et al., 2005). Each subject in the FRGC dataset has three slightly different images associated with it, as seen in Figure 5. Our dataset of 105 subjects consisted of a probe set (one image per subject) and a gallery set (two images per subject).

The probe set contains one of the images of each subject, and the gallery set contains the other two images for each of the subjects. Since our dataset contained 105 subjects, a total of 105 images were in the probe set and 210 images were in the gallery set. The dimensions of our images were 100 by 127 pixels.

For this experiment, we compared the standard LBP method (SLBPM), GEFE with non-uniform sized patches (GEFEn), and GEFE with uniform sized patches (GEFEu).

Figure 5: Subject 27's Snapshots (Probe Image, Gallery Image, Gallery Image)
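As a hedged sketch of how recognition accuracy over such a probe/gallery split can be scored (not the authors' protocol; the paper does not state the histogram distance measure, so the city-block distance used below is an assumption):

def city_block(h1, h2):
    # Assumed distance between two concatenated LBP feature histograms.
    return sum(abs(a - b) for a, b in zip(h1, h2))

def rank1_accuracy(probes, gallery):
    # probes and gallery are lists of (subject_id, feature_histogram) pairs.
    # A probe is counted correct when its nearest gallery entry shares its subject id.
    correct = 0
    for pid, pfeat in probes:
        nearest_id = min(gallery, key=lambda g: city_block(pfeat, g[1]))[0]
        correct += int(nearest_id == pid)
    return correct / len(probes)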
Results

For our results, an SSGA was used to evolve a population of 20 candidate feature extractors. The SSGA used uniform crossover and Gaussian mutation (with σ = 0.1). The SSGA was run 30 times each for GEFEu and GEFEn. For each run, a total of 1000 function evaluations were allowed.

In Table I, the average performance of the three methods is shown. The SLBPM needed to be run only once since its patch characteristics are deterministic. GEFEn used an average of 36.90% of the patches, with an average accuracy of 99.84%, while GEFEu used an average of 35.82% of the patches, with an average accuracy of 100%. Both GEFEu and GEFEn outperformed the SLBPM in terms of accuracy while using fewer features. A t-test was used to confirm the observation that GEFEu had a statistically significantly better performance (in terms of accuracy) than GEFEn.

TABLE I: Experimental results for LBP (even distribution) and SSGA experiments

Methods | Patches Used | Avg. Accuracy | Best Accuracy
SLBPM   | 100.0%       | 99.04%        | 99.04%
GEFEn   | 38.65%       | 99.68%        | 100.0%
GEFEu   | 35.82%       | 100.0%        | 100.0%
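The paper does not state which t-test variant was used; as a hedged sketch, a two-sample test over the 30 per-run accuracies of GEFEu and GEFEn could be carried out as follows (the accuracy lists are placeholders, not the reported data):

from scipy import stats

gefe_u_acc = [1.00] * 30                                   # placeholder per-run accuracies for GEFEu
gefe_n_acc = [1.00 if i % 3 else 0.99 for i in range(30)]  # placeholder per-run accuracies for GEFEn

t_stat, p_value = stats.ttest_ind(gefe_u_acc, gefe_n_acc)
print(t_stat, p_value)   # a small p-value would indicate a statistically significant difference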
Research has been done that notes certain areas of a face are discriminating enough to effectively distinguish between different persons (Matas et al., 2002). Figure 6 shows the approximate positioning of patches for the best feature extractors created using GEFEn and GEFEu. For Figure 6b, the patches are meant to be the same size; to avoid confusion, one of the patches was drawn in green.

It is interesting to see that the majority of the patches are around the ocular region. The fact that both GEFEn and GEFEu choose this region to focus on suggests that this area holds textures that are unique enough to differentiate individuals from one another. This result is consistent with conclusions presented in other research (Woodard et al., 2010; Miller et al., 2010).

Figure 6: Best Individuals (Figure 6a: SSGA Non-Uniform; Figure 6b: SSGA Uniform)
Conclusion and Future Work

In this paper, two forms of GEFE were compared (along with the SLBPM). Both GEFEu and GEFEn had better performance than the SLBPM, and GEFEu had better performance than GEFEn. Our future work will be devoted toward the investigation and comparison of GEFE using a variety of other forms of genetic and evolutionary computing. A second endeavor will be to use the smaller feature sets evolved by GEFE in an effort to develop hierarchical biometric systems similar to the one proposed in Gentile's paper (Gentile, 2009).

Acknowledgments

This research was funded by the Office of the Director of National Intelligence (ODNI), Center for Academic Excellence (CAE) for the multi-university Center for Advanced Studies in Identity Sciences (CASIS), and by the National Science Foundation (NSF) Science & Technology Center: Bio/Computational Evolution in Action Consortium (BEACON). The authors would like to thank the ODNI and the NSF for their support of this research.

References

[1] Timo Ojala, Matti Pietikainen, Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns, IEEE Trans. Pattern Analysis and Machine Intelligence, pp. 971-987, 2002.

[2] Arun Ross, An Introduction to Multibiometrics, in Proc. of the 15th European Signal Processing Conference (EUSIPCO), Poznan, Poland, September 2007.

[3] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition. Springer, 2005.

[4] Damon L. Woodard, Shrinivas J. Pundlik, Jamie R. Lyle, Philip E. Miller, Periocular Region Appearance Cues for Biometric Identification, in CVPR Workshop on Biometrics, 2010.

[5] James E. Gentile, Nalini Ratha, Jonathan Connell, SLIC: Short-Length Iris Codes, in submission to Biometrics: Theory, Applications and Systems (BTAS '09), 2010.

[6] P. Miller, A. Rawls, S. Pundlik, and D. Woodard, Personal Identification Using Periocular Skin Texture, in SAC '10: Proceedings of the 2010 ACM Symposium on Applied Computing. New York, NY, USA: ACM, 2010.

[7] L. Davis, Handbook of Genetic Algorithms. New York: Van Nostrand Reinhold, 1991.

[8] D. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, 2000.

[9] J. Kennedy and R. Eberhart, Swarm Intelligence. Morgan Kaufmann, 2001.

[10] Timo Ahonen, Abdenour Hadid, Matti Pietikainen, "Face Description with Local Binary Patterns: Application to Face Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037-2041, Dec. 2006, doi:10.1109/TPAMI.2006.244.

[11] Jiming Liu, Y. Y. Tang, Y. C. Cao, An Evolutionary Autonomous Agents Approach to Image Feature Extraction, IEEE Trans. Evolutionary Computation, vol. 1, no. 2, pp. 141-158, 1997.

[12] Ajith Abraham, Nadia Nedjah, Luiza de Macedo Mourelle, Evolutionary Computation: from Genetic Algorithms to Genetic Programming, in "Studies in Computational Intelligence," Springer, 2006, pp. 1-20.

[13] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoff, J. Marques, J. Min, and W. Worek, "Overview of the Face Recognition Grand Challenge," in IEEE Conference on Computer Vision and Pattern Recognition, 2005.

[14] Joseph Shelton, Kelvin Bryant, Joshua Adams, Khary Popplewell, Tamirat Abegaz, Kamilah Purrington, Damon L. Woodard, Karl Ricanek, Gerry Dozier, Genetic Based LBP Feature Extraction and Selection for Facial Recognition, in ACM Southeast Conference (ACMSE), 2011.

[15] J. Matas, P. Bílek, M. Hamouz, and J. Kittler, Discriminative Regions for Human Face Detection, in The 5th Asian Conference on Computer Vision, 23-25 January 2002, Melbourne, Australia.

[16] J. Gentile, N. Ratha, and J. Connell, An Efficient, Two-Stage Iris Recognition System, in International Conference on Biometrics: Theory, Applications, and Systems (BTAS '09), pp. 1-5, 2009.