<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Genetic-Based Selection and Weighting for LBP, oLBP, and Eigenface Feature Extraction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tamirat Abegaz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gerry Dozier</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kelvin Bryant</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joshua Adams</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Brandon Baker</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joseph Shelton</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Karl Ricanek</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Damon L. Woodard</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>North Carolina A&amp;T State University</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, we investigate the use of genetic-based feature selection (GEFeS) and genetic-based feature weighting (GEFeW) on feature sets obtained by the Eigenface and LBP methods. Our results indicate that GEFeS and GEFeW enhance the overall performance of both the Eigenface and LBP-based techniques. Compared to the Eigenface hybrid, our results show that both the LBP and oLBP hybrids perform better in terms of accuracy. In addition, the results show that GEFeS reduces the number of features needed by approximately 50% while obtaining a significant improvement in accuracy.</p>
      </abstract>
      <kwd-group>
        <kwd>Local Binary Pattern (LBP)</kwd>
        <kwd>Eigenface</kwd>
        <kwd>Steady State Genetic Algorithm (SSGA)</kwd>
        <kwd>Overlapping Patches</kwd>
        <kwd>Feature Selection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Feature selection is a computational technique that
attempts to identify a subset of features that is most relevant
to a particular task (such as biometric identification) [1]. The
ideal feature selection technique removes those features that
are less discriminative and keeps those features that have high
discriminatory power. A number of feature selection
techniques have been developed; they can be classified as
Enumeration Algorithms (EAs), Sequential Search Algorithms
(SSAs), and Genetic Algorithms (GAs). EAs guarantee the
optimal subset of features by evaluating all possible subsets of
the features. This works well for very small feature sets;
however, it is computationally infeasible when the size of
the feature set is large [2].</p>
      <p>SSAs attempt to divide a feature set, U, into two subsets
of features, X and Y, where X denotes the selected features
and Y denotes the remaining ones. Based on user-specified
criteria, SSAs move the least significant features from
subset X into Y while moving the
most significant features from Y into X.
While SSAs are suitable for small and medium-sized problems,
they are too computationally expensive to use on large
problems [2].</p>
      <p>
        GAs attempt to find an optimal (or near optimal) subset of
features for a specific problem [
        <xref ref-type="bibr" rid="ref10 ref5 ref6 ref7 ref8 ref9">3, 4, 5, 6, 7, 8, 9, 10</xref>
        ]. First, a
number of individuals or candidate Feature Subsets (FSs) are
generated to form an initial population. Each FS is then
evaluated and assigned a fitness obtained from the evaluation
function specific to the problem at hand. Parents are then
selected based on fitness. New FSs are produced from the
selected parents by the processes of reproduction. Survivors
are selected from the previous generation and combined with
the offspring to form the next generation. This process
continues for a user-specified number of cycles.
      </p>
      <p>
        This work is an extension of the research performed by
Abegaz et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In that work, Abegaz et al. applied Genetic
and Evolutionary Feature Selection (GEFeS), GEFeS+ (a
co-evolutionary version of GEFeS), and Genetic and
Evolutionary Feature Weighting (GEFeW) to the Eigenface
algorithm. They reported that
Eigen-GEFeS, Eigen-GEFeS+, and Eigen-GEFeW enhanced the
overall performance of the Eigenface method while reducing
the number of features needed. Comparing Eigen-GEFeS,
Eigen-GEFeS+, and Eigen-GEFeW, they reported that
Eigen-GEFeW performed best in terms of accuracy even though it
used a significantly larger number of features than
either Eigen-GEFeS or Eigen-GEFeS+. In this paper, we
extend the work of Abegaz et al. by comparing GEFeS, GEFeS+,
and GEFeW hybrids using Eigenface, LBP, and overlapped
LBP (oLBP).
      </p>
      <p>
        Our work is partly motivated by the research of Gentile et
al. [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. Gentile et al. proposed a hierarchical two-stage
process to reduce the number of feature checks required by an
iris-based biometric recognition system. They created a
shorter representation of the iris template by pre-aligning the
probe to each gallery sample and generating a shortlist of match
candidates. Our target is a similar system for Face
Recognition (FR): short biometric templates
that are able to achieve higher recognition accuracies.
      </p>
      <p>The remainder of this paper is organized as follows. Section II
explains the feature extraction techniques used as input for the
GEFeS, GEFeS+, and GEFeW. Section III provides an
overview of GEFeS, GEFeS+, and GEFeW. Section IV
presents our experiment, and in Section V we present our
results. Finally, our conclusions and future work are presented
in Section VI.</p>
    </sec>
    <sec id="sec-1b">
      <title>II. FEATURE EXTRACTION USING EIGENFACE, LBP, AND OLBP</title>
      <p>
        In a typical biometric system, the task of sample acquisition
and feature extraction are always performed [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Sample
acquisition is the gathering of biometric traits such as
fingerprints, iris scan, periocular images, or facial images.
From the acquired sample, feature extraction is performed to
create a feature vector to be used for comparison. In the case
of a facial biometric sample, Eigenface and LBP are
commonly used feature extractors. For a typical feature
extractor, the pre-enrolled images (and their associated feature
vectors) are stored in a database commonly referred to as
gallery [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], while newly acquired images (and their feature
vectors) are called probes [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        For Eigenface based feature extraction [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], each image in
the training dataset is converted into a single vector. This
conversion is necessary because a square matrix
(the transformation, or covariance, matrix) is needed to compute the
eigenvectors (Eigenfaces) and the eigenvalues. The gallery
images are used to construct a face space spanned by
the Eigenfaces. Each image is then projected into this face
space, and 560 discriminatory feature
weights are extracted for each image and stored for the
feature selection experiments.
      </p>
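      <p>As an illustrative sketch of the projection described above (the array sizes here are reduced assumptions, not the paper's 560-image, 225-by-195 setup), the Eigenfaces can be obtained from the mean-centred gallery matrix, and each image is then reduced to one weight per Eigenface:</p>

```python
import numpy as np

# Minimal Eigenface sketch with illustrative sizes (the paper uses
# 560 gallery images of 225x195 pixels and keeps 560 weights per image).
rng = np.random.default_rng(0)
gallery = rng.random((60, 40 * 40))   # each row: one flattened gallery image

mean_face = gallery.mean(axis=0)
centered = gallery - mean_face

# Economy-size SVD yields the Eigenfaces (rows of Vt) without forming
# the full pixel-by-pixel covariance matrix.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt                        # face space spanned by the Eigenfaces

probe = rng.random(40 * 40)
weights = eigenfaces @ (probe - mean_face)   # one feature weight per Eigenface
print(weights.shape)                         # (60,)
```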
      <p>
        For LBP based feature extraction [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ], an image is first
divided into several patches (blocks), and local binary
patterns are extracted from every
non-border pixel to produce a histogram for each patch. The
histograms obtained from the patches are
concatenated to construct a global feature histogram that
represents both the micro-patterns and their spatial locations. In
other words, the histograms describe the image
at three different levels of locality. First, the
histogram labels contain information about the
patterns at the pixel level. Second, the labels are
summed over each patch to produce information at the
regional level. Finally, the histograms at the regional level are
concatenated to produce the global descriptor of the image.
      </p>
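      <p>The three levels of locality can be sketched as follows. This is a hypothetical illustration (image size, patch layout, and the plain 256-bin histogram are assumptions for brevity; the paper uses uniform-pattern binning): each non-border pixel gets an 8-bit label from its neighbours, labels are histogrammed per patch, and the patch histograms are concatenated.</p>

```python
import numpy as np

# Label each non-border pixel from its 8 neighbours (one bit per neighbour),
# then histogram the labels per patch and concatenate the patch histograms.
def lbp_labels(img):
    c = img[1:-1, 1:-1]                      # non-border (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    labels = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        labels += (nb >= c).astype(np.int32) * (2 ** bit)
    return labels

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(24, 24))
labels = lbp_labels(image)               # 22x22 label map

# Split the label map into a 2x2 grid of patches; concatenate histograms.
patches = [labels[y:y + 11, x:x + 11] for y in (0, 11) for x in (0, 11)]
global_hist = np.concatenate(
    [np.bincount(p.ravel(), minlength=256) for p in patches])
print(global_hist.shape)   # 4 patches x 256 bins = (1024,)
```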
      <p>
        The standard LBP uses those labels which have at most one
0-1 and one 1-0 transition when viewed as a circular bit
string. Such labels are known as uniform patterns [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. For
uniform-pattern LBP, every patch (block) consists of
P(P-1)+3 bins, where P(P-1) represents the bins for the patterns
with exactly two transitions [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. The remaining three bins represent
the patterns with 0 transitions (all zeros
(00000000) and all ones (11111111)) and all non-uniform
patterns (one bin for patterns with more than two transitions) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
The total number of histogram bins is computed using the formula
N(P(P-1)+3), where N represents the number of blocks
and P represents the number of sampling points. For our research, we
use P=8 and N=36 to obtain a feature vector of length 2124.
      </p>
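      <p>The bin bookkeeping above can be checked numerically; with P = 8 and N = 36, the formula gives exactly the 2124-entry feature vector used in this work:</p>

```python
# Uniform-pattern LBP bin count, following the formula in the text:
# each patch has P*(P-1) + 3 bins, so the global histogram has
# N * (P*(P-1) + 3) entries.
P = 8    # sampling points per pixel
N = 36   # number of patches (blocks)

bins_per_patch = P * (P - 1) + 3   # 56 two-transition bins + 3 extra bins
total_features = N * bins_per_patch
print(bins_per_patch, total_features)  # 59 2124
```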
      <p>
        oLBP based feature extraction [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] is a variant of LBP that
attempts to include the internal border pixels that are left out
during the logical partitioning step of the standard LBP
feature extraction method. This is done by logically
overlapping the patches horizontally, vertically, and both
horizontally and vertically, with a one-pixel overlap. This
makes it possible to determine whether including the
internal border pixels has an impact on the recognition rate of
LBP-based face recognition.
      </p>
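      <p>One way to realize the one-pixel overlap (a hypothetical helper, not the authors' implementation; sizes are illustrative) is to step each patch along an axis by its width minus the overlap, so adjacent patches share one row or column of pixels:</p>

```python
# Patch start positions along one axis with a fixed overlap between
# neighbouring patches, as used conceptually by oLBP.
def patch_origins(length, patch, overlap=1):
    """Start indices of patches of size `patch` along an axis of `length`."""
    step = patch - overlap
    return list(range(0, length - patch + 1, step))

rows = patch_origins(21, 11)   # e.g. a 21-pixel axis with 11-pixel patches
print(rows)                    # [0, 10] -> row 10 is shared by both patches
```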
    </sec>
    <sec id="sec-2">
      <title>III. GEFES, GEFES+, AND GEFEW</title>
      <p>
        GEFeS, GEFeS+, and GEFeW were designed for selecting
and/or weighting the most discriminatory features for
recognition. GEFeS, GEFeS+, and GEFeW are instances of a
Steady-State GA (SSGA) within the eXploratory Toolset for the
Optimization Of Launch and Space Systems (X-TOOLSS)
[
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. In order to describe GEFeS, consider the following
feature vector.
      </p>
      <p>Furthermore, consider the vector shown in Figure 2 as a
candidate real-coded feature mask.</p>
      <p>For GEFeS, a masking threshold value of 0.5 is used to
create a binary-coded candidate feature mask, which is then
used as the condition for masking features. If the random real
number generated is less than the threshold (0.5 in this case),
then the corresponding value is
set to 0 in the candidate feature mask vector, or to 1 otherwise.
The candidate feature mask is used to mask out a feature set
extracted for a given biometric modality. Figure 3 shows the
binary-coded candidate feature mask obtained from the
random real numbers generated in Figure 2; the masking
threshold is applied to the real numbers to obtain the
binary representation.</p>
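      <p>The thresholding and masking steps can be sketched as follows (the feature and mask values are illustrative, not taken from the figures):</p>

```python
import numpy as np

# GEFeS masking sketch: a real-coded candidate mask is thresholded at 0.5
# to produce a binary mask; features whose mask bit is 0 are excluded from
# the distance computation (here simply zeroed out).
features  = np.array([12.0, 7.5, 3.1, 9.9, 4.2])
real_mask = np.array([0.81, 0.42, 0.63, 0.07, 0.55])

binary_mask = (real_mask >= 0.5).astype(int)   # value below 0.5 -> bit 0
masked = features * binary_mask                # masked-out features become 0
print(binary_mask.tolist())   # [1, 0, 1, 0, 1]
print(masked.tolist())        # [12.0, 0.0, 3.1, 0.0, 4.2]
```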
      <p>When comparing the candidate feature mask with the
feature matrix, if the position in the candidate feature mask
corresponding to a feature
value is 0, then that
feature value is masked out of the
distance computation. Figure 4 shows the result of applying
the feature mask of Figure 3 to the
feature vector of Figure 1.
GEFeS+ is a co-evolutionary version of GEFeS in which,
instead of using the static threshold value of 0.5, we evolve a
threshold value between 0 and 1. Each random number
generated using a uniform distribution thus has its own masking
threshold value that determines whether the
corresponding feature is masked out or not.</p>
      <p>For GEFeW, the real-coded candidate feature mask is used
to weight features within the feature matrix. The real-coded
candidate feature mask value is multiplied by each feature
value to produce a weighted feature. If the weight generated
is 0 (or approximately 0), the weighted feature value is 0, which
effectively means that the feature is masked.</p>
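      <p>The weighting step can be sketched in the same way as the masking step (values again illustrative): the real-coded mask multiplies each feature directly instead of being thresholded, and a weight of 0 masks the feature entirely.</p>

```python
import numpy as np

# GEFeW sketch: the real-coded candidate mask weights features directly;
# a weight of (approximately) 0 masks the corresponding feature.
features = np.array([12.0, 7.5, 3.1, 9.9, 4.2])
weights  = np.array([0.81, 0.42, 0.63, 0.0, 0.55])

weighted = features * weights   # weight 0.0 masks the fourth feature
print(np.round(weighted, 2).tolist())   # [9.72, 3.15, 1.95, 0.0, 2.31]
```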
      <p>As given in Equation 1, the fitness returned by the
evaluation function is the number of recognition errors
encountered after applying the feature mask, multiplied by 10,
plus the percentage of features used:
fitness = 10 × (number of recognition errors) + (percentage of features used). (1)
Parents with smaller fitness values are selected because the
optimization goal is to reduce the number of recognition
errors (i.e., increase accuracy) while reducing the number
of features.</p>
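      <p>Equation 1 can be written directly as a small function (the example counts are illustrative; the factor of 10 makes recognition errors dominate the feature-count term):</p>

```python
# Equation 1 fitness: 10 x (recognition errors) + percentage of features
# used. Lower fitness is better.
def fitness(errors, features_used, total_features):
    pct_used = 100.0 * features_used / total_features
    return 10 * errors + pct_used

print(fitness(3, 1062, 2124))   # 10*3 + 50.0 = 80.0
print(fitness(0, 1062, 2124))   # errors dominate: a perfect mask scores 50.0
```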
    </sec>
    <sec id="sec-3">
      <title>IV. EXPERIMENTS</title>
      <p>
        The dataset used in this research is a subset of the Face
Recognition Grand Challenge (FRGC) dataset [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. In our
dataset, 280 subjects were used, each subject having a
total of 3 associated images. Of the 840 images, 280
were used as probes and 560 were selected as training
images. The images had passed through pre-processing stages such
as eye-rotation alignment, histogram equalization, masking,
resizing (each to 225 by 195 pixels), and conversion
into greyscale.
      </p>
      <p>For the GEFeS, GEFeS+, and GEFeW, the inputs used
were the features extracted using Eigenface, LBP, and oLBP
feature extraction methods. These methods were used on a
subset of the FRGC dataset. This subset was selected because
it contains a variety of imaging conditions such as different
ethnic origins, frontal images that were neutral, and frontal
images that had facial expressions.</p>
      <p>The objective of this experiment is to compare the impact
of applying GEFeS, GEFeS+, and GEFeW to the Eigenface, LBP,
and oLBP feature extraction methods.</p>
    </sec>
    <sec id="sec-4">
      <title>V. RESULTS</title>
      <p>For our experiment, nine GEFeS, GEFeS+, and GEFeW
instances were used. These instances all have a population
size of 20, a Gaussian mutation rate of 1, and a mutation range of
0.2. A mutation rate of 1 means that all children
(100%) undergo mutation. The mutation range provides
a window around the current value (the value obtained after
recombination) within which the new value will be mutated.
Furthermore, each instance was run a total of 30 times with a
maximum of 1000 function evaluations. GEFeS, GEFeS+, and
GEFeW were designed for selecting and/or weighting the
most discriminatory features for recognition. Our results are
shown in Table I.</p>
      <p>In Table I, the columns represent the method used, the
average percentage of features used, the average accuracy, and
the best accuracy obtained. The average
accuracy is computed from the results of the 30
runs. The best accuracy is taken from the run that resulted
in the smallest number of errors.</p>
      <p>ANOVA and t-tests were used to divide the GEFeS,
GEFeS+, and GEFeW instances and the baseline algorithms into
equivalence classes. As shown in Table I, among the
baseline algorithms, the Eigenface method performs best. The
results show that when using 100 percent of the features, the
maximum accuracy obtained for the baseline LBP was
70.36%. While BaselineLBPBest performs slightly better
than BaselineLBP, it still uses the entire feature set.
As can be seen in Table I, applying GEFeS to the feature set
extracted by the standard LBP significantly improves
accuracy, from 70.36% to 96.62%. This result shows that
GEFeS is indeed masking out those features which are less
relevant for recognition. This improvement in accuracy also
comes with a reduction in the number of features used for
recognition.</p>
      <p>Compared to GEFeS and GEFeS+, all of the results show
that GEFeW used a larger number of features. Using a larger
number of features brings a better result in the case of
Eigen-GEFeW as compared to Eigen-GEFeS and Eigen-GEFeS+.
Surprisingly, in the cases of LBP-GEFeW and oLBP-GEFeW
the result is the opposite: utilizing a significantly larger
number of features actually decreases the accuracy of both
LBP-GEFeW and oLBP-GEFeW as compared to their
GEFeS and GEFeS+ counterparts.</p>
      <p>LBP-GEFeS, LBP-GEFeS+, oLBP-GEFeS, and
oLBP-GEFeS+ fall in the best equivalence class with respect to
accuracy, meaning that there is no statistical difference
among them. All performed well in terms of reducing the
number of features needed and in producing a significant
improvement in accuracy over their corresponding baseline
methods.</p>
      <p>Figure 1 shows the Cumulative Match Characteristic (CMC)
curves for BaselineLBP, BaselineoLBPbest, BaselineEigenface, and
the methods that fall in the first equivalence class. As can be
seen from the figure, LBP-GEFeS, LBP-GEFeS+,
oLBP-GEFeS, and oLBP-GEFeS+ obtain approximately 97.5%
accuracy at rank 10, while both BaselineEigenface and
Eigen-GEFeS performed well (approximately 96%) at rank 10.
BaselineLBP performed relatively poorly in terms of accuracy.</p>
    </sec>
    <sec id="sec-5">
      <title>VI. CONCLUSION AND FUTURE WORK</title>
      <p>Our results using GEFeS, GEFeS+, and GEFeW suggest
that hybrid GAs for feature selection/weighting enhance the
overall performance of the Eigenface, LBP, and oLBP
methods while reducing the number of features needed. When
comparing baseline accuracy, the Eigenface method
performed far better than both LBP and oLBP. However, the
hybrid GA results show that both the LBP and oLBP hybrids
performed much better than the Eigenface hybrid.</p>
      <p>
        Our future work will be devoted to investigating
GEFeS, GEFeS+, and GEFeW based on other forms of
Genetic and Evolutionary Computation [
        <xref ref-type="bibr" rid="ref21 ref22 ref23 ref24">21, 22, 23, 24</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>ACKNOWLEDGMENT</title>
      <p>This research was funded by the Office of the Director of
National Intelligence (ODNI), Center for Academic
Excellence (CAE) for the multi-university Center for
Advanced Studies in Identity Sciences (CASIS) and by the
National Science Foundation (NSF) Science &amp; Technology
Center: Bio/computational Evolution in Action CONsortium
(BEACON). The authors would like to thank the ODNI and
the NSF for their support of this research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>Ajay</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <source>Encyclopedia of Biometrics</source>
          ,
          <year>2009</year>
          , Part
          <volume>6</volume>
          , pp.
          <fpage>597</fpage>
          -
          <lpage>602</lpage>
          , DOI: 10.1007/978-0-387-73003-5_157.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Dash</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          , “
          <article-title>Feature Selection for Classification”</article-title>
          ,
          <source>Intelligent Data Analysis</source>
          , vol.
          <volume>1</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>131</fpage>
          -
          <lpage>156</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Glenn</surname>
          </string-name>
          .
          <article-title>Genetic-based type II feature extraction for periocular biometric recognition: Less is more</article-title>
          .
          <source>In Proc. Int. Conf. on Pattern Recognition</source>
          ,
          <year>2010</year>
          . to appear.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>C.-L.</given-names>
            <surname>Huang</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.-J.</given-names>
            <surname>Wang</surname>
          </string-name>
          , “
          <article-title>GA-based feature selection and parameters optimization for support vector machines”</article-title>
          ,
          <source>Expert Systems with Applications</source>
          , Vol.
          <volume>31</volume>
          (
          <issue>2</issue>
          ),
          <year>2006</year>
          , pp.
          <fpage>231</fpage>
          -
          <lpage>240</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Adams</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woodard</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dozier</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Glenn</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bryant</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          "GEFE:
          <article-title>Genetic &amp; Evolutionary Feature Extraction for PeriocularBased Biometric Recognition,"</article-title>
          <source>Proceedings 2010 ACM Southeast Conference, April 15-17</source>
          ,
          <year>2010</year>
          , Oxford, MS.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Dozier</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adams</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woodard</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bryant</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>"A Comparison of Two Genetic and Evolutionary Feature Selection Strategies for Periocular-Based Biometric Recognition via XTOOLSS,"</article-title>
          ,
          <source>Proceedings of the 2010 International Conference on Genetic and Evolutionary Methods (GEM'10: July 12-15</source>
          ,
          <year>2010</year>
          ,
          <string-name>
            <given-names>Las</given-names>
            <surname>Vegas</surname>
          </string-name>
          , USA).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Simpson</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dozier</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adams</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woodard</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Glenn</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bryant</surname>
            ,
            <given-names>K..</given-names>
          </string-name>
          <article-title>"GEC-Based Type-II Feature Extraction for Periocular Recognition via X-TOOLSS,"</article-title>
          <source>Proceedings 2010 Congress on Evolutionary Computation, July</source>
          <volume>18</volume>
          -23, Barcelona, Spain.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Dozier</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bell</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barnes</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Bryant</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>"Refining Iris Templates via Weighted Bit Consistency"</article-title>
          ,
          <source>Proceedings of the 2009 Midwest Artificial Intelligence &amp; Cognitive Science (MAICS) Conference, Fort Wayne, April 18-19</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Dozier</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adams</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woodard</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bryant</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>"A Comparison of Two Genetic and Evolutionary Feature Selection Strategies for Periocular-Based Biometric Recognition via XTOOLSS", (to appear in)</article-title>
          <source>The Proceedings of the 2010 International Conference on Genetic and Evolutionary Methods (GEM'10: July 12- 15</source>
          ,
          <year>2010</year>
          ,
          <string-name>
            <given-names>Las</given-names>
            <surname>Vegas</surname>
          </string-name>
          , USA).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>Tamirat</given-names>
            <surname>Abegaz</surname>
          </string-name>
          , Gerry Dozier, Kelvin Bryant, Joshua Adams, Khary Popplewell, Joseph Shelton,
          <string-name>
            <given-names>Karl</given-names>
            <surname>Ricanek</surname>
          </string-name>
          , and Damon L. Woodard, “
          <article-title>Hybrid GAs for Eigen-Based Facial Recognition”</article-title>
          ,
          <source>accepted for the IEEE Symposium Series on Computational Intelligence</source>
          (SSCI
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>J.E</given-names>
            <surname>Gentile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ratha</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Connell</surname>
          </string-name>
          ,
          <article-title>"SLIC: Short-length iris codes,"</article-title>
          <source>In Proc. IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems</source>
          ,
          <year>2009</year>
          . BTAS '
          <volume>09</volume>
          ,
          <fpage>28</fpage>
          -
          <lpage>30</lpage>
          Sept.
          <year>2009</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>J.E.</given-names>
            <surname>Gentile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ratha</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Connell</surname>
          </string-name>
          , “
          <article-title>An efficient, two-stage iris recognition system”</article-title>
          ,
          <source>In Proc. 3rd International Conference on Biometrics: Theory, Applications, and Systems (BTAS)</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>Peter T.</given-names>
            <surname>Higgins</surname>
          </string-name>
          ,
          <article-title>"Introduction to Biometrics"</article-title>
          ,
          <source>Proceedings of the Biometrics Consortium Conference</source>
          , Baltimore, MD, USA, Sept.
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Turk</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Pentland</surname>
          </string-name>
          ,
          <article-title>"Eigenfaces for recognition"</article-title>
          ,
          <source>Journal of Cognitive Neuroscience</source>
          , Vol.
          <volume>3</volume>
          , No.
          <issue>1</issue>
          , pp.
          <fpage>71</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Caifeng</given-names>
            <surname>Shan</surname>
          </string-name>
          and
          <string-name>
            <given-names>Tommaso</given-names>
            <surname>Gritti</surname>
          </string-name>
          ,
          <article-title>"Learning Discriminative LBP-Histogram Bins for Facial Expression Recognition"</article-title>
          ,
          <source>Proc. of 15th EUSIPCO</source>
          , Poznan, Poland,
          <year>September 2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Timo</given-names>
            <surname>Ahonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Abdenour</given-names>
            <surname>Hadid</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Matti</given-names>
            <surname>Pietikäinen</surname>
          </string-name>
          ,
          <article-title>"Learning Face Expression Recognition"</article-title>
          , http://www.ee.oulu.fi/mvg/, visited on Sept. 10,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ren</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Kee</surname>
          </string-name>
          , “
          <article-title>LBP discriminant analysis for face verification</article-title>
          ”,
          <source>in Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)</source>
          , vol
          <volume>3</volume>
          , pp.
          <fpage>167</fpage>
          -
          <lpage>172</lpage>
          ,
          <year>June 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Tamirat</given-names>
            <surname>Abegaz</surname>
          </string-name>
          , “
          <article-title>GENETIC AND EVOLUTIONARY FEATURE SELECTION AND WEIGHTING FOR FACE RECOGNITION</article-title>
          ”, thesis submitted to North Carolina A&amp;T State University.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Tinker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Dozier</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Garrett</surname>
          </string-name>
          , “
          <article-title>The exploratory toolset for the optimization of launch and space systems (x-toolss</article-title>
          ),” http://xtoolss.msfc.nasa.gov/,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>P. Jonathon</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Patrick J.</given-names>
            <surname>Flynn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Todd</given-names>
            <surname>Scruggs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Kevin W.</given-names>
            <surname>Bowyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jin</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Kevin</given-names>
            <surname>Hoffman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joe</given-names>
            <surname>Marques</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jaesik</given-names>
            <surname>Min</surname>
          </string-name>
          , and
          <string-name>
            <given-names>William</given-names>
            <surname>Worek</surname>
          </string-name>
          , “
          <article-title>Overview of the Face Recognition Grand Challenge</article-title>
          ”,
          <source>IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Daniel</given-names>
            <surname>Ashlock</surname>
          </string-name>
          . “
          <article-title>Evolutionary Computation for Modeling and Optimization</article-title>
          .”, Springer,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Dozier</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Homaifar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tunstel</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Battle</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>“An Introduction to Evolutionary Computation” (Chapter 17), Intelligent Control Systems Using Soft Computing Methodologies</article-title>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zilouchian</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>M.</given-names>
            <surname>Jamshidi</surname>
          </string-name>
          (Eds.), pp.
          <fpage>365</fpage>
          -
          <lpage>380</lpage>
          , CRC press.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>D.</given-names>
            <surname>Guillamet</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>J.</given-names>
            <surname>Vitrià</surname>
          </string-name>
          ,
          <article-title>“Evaluation of distance metrics for recognition based on non-negative matrix factorization”</article-title>
          ,
          <source>Pattern Recognition Letters</source>
          ,
          <volume>24</volume>
          (
          <issue>9-10</issue>
          ),
          <year>2003</year>
          ,
          <fpage>1599</fpage>
          -
          <lpage>1605</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Fogel</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>Evolutionary Computation: Toward a New Philosophy of Machine Intelligence</article-title>
          .
          <source>IEEE Press, 2nd Edition</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>