<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Automated Off-Line Writer Verification Using Short Sentences and Grid Features</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Konstantinos Tselios</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elias N. Zois</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Member IEEE</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Athanasios Nassiopoulos</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sotirios Karabetsos</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Member IEEE</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>George Economou</string-name>
          <email>economou@upatras.gr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Member IEEE</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Electronics Engineering Department, Technological and Educational Institute of Athens</institution>
          ,
          <addr-line>Agiou Spiridonos Str., 12210, Aegaleo, Greece</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2011</year>
      </pub-date>
      <fpage>21</fpage>
      <lpage>25</lpage>
      <abstract>
        <p>This work presents a feature extraction method for writer verification based on handwriting. The motivation comes from the need to enhance modern-era security applications, mainly those focused on real or near-real-time processing, by implementing methods similar to those used in signature verification. In this context, we have employed a full sentence written in two languages with stable and predefined content. The novelty of this paper lies in the feature extraction algorithm, which models the connected-pixel distribution along predetermined curvature and line paths of a handwritten image. The efficiency of the proposed method is evaluated with a combination of a first-stage similarity score and a continuous SVM output distribution. The experimental benchmarking of the new method against other state-of-the-art techniques found in the literature relies on ROC curves and Equal Error Rate estimation. The produced results provide a first-hand proof of concept that the proposed feature extraction method has a powerful discriminative nature.</p>
      </abstract>
      <kwd-group>
        <kwd>Writer Verification</kwd>
        <kwd>Handwritten Sentences</kwd>
        <kwd>Grid Features</kwd>
        <kwd>ROC</kwd>
        <kwd>EER</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        Biometric recognition is an appealing method for keeping
numerous situations, including defense and economic
transactions, secure. Thus, access to important
resources is granted while reducing potential vulnerability.
Among other biometric features, online and offline
handwriting, which is a subset of behavioral biometrics, has
frequently been used to recognize
writers in security or forensic applications [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In
recent years, writer identification and verification tasks have
received considerable attention in the scientific
community. A special case of writer verification uses context-based
handwriting, in which the answer to the question "is this
person who he claims to be?" is provided by examining a
predetermined text of known transcription. As stated by
      </p>
      <p>
        Siddiqi and Vincent [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], this kind of writer verification
problem is similar to signature verification.
      </p>
      <p>
        Although content-dependent approaches using well-defined
semantics were used in the early years of writer
recognition, there are at least three important reasons that
justify the continued study of handwriting patterns other than
signatures. Firstly, biometric verification schemes based on
handwritten words or small sentences can potentially be used
in real-world security applications, which are quickly emerging
in a modern and continuously evolving mobile- and Internet-based
environment. Secondly, content-based retrieval systems
could also benefit, since their users could query handwriting
images with similar handwriting styles from various corpuses
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Finally, an important reason emerges from the field of
continuous verification [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. By this we mean that handwritten patterns could be
used to grant access to resources not
only at a person’s initial entrance, but also within a cyclic and
continuous verification loop throughout the entire use of the
application. In order to explore writer verification tasks, we
can test a number of algorithms on a number of well-established
databases in the literature, such as IAM [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], Firemaker
[<xref ref-type="bibr" rid="ref7">7</xref>], CEDAR [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and Brazilian Forensic letter database [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
These databases carry rich handwriting information, since their
samples are large, e.g. 156 words and/or whole paragraphs.
The use of these databases might prove awkward
if issues like those described in the continuous
verification schemes need to be raised. This can easily be seen
with the following example: imagine that a person
has to verify himself/herself by writing an entire letter in a relatively
small amount of time. In order to cope with this situation, an
alternative would be either to use a portion of the
aforementioned databases or to employ a single short sentence,
like the one provided by a database such as HIFCD1 [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        In this work, we present a novel feature extraction
method for writer verification based on the structured
exploitation of the statistical pixel directionality of
handwriting. This is achieved by counting, in a probabilistic
way, the occurrence of specific pixel transitions along
predefined paths within two pre-confined chessboard
distances. Then, the handwritten elements, described by their
strokes, angles and arcs, are modelled by fusing, at the feature
level, two- and three-step transitional probabilities. This is an
extension of the work proposed in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] for signature
verification.
      </p>
      <p>A two-stage classification scheme based on similarity
measures and an SVM has been applied to the HIFCD1
corpus. The verification efficiency is evaluated by measuring
the Equal Error Rate on the ROC curves, which is the point
where the probability of misclassifying genuine samples is
equal to the probability of misclassifying forgery samples. The
EER is evaluated as a function of the word population. This is
achieved by plotting the ROC curves each time we append a
word for verification.</p>
      <p>
        Finally, in order to benchmark our proposed method,
comparisons are provided against recently described, state-of-the-art
methodologies for off-line signature verification
preprocessing and feature extraction, as well as writer
verification and feature extraction approaches. Within this
context, we provide a feasibility study of the
discriminative power of our method. This "feature
benchmarking" concept is justified by the fact that an
ideal feature extraction method would make the classifier's job
trivial, whereas an ideal classifier would not need a feature
extractor [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Thus, by keeping the classifier stage fixed,
feature benchmarking could be rated in a comparative way.
      </p>
      <p>The rest of this work is organized as follows: Section 2
provides the database details and the description of the feature
extraction algorithm. Section 3 presents the experimental
verification protocol which has been applied. Section 4
presents the comparative evaluation results, while Section 5
draws the conclusions.</p>
    </sec>
    <sec id="sec-1b">
      <title>II. DATABASE AND FEATURE EXTRACTION PROCEDURE</title>
      <sec id="sec-1-1">
        <title>A. Database Description and Pre-Processing</title>
        <p>
          In order to validate the proposed method
and evaluate our approach, we have employed the HIFCD1
handwritten corpus, which has been previously used in the
literature [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. This corpus has been under re-enrollment and
enrichment since its initial appearance in 2000. The developed
database consists of two different short sentences, one written
in Greek and the other in English. In addition to the first
twenty persons who were enrolled in the past, another
twenty persons have been enrolled since, creating a
temporary set of forty persons in total. This database is under
restructuring in order to increase its size and the diversity (e.g.
to include iris, fingerprints, gait, signatures, face, large-scale
handwritten text etc.) of its biometric samples, equivalent to those
provided by modern databases like IAM [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] and BioSecure
[
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Each sentence was written by each writer 120 times.
Consequently, 9600 sentences were recorded in our database,
containing a total of 48000 words. Both linguistic forms of the
sentences are presented in Fig. 1. The Greek language, being
our native language, was used in order to maintain constant
handwriting characteristics. The Greek sentence is made up of
two short words of three letters, two medium-length words of
seven letters and a long word of eleven letters. Each word
was written in its own cell, thus making segmentation
trivial. For every word image of the corpus,
preprocessing steps are applied in order to provide an enhanced
image version with a maximized amount of utilized information.
The pre-processing stage includes thresholding of the original
handwritten image using Otsu’s method [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] and thinning, in
order to provide a one-pixel-wide handwritten trace, which is
considered insensitive to pen parameter changes such as
size, colour and style. Finally, the bounding rectangle of the
image is produced. It must be pointed out that we treat the
handwritten image as a whole and do not perform any
character segmentation. Next, an alignment is carried out for
every bounded image.
        </p>
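The thresholding-and-cropping part of this pre-processing stage can be sketched as follows. The paper does not specify an implementation, so the pure-NumPy Otsu routine and the `binarize_and_crop` helper below are illustrative assumptions; the subsequent thinning step would follow separately, e.g. via a library skeletonization routine.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the grey level that maximizes the
    between-class variance of the image histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability mass
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # undefined at the extremes
    return int(np.argmax(sigma_b))

def binarize_and_crop(gray):
    """Threshold (dark ink = foreground, levels <= threshold) and
    crop to the bounding rectangle of the ink."""
    ink = gray <= otsu_threshold(gray)
    rows, cols = np.nonzero(ink)
    return ink[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
```

On a synthetic dark stroke over a light background, the returned array is the binary ink mask cropped tightly to its bounding rectangle.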
        <p>
          This stage gathers the useful intrapersonal information from
all the samples of a writer inside a region that is considered
to contain the most useful handwriting
information [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. In this work, we have used the estimated
coordinates of the centre of mass, x and y, of each image.
Fig. 2 presents the above discussion graphically. In this
work, the ‘most informative window’ (MIW) of the
handwritten pattern denotes the processed
handwritten word sub-region, inside the bounded image,
centred at the x and y parameters, while its length and width are
determined empirically by trial and error.
        </p>
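The MIW selection can be illustrated with a short sketch; the window dimensions are free parameters (the paper fixes them empirically by trial and error), and the function name below is ours, not the paper's.

```python
import numpy as np

def most_informative_window(trace, height, width):
    """Crop a (height x width) sub-window of a binary trace,
    centred at the ink centre of mass and clamped to the image
    borders; height and width are empirically tuned parameters."""
    rows, cols = np.nonzero(trace)
    cy, cx = int(rows.mean()), int(cols.mean())
    top = max(0, min(cy - height // 2, trace.shape[0] - height))
    left = max(0, min(cx - width // 2, trace.shape[1] - width))
    return trace[top:top + height, left:left + width]
```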
      </sec>
      <sec id="sec-1-2">
        <title>B. Feature Extraction</title>
        <p>The feature extraction method maps the handwriting
information, represented by the sequence of MIW words, to a
feature vector which models handwriting by estimating the
distribution of local features such as orientation and curvature.
The idea behind this originates from the simplest form of
chain code. Analytically, the chain code describes a set of eight
two-pixel sequences and codes the succession of different
orientations on the image grid. When sequences of three
successive pixels are examined, line, convex-curvature and
concave-curvature features are generated. Since we do not utilize the
features’ order of appearance, there are twenty-two (22)
corresponding features which can be defined uniquely,
beginning from a central pixel
to another one inside a chess-board distance equal to 2.
The enforcement of the symmetry condition
limits the number of independent convex and concave features
to 11. This subset is enriched with four line features
describing the fundamental line segments of slope 0, 45, 90 and
135 degrees. This 15-dimensional feature space defines the new
embedding space. Furthermore, we have partitioned the MIW
image into a 2 × 2 sub-window grid, and the respective outputs
have been fused at the feature level by simple appending.</p>
        <p>Following the above idea, we explore an additional feature
set by measuring the pixel paths which obey the
following rule: find the four-pixel connected paths
whose chess-board distance between the first and
the fourth pixel equals three while, simultaneously, the
chess-board distance between the first and the
third pixel equals two, ignoring the prior path selection
that has taken place in the inner two-step transition. This
provides a feature with dimensionality of 28, since we do not
partition the image. The final feature vector is generated by
appending, in a feature-fusion way, the aforementioned two-
and three-step features. Its dimensionality equals 88 (four
sub-images × 15 features + one image × 28 features) and it is
depicted graphically in Fig. 3. Algorithmically, a rectangular
grid of 4 × 7 dimension scans every input of the MIW word
sequence. This mask aligns each aforementioned pixel with
the {5, 3} coordinate, thus enabling 15 potential 2-step paths
and 28 3-step paths from the central pixel according to the
previous discussion. Then, the paths which are included in the
feature set are marked and a counter updates the
corresponding features found. Finally, the feature components
are normalized by their total sum in order to provide a
probabilistic expression.</p>
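The count-and-normalize scheme above can be sketched in a strongly reduced form. The sketch below counts only the four fundamental line directions (0, 45, 90, 135 degrees) around every ink pixel and normalizes the counts into a probability-like vector; the paper's full 15-path and 28-path feature sets are not reproduced here, so this is an illustration of the scheme, not the actual feature definition.

```python
import numpy as np

# Four fundamental line directions as (row, col) offsets:
# 0, 45, 90 and 135 degrees (image rows grow downwards).
LINE_PATHS = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]

def line_feature_vector(trace):
    """For every ink pixel of a binary trace, count which of the
    four fundamental directions continues the trace, then divide
    by the total count to obtain a probabilistic feature vector."""
    counts = np.zeros(len(LINE_PATHS))
    H, W = trace.shape
    for r, c in zip(*np.nonzero(trace)):
        for i, (dr, dc) in enumerate(LINE_PATHS):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W and trace[rr, cc]:
                counts[i] += 1
    total = counts.sum()
    return counts / total if total else counts
```

A purely horizontal stroke, for instance, concentrates all of its probability mass on the 0-degree component.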
        <p>
          As described in section II, the inputs to the classification
system are the training and testing feature vectors, denoted
hereafter as {v_TW, v_TSW}. The training set v_TW is composed of the
genuine and forgery vectors {G_TW, F_TW}
of each writer
W_i, i = 1, 2, ..., 40. The G_TW vectors model the genuine
class population by means of their average value μ_vGTW and
standard deviation σ̂_vGTW. Next, the similarity scores of the
genuine training vectors are evaluated by using the weighted
distance of eq. (1) [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] and their pdf S(v_GTW | W_i) is
stored.
        </p>
        <p>A similar procedure, described by eq. (2), has been
applied in order to derive the distribution S(v_FTW | W_i) of the
similarity scores for the false training
samples {F_TW}.</p>
        <p>S (vGTW |Wi ) = ⎜⎛ ∑88 σ) ( j)v−G2TW (G( j)TW − µ ( j)vGTW ) ⎟
2 ⎞
⎝ j=1 ⎠
−0.5
−0.5
S (vFTW |Wi ) = ⎜⎛ ∑88 σ) ( j)v−F2TW ( F ( j)TW − µ ( j)vFTW ) ⎟ (2)
2 ⎞
⎝ j=1 ⎠</p>
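In code, the similarity score of eqs. (1)-(2) is the inverse square root of a variance-weighted squared distance between a feature vector and the class statistics. A minimal NumPy sketch, assuming non-zero per-component standard deviations:

```python
import numpy as np

def similarity_score(v, mu, sigma):
    """Eqs. (1)-(2): inverse square root of the variance-weighted
    squared distance between feature vector v and the class
    statistics (mu, sigma) estimated from training samples.
    Larger scores indicate vectors closer to the class mean."""
    d2 = np.sum((v - mu) ** 2 / sigma ** 2)
    return d2 ** -0.5
```

Note that the score diverges as v approaches mu exactly, so identical vectors should be handled separately in practice.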
        <p>Following the first stage, a two-class support vector
machine is employed in order to map the
training similarity scores to another distance space, induced by
the SVM. Accordingly, the inputs to the second stage are the
genuine and impostor distribution
scores S(v_GTW | W_i) and
S(v_FTW | W_i). The output of the SVM is a continuous-valued
distance of the unknown test input sample vector from the
optimal separating hyper-plane [24]. The mapping function
has been represented by a Gaussian radial basis function
kernel, selected after a number of trials.</p>
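The second stage can be sketched with scikit-learn's RBF-kernel SVM. The similarity-score values below are hypothetical placeholders, and the label choice (-1 for genuine) is our assumption made so that the continuous output matches the paper's sign convention of negative values falling on the genuine side.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative first-stage similarity scores (hypothetical values):
# genuine samples score high, impostors score low.
genuine_scores = np.array([[0.90], [0.80], [0.85], [0.95]])
impostor_scores = np.array([[0.10], [0.20], [0.15], [0.05]])

X = np.vstack([genuine_scores, impostor_scores])
y = np.array([-1] * 4 + [1] * 4)   # -1 = genuine, +1 = impostor

# Gaussian radial basis function kernel, as selected in the paper.
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

# The continuous output is the signed distance from the separating
# hyper-plane: negative values fall on the genuine side.
genuine_side = svm.decision_function([[0.88]])[0]
impostor_side = svm.decision_function([[0.12]])[0]
```

Because scikit-learn's `decision_function` is positive for the larger class label, labelling genuine as -1 makes genuine-like inputs produce negative outputs, mirroring the convention used in the text.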
        <p>
          The testing phase uses the remaining samples of the
genuine and forgery sets {v_TSW} = {G_TSW, F_TSW}. Thus, for each
writer, the similarity scores evaluated from the samples of the
testing set are presented as input to the second-stage SVM
mapping function. A negative value of the SVM output
indicates that the unknown feature vector lies below the optimal
separating hyper-plane, near the hyper-plane which
corresponds to the genuine class. On the other hand, a positive
value denotes that the unknown input vector tends to fall
towards the impostor hyper-plane class [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. Finally, the
continuous SVM output models the overall distributions
of both the genuine writers and the impostors. The
selection of the training samples for the genuine class is
accomplished using random sampling with the hold-out
validation method.
        </p>
        <p>Evaluation of the verification efficiency of the system is
accomplished with the use of a global threshold on the overall
SVM output distribution. This is achieved by providing the
system’s False Acceptance Rate (FAR: samples not belonging
to genuine writers, yet assigned to them) and False
Rejection Rate (FRR: samples belonging to genuine writers,
yet not assigned to them) functions. With these two rates, the
receiver operating characteristic (ROC) curves are drawn by means of their
FAR / FRR plot. Then, classification performance is measured
with the system's Equal Error Rate (EER: the
point at which FAR equals FRR).</p>
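Given FAR and FRR curves sampled over a common threshold sweep, the EER can be located as sketched below; the function name and the crossing-point interpolation (averaging the two rates at the closest sample) are our assumptions.

```python
import numpy as np

def equal_error_rate(far, frr):
    """Locate the EER: the point on the threshold sweep where the
    False Acceptance Rate crosses the False Rejection Rate. Both
    rate arrays are assumed sampled at the same thresholds; the
    value returned averages the two rates at the closest sample."""
    far = np.asarray(far, dtype=float)
    frr = np.asarray(frr, dtype=float)
    i = np.argmin(np.abs(far - frr))   # index of the crossing point
    return (far[i] + frr[i]) / 2.0
```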
      </sec>
    </sec>
    <sec id="sec-2">
      <title>IV. RESULTS</title>
      <sec id="sec-2-1">
        <title>A. Benchmarking With Relative Feature Algorithms</title>
        <p>
          We have benchmarked the proposed methodology against
three other feature extraction methods for signature
verification and writer identification found in
the literature. The first is a texture-based signature verification
approach provided by Vargas, Ferrer, Travieso and
Alonso [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. Secondly, we examine the performance of
a shape descriptor proposed by Aguilar, Hermira, Marquez
and Garcia, which is based on the use of predetermined shape
masks [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. In all cases, the pre-processing as well as the
feature extraction steps have been realized according to the
descriptions given by the authors. The third method uses
the f1 contour-direction pdf features and the f2 contour-hinge
features, which are part of the work proposed by Bulaku and
Schomaker [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. It is of great interest that the f2 feature is one
of the most powerful descriptors for modelling
handwriting. It must be noted that an appropriate
preprocessing step has been carried out in order to provide the
contours of the handwritten images.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>B. Verification Results</title>
        <p>According to the material exposed in section III,
representation of the genuine class has been realized with
various schemes by utilizing 5, 10, 15, 20, 25 or 30 samples for
the {G_TW} training set and 115, 110, 105, 100, 95 or 90 samples
for the {G_TSW} testing set. On the other hand, the {F_TW} training set
for the forgery class has been formed using one sample from each of
the remaining writers, resulting in 39
samples. The {F_TSW} samples are formed by employing the
remaining 119 (samples per writer) × 39 writers, resulting in a
total of 4641. The ROC curves, which are drawn as a
function of the number of words and presented in Figs. 4-8,
illustrate the classification efficiency of our method against
those mentioned in the previous section. These curves have
been evaluated for the last training scheme, i.e. 30 and 90
samples for the {G_TW} training and {G_TSW} testing populations. Similar
results regarding the evaluation taxonomy have been obtained.</p>
        <p>Commenting on the results, it can easily be inferred that our
method provides a challenging, first-hand proof of concept of
its enhanced writer verification capabilities. Another
interesting observation is that the verification efficiency is enhanced
when the number of words inserted into the feature stage
increases, which is intuitively correct. An additional comment
is that the English sentence provides a better EER when
compared to the Greek sentence, even though Greek is our
native language. This might be due to the fact that the text
used in the English sentence incorporates lengthier words
when compared to the Greek one. Another explanation for the
enhanced Latin EER measure could be that when Greeks, or
individuals who do not have English as their native
language, are forced to write in Latin script, their response provides
less spontaneous handwritten samples. This may have
introduced less writer specificity in the data, which in turn
provides higher verification rates. Although the results are
quite encouraging, they must be further tested in
larger databases and under a number of different feature and
classification schemes. The best EER rates are summarized in Table I.</p>
        <table-wrap id="tbl1">
          <label>TABLE I</label>
          <caption>
            <p>Classification efficiency (%) based on the Equal Error Rate derived from Figs. 4-8, for growing sequences of words (1st / {1st &amp; 2nd} / {1st &amp; 2nd &amp; 3rd} / {1st &amp; 2nd &amp; 3rd &amp; 4th} / {all}).</p>
          </caption>
          <table>
            <thead>
              <tr>
                <th>Feature Extraction Method</th>
                <th>English Sentence</th>
                <th>Greek Sentence</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Proposed work</td>
                <td>15.53 / 6.05 / 5.92 / 4.90 / 4.08</td>
                <td>22.78 / 11.13 / 9.21 / 7.14 / 5.71</td>
              </tr>
              <tr>
                <td>Feature proposed by [<xref ref-type="bibr" rid="ref16">16</xref>]</td>
                <td>13.54 / 11.10 / 9.08 / 7.69 / 6.92</td>
                <td>15.04 / 12.29 / 10.99 / 9.76 / 8.96</td>
              </tr>
              <tr>
                <td>f1 feature proposed by [<xref ref-type="bibr" rid="ref18">18</xref>]</td>
                <td>29.81 / 21.06 / 19.46 / 18.41 / 14.12</td>
                <td>29.78 / 28.08 / 26.49 / 23.85 / 21.98</td>
              </tr>
              <tr>
                <td>f2 feature proposed by [<xref ref-type="bibr" rid="ref18">18</xref>]</td>
                <td>20.22 / 12.72 / 11.36 / 7.48 / 5.58</td>
                <td>26.55 / 17.72 / 17.57 / 12.41 / 10.82</td>
              </tr>
              <tr>
                <td>Feature proposed by [<xref ref-type="bibr" rid="ref17">17</xref>]</td>
                <td>28.95 / 28.19 / 24.64 / 19.07 / 16.90</td>
                <td>32.30 / 30.44 / 29.18 / 28.47 / 27.63</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Plamondon</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Srihari</surname>
          </string-name>
          ,
          <article-title>"On-line and off-line handwriting recognition: A comprehensive survey,"</article-title>
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          , vol.
          <volume>22</volume>
          , pp.
          <fpage>63</fpage>
          -
          <lpage>84</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G. X.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Viard-Gaudin</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Kot</surname>
          </string-name>
          ,
          <article-title>"Automatic writer identification framework for online handwritten documents using character prototypes,"</article-title>
          <source>Pattern Recognition</source>
          , vol.
          <volume>42</volume>
          , pp.
          <fpage>3313</fpage>
          -
          <lpage>3323</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I.</given-names>
            <surname>Siddiqi</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Vincent</surname>
          </string-name>
          ,
          <article-title>"Text independent writer recognition using redundant writing patterns with contour-based orientation and curvature features,"</article-title>
          <source>Pattern Recognition</source>
          , vol.
          <volume>43</volume>
          , pp.
          <fpage>3853</fpage>
          -
          <lpage>3865</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bhardwaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. O.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Govindaraju</surname>
          </string-name>
          ,
          <article-title>"Retrieving handwriting styles: A content based approach to handwritten document retrieval,"</article-title>
          <source>in Proc. International Conference on Handwriting Recognition</source>
          , Kolkata, India,
          <year>2010</year>
          , pp.
          <fpage>265</fpage>
          -
          <lpage>270</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Sim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Janakiraman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>"Continuous verification using multimodal biometrics,"</article-title>
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          , vol.
          <volume>29</volume>
          , pp.
          <fpage>687</fpage>
          -
          <lpage>700</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>U.-V.</given-names>
            <surname>Marti</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Bunke</surname>
          </string-name>
          ,
          <article-title>"The IAM-database: An English sentence database for off-line handwriting recognition "</article-title>
          <source>International Journal on Document Analysis and Recognition</source>
          , Vol.
          <volume>5</volume>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>46</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bulaku</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Schomaker</surname>
          </string-name>
          ,
          <article-title>"Forensic Writer Identification: A Benchmark Data Set and a Comparison of Two Systems"</article-title>
          ,
          <source>Technical Report</source>
          , NICI,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Srihari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Cha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Arora</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>"Individuality of handwriting"</article-title>
          ,
          <source>Journal of Forensic Science</source>
          , Vol.
          <volume>47</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Hanusiak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Oliveira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Justino</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Sabourin</surname>
          </string-name>
          ,
          <article-title>"Writer verification using texture-based features"</article-title>
          ,
          <source>International Journal on Document Analysis and Recognition</source>
          , DOI:10.1007/s10032-011-0166-4
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E. N.</given-names>
            <surname>Zois</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Anastassopoulos</surname>
          </string-name>
          ,
          <article-title>"Fusion of correlated decisions for writer verification,"</article-title>
          <source>Pattern Recognition</source>
          , vol.
          <volume>34</volume>
          , pp.
          <fpage>47</fpage>
          -
          <lpage>61</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>E. N.</given-names>
            <surname>Zois</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tselios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Siores</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nassiopoulos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Economou</surname>
          </string-name>
          ,
          <article-title>"Off-Line Signature Verification Using Two Step Transitional Features,"</article-title>
          in
          <source>Proc. 12th IAPR Conference on Machine Vision Applications</source>
          , Nara, Japan,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R. O.</given-names>
            <surname>Duda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. E.</given-names>
            <surname>Hart</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Stork</surname>
          </string-name>
          ,
          <article-title>Pattern Classification</article-title>
          , 2nd ed. New York: John Wiley and Sons,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] http://biosecure.it-sudparis.eu/AB/</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N.</given-names>
            <surname>Otsu</surname>
          </string-name>
          ,
          <article-title>"A threshold selection method from gray-level histograms"</article-title>
          ,
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          , vol.
          <volume>9</volume>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>66</lpage>
          ,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Lutz</given-names>
            <surname>Hamel</surname>
          </string-name>
          ,
          <article-title>Knowledge Discovery with Support Vector Machines</article-title>
          . New Jersey: Wiley,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Vargas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Ferrer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Travieso</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Alonso</surname>
          </string-name>
          ,
          <article-title>"Off-line signature verification based on grey level information using texture features"</article-title>
          ,
          <source>Pattern Recognition</source>
          , vol.
          <volume>44</volume>
          , pp.
          <fpage>375</fpage>
          -
          <lpage>385</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Aguilar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Hermira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Marquez</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. O.</given-names>
            <surname>Garcia</surname>
          </string-name>
          ,
          <article-title>"An off-line signature verification system based on fusion of local and global information"</article-title>
          ,
          <source>LNCS 3087</source>
          , pp.
          <fpage>295</fpage>
          -
          <lpage>306</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bulacu</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Schomaker</surname>
          </string-name>
          ,
          <article-title>"Text-independent writer identification and verification using textural and allographic features,"</article-title>
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          , vol.
          <volume>29</volume>
          , pp.
          <fpage>701</fpage>
          -
          <lpage>717</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>