<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Biometric authentication based on eye movements by using scan-path comparison algorithms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Carlos-Alberto Quintana-Nevárez</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francisco López-Orozco</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rogelio Florencia-Juárez</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>División Multidisciplinaria de la UACJ en Ciudad Universitaria</institution>
          ,
          <addr-line>Cd. Juárez, Chihuahua, México</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Quintana-Nevárez is an undergraduate student of the Software Engineering programme at UACJ</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <fpage>33</fpage>
      <lpage>38</lpage>
      <abstract>
        <p>This paper presents an approach to authenticating people by means of their eye movements. Our method is based on a simple scan-path comparison. Eye movements were recorded with an eye tracker while participants drew a personal identification number (PIN) on an on-screen numeric pad. The data were analyzed with the Eyenalysis algorithm, which measures the similarity of scan-paths by calculating and normalizing the distance in pixels for each point in the scan-paths. In a first experiment we obtained an average acceptance rate of 80% and a false acceptance rate under 25%. In a second experiment, each participant received prior training, and the best results were obtained with trained people. We are continuing this research with a new study in which variables such as fixation time and distance from the equipment are also considered.</p>
      </abstract>
      <kwd-group>
        <kwd>algorithms</kwd>
        <kwd>authentication</kwd>
        <kwd>biometrics</kwd>
        <kwd>eye movement</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>At present, information security is a very important issue when talking about digital devices. We have a constant concern that our information may be compromised. That is why techniques were developed to authenticate that a person is, in fact, who he claims to be.</p>
      <p>These mechanisms fall into three categories: something you
know (e.g., passwords, PINs, patterns), something you have
(e.g., magnetic cards, chips, keys) and something you are (e.g.,
body parts, voice, iris pattern).</p>
      <p>
        Belonging to the last category, biometric authentication
systems were born [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These are divided into several categories,
such as fingerprint recognition [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], facial recognition [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
voice recognition [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and iris recognition [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], all of which have
already been breached.
      </p>
      <p>
        Consider the case of the fingerprint [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which was breached using a
mold of the victim's fingerprint, or the case of the iris [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], breached
by decoding the binary code saved in the database and
reconstructing the iris from it. These are
the most trustworthy and secure biometric authentication
methods in existence, and they are examples of why we are
faced with the need to create a new method of biometric
authentication that is less vulnerable than the previous
ones. This is why the idea of authentication through eye
movements via an eye tracker is presented here.
      </p>
      <p>
        Because privacy is an issue that cannot be
ignored, especially when we talk about the information
we store in our mobile devices, it is necessary to develop
and improve the existing authentication methods, or to
create a new one, since the existing ones have proven to be
vulnerable, as in the case of the Android unlock pattern [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]
and the numeric PIN on any mobile device with a camera and
a microphone [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In view of this need, it was decided
to create this new method based on eye movements. Through
an eye tracker, it recognizes the movements of a
person in response to a stimulus, which could be a static image,
a pad or a simple text, and then matches them with a record
previously stored in a database. A general background
on biometric authentication and on people-recognition methods based
on eye movements is presented. A theoretical
framework covers previous works in the area
of eye movement, such as that of Hermens
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which explains how social stimuli affect the
attention people pay when visualizing an object.
      </p>
      <p>
        Halverson [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] discusses how to clean up the systematic error
introduced by eye-tracking devices when analyzing the data they
provide.
      </p>
      <p>
        In our work, we propose a prototype algorithm for
the authentication of users based on eye movements, using the
comparison of ocular movement patterns proposed by
Mathôt and Cristino [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. This algorithm measures the
Euclidean distance for each of the points in a scan-path,
sums these distances and normalizes the sum by dividing it by
the number of elements in the scan-path. This distance is
expressed in pixels and represents the difference between
two scan-paths. Two experiments were conducted in which we
captured the eye movements of 10 participants on an on-screen numeric
pad with the numbers 0 to 9. Participants were asked to create a numeric password
of 4 to 6 digits as an identification key. Several
captures with the eye tracker were made and then
analyzed to find a relation between the captures of
each participant.
      </p>
      <p>
        Several works and studies related to this subject have
been proposed. These works propose different methods to
achieve the authentication or identification of people based
on eye tracking. For example, the work proposed by
Komogortsev uses complex characteristics of ocular behavior
to identify a person [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Another of their reported works
uses geometric characteristics of the eye shape to perform
an identification [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. In collaboration with Holland, they
analyzed the influence of the environment and the stimuli
given at the time of authentication [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. They also
conducted an eye-tracking experiment on a common tablet
to authenticate a person using only a webcam [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. At
the International Conference on Applied Cryptography and
Network Security, Liu proposed a method for smartphone
authentication that consisted of displaying 4 objects on the
mobile phone screen and randomly spreading them to be
followed by the eye movements of the participants [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
There, a simple linear regression algorithm was proposed as
a method to identify people.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], ImagePass is proposed and tested as a graphical
authentication system based on pattern recognition. It
compares the vision patterns of the system with patterns
identified in other studies. In [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], the possibility of creating a
secure and usable authentication system via eye tracking for
smartphone technology was also analyzed. This proposal,
called EyeVeri, uses the mobile front camera and pattern-matching
algorithms to verify that a person is who he
claims to be by presenting users with different kinds of stimuli.
      </p>
      <p>
        Security is a really important factor in contemporary
systems, since some of the existing authentication methods
are vulnerable. For example, the fingerprint method was
breached several years ago [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Antti Stén, Antti Kaseva and
Teemupekka Virtanen, from the Department of
Telecommunications, Software and Multimedia at the University of
Helsinki, explain that “Typically, a human finger contains
a lot of fat that leaves a mark that is not visible where
it touches, therefore, it generally leaves a clear mark also
on the scanner. This stain can be made visible in many
ways and even a mere breath can show the impression very
clearly. The scheme is to use this stain: breathe on the scanner
and make it think that there is a live finger
pressed against its pad. A variation of this idea is to use
a finger-like substance that has a flat surface and press it
against the pad leaving the fat stain below it” [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Although
modern scanners also check whether the finger presented is a
living one, it is still easy to circumvent
this protection, as already mentioned. One of the
techniques used in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] is to create a finger from a mold
and gelatin, which has the same conductivity as a human
finger.
      </p>
      <p>
        Iris recognition was also breached in recent times,
as explained in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], where a method was proposed to
reconstruct an iris from the binary information that is saved
when registering it in the system. Other authentication
methods, such as the numeric PIN and the unlock pattern
for Android, were breached using techniques based on the
microphone and camera for the PIN [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and an attack where
the grease left by the fingers was used to identify the unlock
pattern for Android [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
That is why this project was undertaken. The
principal objective is to find out whether authentication via eye tracking is
more robust or less vulnerable than the methods mentioned above.
      </p>
    </sec>
    <sec id="sec-2">
      <title>PROPOSED METHOD</title>
      <p>In this research, a custom software program displaying an
on-screen numeric pad (Figure 1) was developed. Users
draw a numeric password with their eyes on
the pad while their eye movements are captured by an
eye tracker, and a comparison of the captured scan-paths is
then performed.</p>
      <p>
        For comparison purposes, various pattern recognition
and similarity measures were used, but here we discuss
the one that gave the most interesting results in our experiments:
the Eyenalysis algorithm proposed by Mathôt &amp; Cristino
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      </p>
      <p>The method followed is described in the next algorithm.
It shows the process performed to capture the data that is
analyzed after the experimental phase.</p>
      <preformat>Experiment starts
Computer starts calibration process
if UserStaresAtFixationCross = true then
    numericPad = true
    user draws his PIN with the eyes
    experiment ends
else
    numericPad = false
    waiting for user
end if</preformat>
      <p>Ten users were asked to create a 4-to-6 digit PIN (e.g. 2704)
and follow it with their eyes. Subsequently, the numeric pad
appears as shown in Figure 1 and the participant must draw
the numeric PIN with the eyes.</p>
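The capture flow above can be sketched in Python. This is only an illustration of the control flow; the function and variable names are ours, and the actual calibration, display and eye-tracker calls are stubbed out as comments.

```python
def run_trial(user_stares_at_fixation_cross: bool) -> bool:
    """One capture trial of the experiment, mirroring the pseudocode above.

    Returns True when the numeric pad was shown and the PIN capture ran;
    the calibration and eye-tracker calls are outside this sketch.
    """
    # Computer starts the calibration process (handled by the
    # manufacturer's software in the paper); here we assume it succeeded.
    if user_stares_at_fixation_cross:
        numeric_pad = True   # display the on-screen numeric pad
        # ... user draws his PIN with the eyes; tracker records ...
        # experiment ends
    else:
        numeric_pad = False  # keep waiting for the user
    return numeric_pad
```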
      <p>For the purposes of this experiment, a prior
calibration was done in order to obtain the best possible results
in the data capture. This calibration step was performed with the
calibration software developed by the eye tracker
manufacturer. The computer screen shown in Figure 2 displays
the image of the calibration software.</p>
      <p>The error given by the calibration step is an average of the
error over all nine points displayed on the calibration
screen.</p>
      <sec id="sec-2-1">
        <title>Second experiment</title>
        <p>For the second experiment, 3 users were asked to do
the same process as in the previous experiment, with the
difference that this time the first user had no training
at all. The second participant had experience with eye tracking
because of participation in previous experiments. The third
participant had extensive training on eye tracking through
previous experiments and a process in which he was asked
to read and visualize images focusing on specific parts of
them.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>DATA ANALYSIS</title>
      <p>
        In both experiments, a total of 4 scan-paths were
kept from each participant, after a training phase in which 5
scan-paths per participant were captured. The last 4 scan-paths
were used as the data to be compared in the data analysis.
Data was analyzed with the Eyenalysis algorithm [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
This algorithm was selected because we found in the literature
that it has many applications in the field of scan-path comparison
and we could adapt it to our work.
      </p>
      <sec id="sec-3-1">
        <title>First experiment</title>
        <p>
          The Eyenalysis algorithm was proposed by Mathôt &amp; Cristino
[
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. It measures the similarity between two scan-paths
by computing, point by point, the Euclidean distance
d(p, q) = sqrt(sum_i (p_i - q_i)^2),
where p and q are points in the scan-paths. After that, the
distances are normalized by dividing their sum
by the maximum number of points of the two
scan-paths:
        </p>
        <p>D(S, T) = (sum_{i=1}^{n_S} d_i^S + sum_{j=1}^{n_T} d_j^T) / max(n_S, n_T),
where S and T are the scan-paths to be compared and d is the
distance calculated beforehand for each point in the
scan-path.</p>
        <p>This method gives good results based on a distance
measured in pixels. The smaller the distance, the more
similar one scan-path is to the other; the authors of
the algorithm state that around 100 px is considered a good
measure of similarity.</p>
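A minimal sketch of this distance in Python, assuming each scan-path is a list of (x, y) fixation points in pixels. Each point is paired with the nearest point of the other scan-path, a simplification of the full double-mapping procedure in [12], and the summed distances are normalized as in D(S, T) above.

```python
import math

def eyenalysis_distance(S, T):
    """Normalized scan-path dissimilarity in pixels.

    Each point of S is paired with its nearest point in T (and vice
    versa); the summed Euclidean distances are divided by the length
    of the longer scan-path, as in the formula D(S, T).
    """
    def nearest(p, path):
        # Euclidean distance from point p to the closest point of `path`
        return min(math.dist(p, q) for q in path)

    total = sum(nearest(p, T) for p in S) + sum(nearest(q, S) for q in T)
    return total / max(len(S), len(T))
```

Identical scan-paths give a distance of 0 px; under the authors' criterion, values around 100 px or below indicate a good match.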
        <p>For the intruder acceptance rate, we asked a participant
to enter the same password as another participant, telling him
only to imitate the password in the correct numeric
order, but never telling him how long he should dwell on
each number.</p>
        <p>In Figure 3 we can see more clearly how
the acceptance rate for an intruder was much lower than that of
the person trying to authenticate himself.</p>
        <p>
          This algorithm gives a good way to approach
authentication based on eye movements. Of the 10
participants in the experiment, 8 were accepted, giving us
an 80% acceptance rate based on the criterion proposed by the
authors of the algorithm, who state that 100 px is considered
a good similarity measure; we add 200 px more to give
the users a threshold for committing errors.
A second experiment was run, in which 5 participants were
recruited and asked to follow the method described before
of drawing a numeric PIN with their eyes. The difference in
this experiment is that the distance is normalized according to the
Eyenalysis algorithm [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], and that we had
participants with different levels of training. For example, the
first participant had no idea what eye tracking was, as can be
seen from his performance in Figure 4, where he obtained a 0%
acceptance rate based on the previous criterion.
        </p>
        <p>The last participant in this experiment had much more
experience working with eye-tracker technology than the
other ones. This participant was invited for the last
experiment, had also contributed to other experiments involving
eye tracking, and had read articles about eye tracking before.
Results with this participant are shown in Figure 6, with an
83.33% acceptance rate.</p>
        <p>The second participant had been trained beforehand by being
recruited for the first experiment, so he had a little
experience with eye tracking and the proposed method. With this
participant we obtained the results in Figure 5, a
16.66% acceptance rate.</p>
        <p>We can notice that the participant with more training
achieves better acceptance with both algorithms and with a
lower error rate, while the participant with less training
performs at a good average. Finally, the participant with no
training at all obtained the worst results.
In the experimentation process, we found that both methods
gave good results, but they do not have a direct point of
comparison, because the Eyenalysis algorithm represents the
normalized distance between all the points in a scan-path,
while the linear correlation represents the accuracy rate of one
scan-path compared to another, comparing point by point on
the x axis and then on the y axis, leaving no way to make
a direct comparison. They are different metrics for measuring
similarity.</p>
        <p>In the case of the linear correlation, we can say that the
results are affected by the calibration step and the lighting conditions.
Here, the false acceptance rate is measured by comparing
the scan-path of the real user with the scan-path of another
user.</p>
        <p>On the other hand, the intruder distance with the Eyenalysis
method is measured by testing with a different person who is
told the password of the real user and then tries
to follow his scan-path.</p>
        <p>Results are shown in Figure 5, where we can see that the
distance in pixels for the real user is much smaller than for the
intruder, so we can deduce that a user can be given a threshold
for committing errors.</p>
        <p>Even considering that threshold, the best distance reached
by the intruders is still much higher than the worst distance
reached by the real user, giving this way of authentication
a considerable degree of security.</p>
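The accept/reject rule implied by this threshold can be sketched as follows. The constant and function names are ours: they combine the Eyenalysis authors' 100 px similarity criterion with the extra 200 px error margin used in this paper.

```python
GOOD_SIMILARITY_PX = 100.0  # similarity criterion from the Eyenalysis authors [12]
ERROR_MARGIN_PX = 200.0     # additional error allowance used in this work

def authenticate(distance_px: float) -> bool:
    """Accept an attempt when the Eyenalysis distance between the fresh
    scan-path and the enrolled one is within the similarity criterion
    plus the error margin (i.e. at most 300 px)."""
    return distance_px <= GOOD_SIMILARITY_PX + ERROR_MARGIN_PX
```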
        <p>
          For the second experiment, a modification to the
Eyenalysis [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] algorithm was made.
        </p>
        <p>This modification consists of dividing the distance given in
pixels by 10, further normalizing the obtained results. This
simple modification leads to a way of comparing the results
of both algorithms, as shown in Figures 4, 5 and 6, where
we can see how training plays an important role in our
method.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>CONCLUSION &amp; FUTURE WORK</title>
      <p>
        In conclusion, the Eyenalysis [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] algorithm gives a good degree
of security and an almost null rate of false positive
results, which is what we are trying to reach to ensure that the proposed
method can be more secure than other biometric
methods (e.g. fingerprint or iris recognition).
      </p>
      <p>
        In the case of Simple Linear Regression [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], we got good
results, but not as good as with the Eyenalysis [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] algorithm.
However, Eyenalysis is more computationally complex and
slower than Simple Linear Regression (SLR) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      </p>
      <p>In future work, we plan to take into account
more variables, such as fixation time and a specific set of fixations
for each eye, trying to find a relation between fixation
times and the level of security, and to improve the
algorithm's performance with simpler algorithms. It is important
to evaluate our method under different experimental
conditions in order to ensure that it is accurate in every kind
of situation.</p>
    </sec>
    <sec id="sec-5">
      <title>ACKNOWLEDGMENTS</title>
      <p>The first author would like to thank LabTEC2 for providing
the necessary equipment to run the experiments.</p>
      <p>Alberto Quintana-Nevárez is an undergraduate software
engineering student at UACJ in México.
Quintana-Nevárez has participated in projects at
LabTEC2. His research focuses on information
security, secure app development, social
computing and eye movement.</p>
      <p>Francisco López-Orozco, PhD. Since 2015 he has
been an associate professor in the Software and
Computer Systems Engineering undergraduate
programs at UACJ. He obtained his PhD from
the University of Grenoble, France, in 2013. He
is a co-founder and permanent member of the
Laboratory of Emerging Technologies in
Computer Science (LabTEC2) at UACJ. His research
focuses on computational cognitive and
experimental psychology, human-computer
interaction and user-centered design.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wayman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maltoni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Maio</surname>
          </string-name>
          , An Introduction to Biometric
          <source>Authentication Systems</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          . London: Springer London,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>van der Putte</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Keuning</surname>
          </string-name>
          ,
          <article-title>Biometrical Fingerprint Recognition: Don't get your Fingers Burned</article-title>
          , pp.
          <fpage>289</fpage>
          -
          <lpage>303</lpage>
          . Boston, MA: Springer US,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chellappa</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Krishnaswamy</surname>
          </string-name>
          , “
          <article-title>Discriminant analysis of principal components for face recognition</article-title>
          ,”
          <source>in Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition</source>
          , pp.
          <fpage>336</fpage>
          -
          <lpage>341</lpage>
          ,
          <year>Apr 1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ross</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Prabhakar</surname>
          </string-name>
          , “
          <article-title>An introduction to biometric recognition,” IEEE Transactions on Circuits and Systems for Video Technology</article-title>
          , vol.
          <volume>14</volume>
          , pp.
          <fpage>4</fpage>
          -
          <lpage>20</lpage>
          ,
          <year>Jan 2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Wildes</surname>
          </string-name>
          , “
          <article-title>Iris recognition: an emerging biometric technology</article-title>
          ,
          <source>” Proceedings of the IEEE</source>
          , vol.
          <volume>85</volume>
          , pp.
          <fpage>1348</fpage>
          -
          <lpage>1363</lpage>
          ,
          <year>Sep 1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Stén</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kaseva</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Virtanen</surname>
          </string-name>
          , “
          <article-title>Fooling fingerprint scanners - biometric vulnerabilities of the precise biometrics 100 sc scanner</article-title>
          ,”
          <source>in Proceedings of 4th Australian Information Warfare and IT Security Conference</source>
          , pp.
          <fpage>333</fpage>
          -
          <lpage>340</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Galbally</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gomez-Barrero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fierrez</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Ortega-Garcia</surname>
          </string-name>
          , “
          <article-title>From the iriscode to the iris: A new vulnerability of iris recognition systems</article-title>
          ,” Black Hat Briefings USA,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Aviv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Gibson</surname>
          </string-name>
          , E. Mossop,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blaze</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Smith</surname>
          </string-name>
          , “
          <article-title>Smudge attacks on smartphone touch screens</article-title>
          .,
          <source>” Woot</source>
          , vol.
          <volume>10</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Simon</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Anderson</surname>
          </string-name>
          , “
          <article-title>Pin skimmer: Inferring pins through the camera</article-title>
          and microphone,”
          <source>in Proceedings of the Third ACM workshop on Security and privacy in smartphones &amp; mobile devices</source>
          , pp.
          <fpage>67</fpage>
          -
          <lpage>78</lpage>
          , ACM,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Hermens</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Walker</surname>
          </string-name>
          , “
          <article-title>Do you look where i look? attention shifts and response preparation following dynamic social cues</article-title>
          ,
          <source>” Journal of Eye Movement Research</source>
          , vol.
          <volume>5</volume>
          , no.
          <issue>5</issue>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Hornof</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Halverson</surname>
          </string-name>
          , “
          <article-title>Cleaning up systematic error in eye-tracking data by using required fixation locations</article-title>
          ,”
          <source>Behavior Research Methods, Instruments, &amp; Computers</source>
          , vol.
          <volume>34</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>592</fpage>
          -
          <lpage>604</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mathôt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cristino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. D.</given-names>
            <surname>Gilchrist</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Theeuwes</surname>
          </string-name>
          , “
          <article-title>A simple way to estimate similarity between pairs of eye movement sequences</article-title>
          ,
          <source>” Journal of Eye Movement Research</source>
          , vol.
          <volume>5</volume>
          , no.
          <issue>1</issue>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>O. V.</given-names>
            <surname>Komogortsev</surname>
          </string-name>
          and
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Holland</surname>
          </string-name>
          , “
          <article-title>Biometric authentication via complex oculomotor behavior</article-title>
          ,” in
          <source>Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          , IEEE,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>O. V.</given-names>
            <surname>Komogortsev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Karpov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. R.</given-names>
            <surname>Price</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Aragon</surname>
          </string-name>
          , “
          <article-title>Biometric authentication via oculomotor plant characteristics</article-title>
          ,” in
          <source>Biometrics (ICB), 2012 5th IAPR International Conference on</source>
          , pp.
          <fpage>413</fpage>
          -
          <lpage>420</lpage>
          , IEEE,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Holland</surname>
          </string-name>
          and
          <string-name>
            <given-names>O. V.</given-names>
            <surname>Komogortsev</surname>
          </string-name>
          , “
          <article-title>Biometric verification via complex eye movements: The effects of environment and stimulus</article-title>
          ,” in
          <source>Biometrics: Theory, Applications and Systems (BTAS), 2012 IEEE Fifth International Conference on</source>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>46</lpage>
          , IEEE,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.</given-names>
            <surname>Holland</surname>
          </string-name>
          and
          <string-name>
            <given-names>O.</given-names>
            <surname>Komogortsev</surname>
          </string-name>
          , “
          <article-title>Eye tracking on unmodified common tablets: challenges and solutions</article-title>
          ,” in
          <source>Proceedings of the Symposium on Eye Tracking Research and Applications</source>
          , pp.
          <fpage>277</fpage>
          -
          <lpage>280</lpage>
          , ACM,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Gao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          , “
          <article-title>Exploiting eye tracking for smartphone authentication</article-title>
          ,” in
          <source>International Conference on Applied Cryptography and Network Security</source>
          , pp.
          <fpage>457</fpage>
          -
          <lpage>477</lpage>
          , Springer,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Marija</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Sime</surname>
          </string-name>
          , “
          <article-title>Eye tracking recognition-based graphical authentication</article-title>
          ,” in
          <source>Application of Information and Communication Technologies (AICT), 2013 7th International Conference on</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          , IEEE,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>C.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ren</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Xu</surname>
          </string-name>
          , “
          <article-title>EyeVeri: A secure and usable approach for smartphone user authentication</article-title>
          ,” in
          <source>Computer Communications, IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          , IEEE,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>