<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Saliency Model Predicts Fixations in Web Interfaces</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jeremiah D. Still</string-name>
          <email>jstill2@missouriwestern.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christopher M. Masciocchi</string-name>
          <email>cmascioc@iastate.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Iowa State University, Department of Psychology</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Missouri Western State University, Department of Psychology</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>25</fpage>
      <lpage>28</lpage>
      <abstract>
        <p>User interfaces are visually rich and complex. Consequently, it is difficult for designers to predict which locations within a display will be attended to first. Designers currently depend on eye tracking data to determine fixated locations, which are naturally associated with the allocation of attention. A computational saliency model can predict where individuals are likely to fixate. Thus, we propose that the saliency model may facilitate successful interface development during the iterative design process by providing information about an interface's stimulus-driven properties. To test its predictive power, the saliency model was run on 50 web page screenshots; eye tracking data were gathered from participants viewing the same images. We found that the saliency model predicted fixated locations within web page interfaces. Thus, using computational models to identify regions high in visual saliency during web page development may be a cost-effective alternative to eye tracking.</p>
      </abstract>
      <kwd-group>
        <kwd>Saliency</kwd>
        <kwd>Interface Development</kwd>
        <kwd>Design</kwd>
        <kwd>Model</kwd>
        <kwd>Search</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
Some visual designs guide users to the locations of
important information, while others mislead users. Visual
saliency, inherent in a complex interface, cues users to
certain spatial regions over others. If employed correctly by
designers, salient cues may reduce information search times
and facilitate task completion [cf. 18] by implicitly
communicating to users where they ought to start their
visual search [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].</p>
      <p>Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise,
or republish, requires prior specific permission and/or a fee.
Pre-proceedings of the 5th International Workshop on Model Driven Development of
Advanced User Interfaces (MDDAUI 2010): Bridging between User Experience and
UI Engineering, organized at the 28th ACM Conference on Human Factors in
Computing Systems (CHI 2010), Atlanta, Georgia, USA, April 10, 2010.
Copyright © 2010 for the individual papers by the papers' authors. Copying
permitted for private and academic purposes. Re-publication of material from
this volume requires permission by the copyright owners. This volume is
published by its editors.</p>
      <p>In order to be considered salient, a
feature must be visually unique relative to its surroundings.
For example, text that is underlined amongst
non-underlined text “pulls” the reader’s attention to it. However,
many interfaces, like web pages, are rich with visual media,
such as text, pictures, logos and bullets, making the
determination of salient features a complicated task. Given
this complexity, designers are often left making best
guesses about which spatial regions are salient within an
interface. Previous research on visual search in web pages
defines entry points as regions within a page where users
typically begin their visual search. In this article, we will
argue that these entry points are heavily influenced by
visual saliency, that is, users will often begin searching web
pages at the location of highest saliency. In related research
examining cognitive processing, these implicit, low-level
cues that guide a viewer’s visual search are referred to as
stimulus-driven properties: certain characteristics of the
stimulus quickly “drive”, or direct, attention to certain
locations over others. Currently, no consensus has been
reached as to which visual characteristics, or stimulus-driven
properties, make for effective entry points.
        </p>
      <p><bold>Measuring Overt Attention through Eye Tracking</bold></p>
      <p>Given the overabundance of visual information in our
environments and our working memory limitations,
attention must be selective, admitting only a limited amount
of information into consciousness, for our cognitive system
to function properly [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. It has been suggested that the
programming of eye movements has a direct and natural
relationship with visual attention in that attention is often
directed to whichever item is fixated [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Only information
that falls directly on the fovea during a fixation is encoded
with high resolution, and only a limited amount of this
high-resolution information is processed, while the rest
rapidly decays [see 4]. Thus, it is critical that users fixate on
relevant visual information, or that content will not reach
users' awareness.
      </p>
      <p>
        It is no surprise then, that designers often monitor eye
movements to evaluate a web page’s saliency, or entry
points. Eye tracking systems allow designers to test whether
their web pages actually guide users' fixations to important
locations. However, eye tracking has a number of
recognized costs. Eye tracking systems are often expensive,
not easily accessible, and time-consuming to employ, and they
gradually lose calibration [
        <xref ref-type="bibr" rid="ref1 ref2 ref7 ref15">1, 2, 7, 15</xref>
        ].
      </p>
      <p>
<bold>Stimulus- and Goal-Driven Searches</bold></p>
      <p>In this article we investigate the influence of
stimulus-driven saliency on attention within the context of a web
page. Stimulus-driven saliency guides attention quickly and
without explicit intention, thus some might question its role
during a purposeful search on a web page. There is ample
evidence to suggest that goals do influence the guidance of
attention. For example, web page eye tracking research has
shown that changing the task (or goal) during a search, or
seeking navigational or informational indicators, changes
observers’ fixation patterns [<xref ref-type="bibr" rid="ref3">3</xref>]. Additional research has
shown that, given enough time, expectations can produce
consistent fixation patterns, such as an F-shaped pattern or
reading-order patterns (e.g., left-to-right/top-to-bottom) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. However, these
goal-driven effects interact with stimulus-driven effects,
making the stimulus-driven influences more difficult to
examine [cf. 11]. Also, users often spend only a few
seconds on a web page (even with a goal in mind), which
makes an understanding of stimulus-driven processing,
believed to influence attention very rapidly, critical.
For instance, when searching for information, observers
often skim only approximately 18 words, and spend 4 to 9
seconds, per web page [
<xref ref-type="bibr" rid="ref2 ref12">2, 12</xref>
        ]. One way to
investigate the pure influence of stimulus-driven guidance
is to use a computational saliency model designed to
predict which properties or features of a complex scene,
such as a web page, attention ought to select.</p>
      <p><bold>Predicting Fixations through a Saliency Model</bold></p>
      <p>Visually salient items often draw observers' attention. To
better understand the influence of saliency, or
stimulus-driven selection, on attention, Koch and Ullman (1985)
developed a model to compute an image's visual saliency
without any semantic input (i.e., meaning of objects). Their
model is based on the assumption that eye movement
programming is driven by local image contrast leading to
logical serial searches through complex spatial
environments. These serial searches are guided by low level
primitives extracted from a scene. The saliency model was
developed under the premise that low-level visual features
(i.e., color, light intensity, orientation) are processed
pre-attentively in humans and, in turn, rapidly influence overt
attention. Thus, the underlying assumption is that visual
saliency is used to guide the fovea to unique areas within a
scene that might provide the most efficient processing [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
The computational model is implemented on a computer
using digital pictures as stimuli to produce a pre-attentional
or “saliency” map [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. To create a saliency map, the model
receives input from pixels within a digital picture. Then, it
extracts three feature channels – color, intensity, orientation
– at eight different spatial scales. These three channels are
normalized and differences of center-surround are
calculated for each separate channel. The separate channels
are additively combined to form a single saliency map. An
image's saliency map provides predictions of where spatial
attention should be deployed [for detailed explanations
refer to 6, 13]. In essence, the model makes predictions
about which regions in an image have the most and least
likely chance of being attended based purely on
stimulus-driven properties. The saliency model is available for
download from &lt;SaliencyToolbox.net&gt; as a collection of
Matlab functions and scripts [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
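<p>The pipeline just described can be illustrated with a deliberately simplified sketch. The code below is our own illustration, not the Itti-Koch implementation; the function name and the box-filter blur are assumptions. It computes center-surround contrast on a single intensity channel, whereas the full model combines color, intensity, and orientation channels over eight spatial scales:</p>

```python
import numpy as np

def toy_saliency_map(rgb, surround_k=4):
    """Illustrative center-surround contrast on an intensity channel.

    A simplified sketch only: the full model uses Gaussian pyramids over
    color, intensity, and orientation channels at eight spatial scales.
    """
    intensity = np.asarray(rgb, dtype=float).mean(axis=2)  # color -> intensity

    def box_blur(img, k):
        # Separable box filter with edge padding (stands in for Gaussian blur).
        pad = np.pad(img, k, mode='edge')
        out = sum(np.roll(pad, s, axis=0) for s in range(-k, k + 1)) / (2 * k + 1)
        out = sum(np.roll(out, s, axis=1) for s in range(-k, k + 1)) / (2 * k + 1)
        return out[k:-k, k:-k]

    center = box_blur(intensity, 1)             # fine scale
    surround = box_blur(intensity, surround_k)  # coarse scale
    contrast = np.abs(center - surround)        # center-surround difference
    return contrast / (contrast.max() + 1e-9)   # normalize to [0, 1]
```

<p>A lone bright pixel on a dark page, for example, yields its highest contrast value at and around that pixel, which is the intuition behind "unique relative to its surroundings."</p>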
      <p><bold>Testing a Saliency Model within Web Pages</bold></p>
      <p>Designers recognize the need to predict and identify where
users’ attention will be guided on a web page. For example,
it is well known that one should avoid using poor designs
that increase the likelihood of users missing important
interface features such as branding, navigational or
informational symbols. However, using an eye tracking system to
monitor the guidance of attention, as is traditional, can be
expensive, difficult to employ, and time-consuming within
the context of a practical iterative design process. Thus, we
investigated the utility of a computational saliency model in
predicting the guidance of attention in web page
screenshots. This new method is benchmarked and
compared to another set of data in which participants’ eye
movements were tracked while they viewed the same web
page screenshots.</p>
    </sec>
    <sec id="sec-2">
      <title>METHOD</title>
      <p><bold>Participants</bold></p>
      <p>The data from eight undergraduate participants were
examined. All participants reported extensive web site
experience.</p>
      <p><bold>Stimuli and Equipment</bold></p>
      <p>The images were 50 screenshots of various web pages.
Each participant saw each screenshot only once.</p>
      <p>Participants' eye movements were recorded by an ASL eye
tracker with a sampling rate of 120 Hz. Screenshots were
shown on a Samsung LCD monitor, which had a viewing
area of approximately 38.0 cm × 30.0 cm. A chin rest
maintained a viewing distance of approximately 80 cm.
Images subtended approximately 26.7° × 21.2° of visual angle.</p>
      <p><bold>Procedure</bold></p>
      <p>Participants first read and signed an informed consent
document, and were then seated in front of the monitor with
their chin in the chin rest. The experiment began and
concluded with a 9-point calibration sequence to calibrate
the eye tracker and estimate the amount of tracking error.
Participants were told that they would view a series of web
page screenshots, and that they should, "look around the
image like you normally would if you were surfing the
internet." A fixation cross was presented at the center of the
screen to signal the beginning of a trial. After a delay of
approximately 1 second, a randomly selected web page
screenshot was presented for 5 seconds. The fixation cross
then reappeared to signal the beginning of the next trial.
The experiment took approximately … minutes to
complete.</p>
      <p>Saliency maps were created using the algorithms developed
by Itti, Koch, and Niebur (1998). The model was run on
each image individually and the output was normalized by
dividing all values by the maximum value for that map, and
multiplying all values by 100. To simplify data analysis, the
size of the saliency maps was increased to match the
size of the screenshots (1024 × 768 pixels). As
described in the Introduction, these saliency maps are 2-D
representations of areas in the screenshot that show the
relative saliency of locations in the image. Figure 1 shows
an example of two web page screenshots and their
corresponding saliency maps. Low values (dark areas in the
image) indicate regions of the image that are low in
saliency, while high values (light areas in the image)
indicate regions high in saliency.</p>
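<p>The normalization and resizing step can be sketched as follows (a hypothetical NumPy helper; nearest-neighbor upsampling stands in for whatever resizing method was actually used):</p>

```python
import numpy as np

def normalize_saliency_map(sal_map, out_shape=(768, 1024)):
    """Scale a raw saliency map to 0-100 and resize it to the
    screenshot resolution, given as (rows, cols)."""
    sal = np.asarray(sal_map, dtype=float)
    sal = sal / sal.max() * 100.0  # divide by the map maximum, scale to 0-100
    # Nearest-neighbor upsampling: pick the source row/column closest to
    # each output position so the map matches the 1024 x 768 screenshot.
    rows = np.linspace(0, sal.shape[0] - 1, out_shape[0]).round().astype(int)
    cols = np.linspace(0, sal.shape[1] - 1, out_shape[1]).round().astype(int)
    return sal[np.ix_(rows, cols)]
```

<p>After this step, each screenshot pixel has a directly comparable saliency value between 0 and 100.</p>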
    </sec>
    <sec id="sec-3">
      <title>RESULTS</title>
      <p>We used a technique similar to that of Parkhurst, Law, and Niebur
(2002) to determine whether salient regions in web pages
were fixated more often than would be expected by chance.
Specifically, the values of the saliency map at the location
of each participant's first ten fixations were extracted. For
example, the x, y coordinates of the first fixation for each
participant were determined for every screenshot, and the
value at the same location in the corresponding saliency
map was extracted. This process was repeated for fixations
two through ten. These values formed the Observed
Distribution of participant responses (Figure 2).</p>
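<p>Building the Observed Distribution amounts to indexing each saliency map at the recorded fixation coordinates. The helper below is a sketch with hypothetical data structures, not the original analysis code:</p>

```python
def observed_distribution(fixations, saliency_maps, n_fix=10):
    """Collect the saliency-map value at each of the first n_fix fixations.

    fixations[i]     -- list of (x, y) pixel coordinates for screenshot i
    saliency_maps[i] -- 2-D saliency map (indexed [y][x]) for screenshot i
    """
    values = []
    for fixs, sal_map in zip(fixations, saliency_maps):
        for x, y in fixs[:n_fix]:
            values.append(sal_map[y][x])  # map value at the fixated location
    return values
```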
      <p>To determine the likelihood that salient regions would be
fixated by chance, we repeated the process used to find the
Observed Distribution after rearranging the fixations and
saliency maps for all screenshots. For example, the values
from the saliency map for screenshots 2 to 50 were
extracted at the fixated locations from screenshot 1. The
saliency values of all other screenshots were extracted at
the location of the first ten fixations for all subjects for each
screenshot. These values formed the Shuffled Distribution.
The method used to create this distribution controls for
spatial biases that may inflate correlations between
fixations and salient regions. If the values of the Shuffled
Distribution are larger than those of the Observed
Distribution, it would indicate that participants fixated on
regions that are lower in saliency than what is expected by
chance. If, however, the values of the Observed
Distribution are larger than those in the Shuffled
Distribution, it would indicate that participants fixated
regions that are higher in saliency than what is expected by
chance.</p>
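<p>The Shuffled Distribution pairs each screenshot's fixations with the saliency maps of all other screenshots; a sketch with the same hypothetical data structures:</p>

```python
def shuffled_distribution(fixations, saliency_maps, n_fix=10):
    """Chance baseline: read each screenshot's fixated locations from the
    saliency maps of every *other* screenshot. Because real fixation
    locations are reused, overall spatial biases (e.g., a tendency to look
    near the top-left) are preserved in the baseline."""
    values = []
    for i, fixs in enumerate(fixations):
        for j, sal_map in enumerate(saliency_maps):
            if i == j:
                continue  # skip the matching screenshot
            for x, y in fixs[:n_fix]:
                values.append(sal_map[y][x])
    return values
```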
      <p>Figure 2 shows the means for the Observed and Shuffled
Distributions of the first ten fixations for each screenshot.
An analysis of variance was conducted with fixation
number (1-10) as a within-subjects variable and distribution
(observed, shuffled) as a between-subjects variable, to
determine whether any differences between the
distributions varied as a function of fixation number. The
main effect of fixation number was reliable, F(9, 882) =
6.39, MSE = 19.03, p &lt; .001. Pairwise comparisons
revealed that the values for the first fixation were higher
than all other values, and that the values of the tenth
fixation were lower than all other values. This indicates that
early fixations tend to occur at regions of higher salience
than those of later fixations. More importantly, the main
effect of distribution was also reliable, F(1, 98) = 4.86,
MSE = 397.95, p &lt; .05, indicating that the values of
Observed Distribution were larger than those of the
Shuffled Distribution. This difference confirms that
participants fixated regions higher in saliency than would
be expected by chance, showing that the saliency model is
effective at predicting fixations. The Distribution × Fixation
number interaction was not significant, F &lt; 1.</p>
    </sec>
    <sec id="sec-4">
      <title>DISCUSSION</title>
      <p>Eye tracking is a commonly employed method for
examining the guidance of overt attention within interfaces
(e.g., web pages). However, it has several drawbacks. We
propose that a web page’s saliency, stimulus-driven
properties, may be revealed through the use of a
computational saliency model. Therefore, we compared the
performance of the model to eye tracking data collected
from human observers. We were able to demonstrate that,
indeed, the saliency model predicts the deployment of overt
attention within a web page interface.</p>
      <p>
        Previous research has shown a modest correlation between
saliency and eye fixations in natural and artificial scenes
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. We have extended this research by showing that even
in web pages, which may contain more semantic
information (e.g., meaningful text or images) than natural
scenes, fixations are correlated with saliency. Specifically,
participants were more likely to fixate on regions in the web
pages with a higher saliency value than predicted by
chance.
      </p>
      <p>Our data suggest that saliency maps alone can provide
reasonable predictions of overt attention. In addition,
saliency maps can be generated quickly, and require no
additional equipment or participants. Even with these
positive attributes, one may be hesitant to abandon eye
tracking altogether. Our recommendation is that designers
choose the method most appropriate for their project given
their constraints and needs. It is often the case that
developing effective interfaces requires many levels of
analysis. For example, during the early formative testing
process it would be appropriate to begin by using the
saliency model to ensure that regions identified as being
important are also visually salient. Then, during the final
prototype development stage, eye tracking can be employed
to verify that participants are actually looking
at the critical elements in the design.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Arroyo</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Selker</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Wei</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          <article-title>Usability tool for analysis of web designs using mouse tracks. Computer-Human Interaction extended abstracts on human factors in computing systems (</article-title>
          <year>2006</year>
          ),
          <fpage>484</fpage>
          -
          <lpage>489</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>J. R.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Sohn</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>What can a mouse cursor tell us more?: Correlation of eye/mouse movements on web browsing. Computer-Human Interaction extended abstracts on human factors in computing systems (</article-title>
          <year>2001</year>
          ),
          <fpage>281</fpage>
          -
          <lpage>282</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Cutrell</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Guan</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          <article-title>What are you looking for? An eye-tracking study of information usage in web search</article-title>
          .
          <source>Proceedings of the SIGCHI conference on human factors in computing systems</source>
          (
          <year>2007</year>
          ),
          <fpage>407</fpage>
          -
          <lpage>416</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Egeth</surname>
            ,
            <given-names>H. E.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Yantis</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>Visual attention: Control representation and time course</article-title>
          .
          <source>Annual Review of Psychology</source>
          (
          <year>1997</year>
          ),
          <volume>48</volume>
          ,
          <fpage>269</fpage>
          -
          <lpage>297</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Itti</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Koch</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <article-title>A saliency-based search mechanism for overt and covert shifts of visual attention</article-title>
          .
          <source>Vision Research</source>
          ,
          <volume>40</volume>
          (
          <issue>10-12</issue>
          ) (
          <year>2000</year>
          ),
          <fpage>1489</fpage>
          -
          <lpage>1506</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Itti</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koch</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Niebur</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>A model of saliency-based fast visual attention for rapid scene analysis</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>20</volume>
          (
          <issue>11</issue>
          ):
          <fpage>1254</fpage>
          -
          <lpage>1259</lpage>
          ,
          <year>November 1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Johansen</surname>
            ,
            <given-names>S. A.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Hansen</surname>
            ,
            <given-names>J. P.</given-names>
          </string-name>
          <article-title>Do we need eye trackers to tell where people look? Proceedings of Computer-Human Interaction extended abstracts on human factors in computing systems (</article-title>
          <year>2006</year>
          ),
          <fpage>923</fpage>
          -
          <lpage>928</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Johnston</surname>
            ,
            <given-names>W. A.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Dark</surname>
            ,
            <given-names>V. J.</given-names>
          </string-name>
          <article-title>Selective attention</article-title>
          .
          <source>Annual Review of Psychology</source>
          (
          <year>1986</year>
          ),
          <volume>37</volume>
          ,
          <fpage>43</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Koch</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Ullman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>Shifts in selective visual attention: Towards the underlying neural circuitry</article-title>
          .
          <source>Human Neurobiology</source>
          (
          <year>1985</year>
          ),
          <volume>4</volume>
          ,
          <fpage>219</fpage>
          -
          <lpage>227</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Kowler</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dosher</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Blaser</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>The role of attention in the programming of saccades</article-title>
          .
          <source>Vision Research</source>
          (
          <year>1995</year>
          ),
          <volume>35</volume>
          ,
          <fpage>1897</fpage>
          -
          <lpage>1916</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>McCarthy</surname>
            ,
            <given-names>J. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sasse</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Riegelsberger</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>Can I have the menu please? An eyetracking study of design conventions</article-title>
          .
          <source>Proceedings of Human-Computer Interaction</source>
          ,
          <fpage>401</fpage>
          -
          <lpage>414</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Nielsen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2008</year>
          , May).
          <source>How little do users read?</source>
          Retrieved May 12,
          <year>2009</year>
          , from http://www.useit.com/alertbox/percent-text-read.html.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Parkhurst</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Niebur</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>Modeling the role of salience in the allocation of overt visual attention</article-title>
          .
          <source>Vision Research</source>
          (
          <year>2002</year>
          ),
          <volume>42</volume>
          ,
          <fpage>107</fpage>
          -
          <lpage>123</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Rayner</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>Eye movements in reading and information processing: 20 years of research</article-title>
          .
          <source>Psychological Bulletin</source>
          (
          <year>1998</year>
          ),
          <volume>124</volume>
          (
          <issue>3</issue>
          ),
          <fpage>372</fpage>
          -
          <lpage>422</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Tarasewich</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pomplun</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fillion</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Broberg</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>The enhanced restricted focus viewer</article-title>
          .
          <source>International Journal of Human-Computer Interaction</source>
          (
          <year>2005</year>
          ),
          <volume>19</volume>
          (
          <issue>1</issue>
          ),
          <fpage>35</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Treisman</surname>
            ,
            <given-names>A. M.</given-names>
          </string-name>
          <article-title>Perceptual grouping and attention in visual search for features and for objects</article-title>
          .
          <source>Journal of Experimental Psychology: Human Perception and Performance</source>
          (
          <year>1982</year>
          ),
          <volume>8</volume>
          ,
          <fpage>194</fpage>
          -
          <lpage>214</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Walther</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Koch</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <article-title>Modeling attention to salient proto-objects</article-title>
          .
          <source>Neural Networks</source>
          (
          <year>2006</year>
          ),
          <volume>19</volume>
          ,
          <fpage>1395</fpage>
          -
          <lpage>1407</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Wolfe</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          <article-title>Guided Search 4.0: Current Progress with a model of visual search</article-title>
          . In W. Gray (Ed.),
          <source>Integrated Models of Cognitive Systems</source>
          (pp.
          <fpage>99</fpage>
          -
          <lpage>119</lpage>
          ). New York: Oxford,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>