<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hanna Knaeusl</string-name>
          <email>hanna.knaeusl@ur.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Elsweiler</string-name>
          <email>david.elsweiler@ur.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bernd Ludwig</string-name>
          <email>bernd.ludwig@ur.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Chair for Information Science, University Regensburg</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
<year>2012</year>
      </pub-date>
      <fpage>2</fpage>
      <lpage>5</lpage>
      <abstract>
<p>Wikipedia is a resource used by many people for many different purposes. We posit that it might be beneficial to alter the content or the way content is presented depending on the task context. Here we describe a small pilot lab study to investigate features of interaction that might help to infer the contextual situation surrounding Wikipedia search tasks. We describe our effort to collect data and analyse relationships between the features and the assigned task context.</p>
      </abstract>
      <kwd-group>
        <kwd>Eyetracking</kwd>
        <kwd>Wikipedia</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        Information portals such as Wikipedia represent rich
sources of information covering an incredibly broad range of
topics. Many Wikipedia entries are also long and can cover
aspects ranging from overviews and introductions to more
detailed descriptions of advanced aspects that are perhaps only
suitable for topic experts. Single pages can also contain not
only text, but images, info-graphics, lists and navigational
information. Previous research suggests that these resources
will have several different contexts of use. For example,
Marchionini [
        <xref ref-type="bibr" rid="ref11">11</xref>
] identifies three main types of search tasks, all
of which are applicable to Wikipedia: lookup tasks include
finding answers to specific questions, known-item searches,
or navigating to specific pages. These tasks are contrasted
with exploratory search tasks, which include learn tasks,
where the aim is to acquire larger amounts of knowledge and
achieve an enhanced understanding of a given topic, and
investigate tasks, where the user makes use of found
information and continues to contribute to or generate knowledge in
some way. Elsweiler et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] provide an additional task
dimension, distinguishing between work-oriented tasks, where
information is required to complete some job, and
casual-leisure tasks, where the aim is more pleasure-focused, e.g.
to pass time, to relax, or to be entertained.
      </p>
      <p>Presented at EuroHCIR2012. Copyright © 2012 for the individual papers by the papers' authors. Copying permitted only for private and academic purposes. This volume is published and copyrighted by its editors.</p>
      <p>Wikipedia contributors are encouraged to create pages in
a way that meets the needs of as many users as possible
by including information on a topic with sufficient quantity,
quality and completeness and structuring the content in a
way that makes sense generally. Nevertheless, one could
imagine that different content or different presentations of
the same content might be more suitable in specific
contexts. For example, lookup tasks may be best supported
when facts in an article are presented as a list that can be
scanned easily. In such scenarios, content such as images
may be less helpful and perhaps even distracting.
Contrastingly, in casual-leisure situations, users may want to focus
on multimedia content or have information presented in a
way that encourages browsing and information discovery.</p>
      <p>We believe examples like this suggest there may be bene t
in moving away from static pages, which try to cater for all
usage situations, to dynamic pages that are generated
appropriately based on the context of use. As a first step towards
exploring this hypothesis, in this paper we investigate how
the context of use (the task type being performed) might be
detected automatically from user-interactions with the
system. We want to establish if the way the user interacts with
the system, e.g. his mouse and keyboard interactions, eye
movements, and click behaviour can provide implicit
feedback regarding the usage scenario and user goals.</p>
      <p>With this aim in mind, we present a small pilot study that
allows us to evaluate a methodology for detecting the
features of interaction that might help us infer the contextual
situation surrounding a user's search task. We collect
interaction data in the context of a controlled laboratory study
and analyse relationships between the features of
interaction and the assigned task context. The data show that
for the small number of users in our study, the behaviour
exhibited when completing tasks of different types is very
different; users interact with different types of content in
different ways. Further, we provide evidence that it is
possible, at least for some users, to predict these behaviours
based purely on mouse and keyboard interactions.</p>
    </sec>
    <sec id="sec-2">
<title>2. RELATED WORK</title>
<p>In the IR community a large amount of work has been
performed to establish if interaction data can be used as a
surrogate for explicit relevance judgements. This is known
as implicit relevance feedback.</p>
      <p>[Figure 1: Labels used to annotate the videos. Actions: RE = user is reading text; SC = user scans content, e.g. headlines, lists or the whole page; EX = user examines an element; NV = user navigates. Elements: HD = headline; LI = list; PI = picture; IG = charts, tables etc.; ON = other navigation; TX = text passage; IN = introduction; IB = info box; WI = links in Wikipedia.]</p>
      <p>
        Early research in this area demonstrated a correlation between the
time spent reading a document and explicit relevance judgements [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Although
this has been disputed in naturalistic situations [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], White
and Kelly show that when task type is taken into account
clear signals can be found [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Other studies have shown
that the amount of scrolling on a Web page [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], click-through
for documents in a browser [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], bookmarking behaviour [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
and eye movements during the search [
        <xref ref-type="bibr" rid="ref2">2</xref>
] can all be used as
implicit feedback to improve retrieval performance.
      </p>
      <p>
        Interaction data can also be used as a means to predict
user emotions. For example, Fox et al. show that query log
features can be used to predict searcher satisfaction [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and
Feild et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] used interaction data and physical sensors to
predict levels of user frustration with high accuracy.
      </p>
      <p>
A third group of studies shows correlations between different
styles of interaction, e.g. for some users visual attention
on the screen can be predicted via mouse coordinates [
        <xref ref-type="bibr" rid="ref15">15</xref>
].
We believe that the interaction style, the emotional state of
the user and the motivating task context will be intrinsically
related and that the work done previously suggests it may
be possible to predict the task based on interaction data.
We explore this in a small pilot study below.
      </p>
    </sec>
    <sec id="sec-3">
<title>3. DATA COLLECTION</title>
      <p>In this section we provide details of the data collected and
explain the motivation behind recording the data.</p>
    </sec>
    <sec id="sec-4">
<title>3.1 Study Design</title>
      <p>Data was collected via a laboratory based user study with
4 users. The participants were information science students
(1 male, 3 female) aged between 20 and 30. All of the
participants were experienced Wikipedia users and were
comfortable using the Wikipedia search facilities. Although this
user population is not large or diverse enough to provide
generalisable results, it is sufficient for our aims, which were
to evaluate and improve the methodology and get a sense
for the feasibility of our ideas.</p>
      <p>Each participant performed 6 Wikipedia search tasks (2
of each of the 3 types of interest: lookup, learn and
casual-leisure). The tasks were presented in the form of a simulated
scenario and were ordered randomly to minimise learning
effects. Example tasks for each type are shown in Figure 2.</p>
      <p>After initially greeting the participant, the experimental
procedure was explained in person. Then, to prevent biases,
the participant was led automatically through the
experiment on screen, with task descriptions, questionnaires and
a web-browser window appearing when appropriate. The
experimenters observed the tasks remotely in an adjoining
room, where the participant's screen was mirrored.</p>
    </sec>
    <sec id="sec-5">
<title>3.2 Data Collected</title>
      <p>We collected a large amount of data from each participant
before, during and after the study.</p>
      <p>Questionnaires: A pre-study questionnaire collected
the participants' demographics, search experience, and experience
with Wikipedia. Pre- and post-task questionnaires elicited
perceptions of the task and domain knowledge, of success
and of the experience, including emotional aspects; finally,
a post-study questionnaire provided general impressions of
the experiment.</p>
      <p>
        Eyetracking Data: We recorded participant gaze patterns
using an SMI RED eye-tracker. The associated BeGaze
software recorded video files of the screen interactions with an
additional layer indicating the area of the screen where the user
was focusing his gaze. We manually annotated these complete
overlaid video sequences with two labels. The first describes
what the user is doing ("action"). This is a simple coding
scheme but aligns with reading psychology research [
        <xref ref-type="bibr" rid="ref13 ref14">14, 13</xref>
        ].
      </p>
      <p>It was the annotator who decided which action to code at
what moment by following the focus displayed in the layer
on top of the recorded screen. The second label describes
the content ("element") being focused on and is derived from
the elements available in Wikipedia pages. The label was
assigned when the user focussed on an area of the screen for
long enough that the annotator could assume the element in the
area was perceived. The full set of labels for actions and elements is
presented in Fig. 1. The intuition behind the labels was that
the style of reading for different task types and the content
elements used would be very different. By labelling videos in
this way we could test this intuition empirically.</p>
      <p>Browser Logs: We instrumented the Firefox web-browser
to log all user interactions during the search process.</p>
      <p>Timestamp information was used to align interaction data
from different sensors.</p>
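      <p>The alignment into the 500ms frames used in Section 4 can be sketched as follows. This is a minimal illustration rather than our logging code: the event payloads are invented, and only the frame length is taken from the study.</p>
      <preformat>
```python
from collections import defaultdict

FRAME_MS = 500  # frame length used in the analysis (Section 4)


def to_frames(events):
    """Group timestamped (ms, payload) events from one sensor into frames."""
    frames = defaultdict(list)
    for ts, payload in events:
        frames[ts // FRAME_MS].append(payload)
    return frames


def align(eyetracker_events, browser_events):
    """Join the per-frame views of the two sensors on the frame index."""
    eye, browser = to_frames(eyetracker_events), to_frames(browser_events)
    return {f: (eye.get(f, []), browser.get(f, []))
            for f in set(eye) | set(browser)}


# hypothetical events: (timestamp in ms, video label or browser event name)
aligned = align([(120, "RE"), (640, "SC")], [(130, "mousemove")])
print(aligned[0])  # frame 0 pairs the video label with the browser event
```
      </preformat>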
      <p>Lookup: Last night you watched a documentary about the sinking of the Titanic. Suddenly you wonder how many passengers were on
board when the catastrophe happened. Search in Wikipedia for this information.</p>
      <p>Learn: Friends from abroad are visiting Germany and you plan to travel together to visit the small but beautiful city of Regensburg.</p>
      <p>As preparation for the trip you want to know more about the city and its history. Use Wikipedia to do this.
Casual-leisure: You have a few minutes before your class starts but you are already sitting in the lecture hall. Kill this time using
Wikipedia, spending the next six minutes looking at whatever topic(s) take your fancy.</p>
    </sec>
    <sec id="sec-6">
<title>4. EVALUATION OF THE DATA</title>
      <p>We analyse the data in two stages. First, in Section 4.1, we
examine the distribution of video labels for different types of
task to determine if users behave differently or focus their
attention on different kinds of topics when completing different
task types. Second, in Section 4.2, we show how these labels
can, in turn, be predicted using interaction data from the
eyetracker and browser. The first stage provides evidence
that users' preferences for content elements depend on
the search task, endorsing our suggestion to customise web
pages at run time. The second stage provides some evidence
for our hypothesis that the interactions a user performs in
a browser may be used to predict which actions he is trying
to complete and which content elements he prefers at
that moment.</p>
    </sec>
    <sec id="sec-7">
<title>4.1 Reading Style and Content for Task-types</title>
      <p>Technical difficulties meant we were only able to work with
data for 6 casual-leisure, 4 lookup and 4 learn tasks.
We first divided the data into 500ms frames, allowing us to
normalise the counts by task length, and counted the relative
frequencies of frames in which label combinations occur for
each task type (see Table 1). Visually inspecting the
distribution of content for actions suggests that the reading style and
the elements of content interacted with were very different in
different task contexts. This is confirmed by pair-wise
comparisons using chi-squared tests for the distributions of content
elements for each possible pair of task types (see Table 2).</p>
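      <p>A pair-wise comparison of this kind can be sketched in a few lines. The following is an illustration only: the element labels follow Fig. 1, but the counts are invented, and degrees of freedom and p-values are omitted for brevity.</p>
      <preformat>
```python
def chi_squared(counts_a, counts_b):
    """Pearson chi-squared statistic for a 2 x k contingency table whose
    rows are the per-task-type frame counts of content-element labels."""
    labels = sorted(set(counts_a) | set(counts_b))
    row_a = [counts_a.get(l, 0) for l in labels]
    row_b = [counts_b.get(l, 0) for l in labels]
    total_a, total_b = sum(row_a), sum(row_b)
    grand = total_a + total_b
    stat = 0.0
    for ca, cb in zip(row_a, row_b):
        col = ca + cb  # column total for this label
        for cell, row_total in ((ca, total_a), (cb, total_b)):
            expected = row_total * col / grand
            if expected:
                stat += (cell - expected) ** 2 / expected
    return stat


# invented frame counts of content elements for two task types
lookup = {"TX": 40, "LI": 30, "HD": 20, "IN": 10}
learn = {"TX": 45, "LI": 10, "HD": 35, "IN": 10}
stat = chi_squared(lookup, learn)  # large values indicate differing distributions
```
      </preformat>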
      <p>Examining the results in Table 2, we observe that all but
one combination of action type shows highly significant
differences in the distribution of content elements examined.
The exception is the distribution of elements for lookup and
casual-leisure tasks, which initially seems counterintuitive,
as one would expect these two tasks to be very different.
Below we summarise the main similarities and differences
between the task-types and attempt to explain what these
mean in the context of our work.</p>
      <p>When completing lookup tasks, the participants do not
typically read content, the exception being page
introductions. Instead they scan large portions of the page very
quickly, looking for the snippets of information that will
satisfy their specific information need. They tend to scan a
number of different kinds of content elements during tasks.
This can be seen from Table 1 with counts being spread
over text passages, introductions, info boxes, lists and
headers. Images are noticeably missing from lookup tasks. It
seems as if the participants decided that, for the tasks
assigned, images would not be useful and were able to avoid them.</p>
      <p>Learn and casual-leisure tasks differ from lookup tasks in that
they both tend to be longer in time and have more
interactions. They also both involve reading actions, which were
rare for lookup. By this we mean that the user focuses
attention on whole passages of text and attends the text from left
to right and line by line. Another similarity between learn
and casual-leisure tasks is the way that text passages are
consumed, with the counts for these tasks being very
similar. There are differences between learn and casual-leisure
tasks, particularly in terms of the elements used other than
text passages. During learn tasks the focus tended to be on
headers, while for casual leisure, the focus was on elements
such as introductions and info boxes, which allow the user
to gain an overview of what a page is about and allow them
to judge whether it is interesting or not. We assume that
headers are useful for learn tasks because here there is a
concrete information need, i.e. users do not just need to find
something that is interesting or not, but need specific
informational content. In this sense headers will help the user
determine whether a paragraph is worth reading or not.</p>
    </sec>
    <sec id="sec-8">
<title>4.2 Predicting Style and Content Preferences</title>
      <p>To determine if the manually assigned labels can be
predicted from interaction data alone, we calculated statistics
for counts of the synchronous occurrences of video labels
and input events for the 500ms frames introduced above.
As we were searching for the simplest features possible (so
they could eventually be computed easily during a browser
session at runtime) we used the frequencies of the most
common mouse events and the average saccade distance (i.e. eye
movement) per frame as features. More precisely, for each
frame we discretised these features into two levels, low and
high, based on the mean value over all frames.</p>
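      <p>The discretisation step can be sketched as follows. The feature values are invented for illustration; the rule, splitting at the mean over all frames, is the one described above.</p>
      <preformat>
```python
def discretise(values):
    """Map each per-frame feature value to 'low' or 'high' relative to
    the mean over all frames."""
    mean = sum(values) / len(values)
    return ["high" if v > mean else "low" for v in values]


# hypothetical mousemove counts for six 500ms frames (mean = 4)
levels = discretise([2, 0, 9, 4, 1, 8])
print(levels)
```
      </preformat>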
      <p>
Table 3 (left) gives an example of the information we
computed from the raw log data. In order to understand
whether knowledge of the mousemove frequency is
relevant for predicting user actions and content elements, we
performed a series of chi-squared tests for all six search tasks
for one of the test participants chosen at random (in total
about 30 minutes of interaction). The results are reported in
Table 3(right). With the exception of the rare click events,
all features are highly significant. We interpret this as a
positive indication that for individual users, depending on their
personal interaction style (see [
        <xref ref-type="bibr" rid="ref1 ref8">1, 8</xref>
]), it is feasible that the
reading behaviour label could be predicted during a browsing session.
      </p>
      <p>[Table 3: left, example statistics computed from the raw log data; right, chi-squared results per task for the actions (NV, RE, SC) and elements (IN, IB, WI, LI).]</p>
      <p>The results of the chi-squared tests indicate that
knowing at run-time whether the observed input events
occur below or above average at any point in time increases the
accuracy of predicting the video labels as annotated for that
moment, as the distribution P(action | event = low) differs
significantly from the distribution P(action | event = high)
for any annotated action and for any annotated element
type. This observation opens the way for runtime
prediction of the user action and preferred elements. From that
information, the system can predict the current task type
and use this information for generating content dynamically.</p>
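      <p>How such conditional distributions could be computed at runtime can be sketched as follows. The action labels are those of Fig. 1; the per-frame data are invented for illustration.</p>
      <preformat>
```python
from collections import Counter


def action_distributions(frames):
    """Return the conditional distributions P(action | event level) from
    per-frame pairs of (discretised event level, annotated action)."""
    by_level = {"low": Counter(), "high": Counter()}
    for level, action in frames:
        by_level[level][action] += 1
    dists = {}
    for level, counts in by_level.items():
        total = sum(counts.values()) or 1
        dists[level] = {a: c / total for a, c in counts.items()}
    return dists


# invented (event level, action) pairs, one per 500ms frame
frames = [("low", "RE"), ("low", "RE"), ("low", "SC"),
          ("high", "SC"), ("high", "SC"), ("high", "NV")]
dists = action_distributions(frames)
# reading (RE) dominates low-activity frames, scanning (SC) high-activity ones
```
      </preformat>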
    </sec>
    <sec id="sec-9">
<title>5. CONCLUSIONS</title>
      <p>The preliminary data analysis we have presented provides
clues that, firstly, reading behaviour and preferences for
content elements depend on the surrounding task context and,
secondly, both behaviour and preferences may be predicted
for individual users based on their interaction style.</p>
      <p>
        There are several limitations to this work. That we only
have data from four participants from a relatively
homogeneous group means we cannot generalise. However, we claim
that the presented methodology is well suited to address
our long term research questions outlined in the
introduction and the pilot has provided us with insight into how to
improve a full study. In addition to resolving several
technical challenges, we have learned that great care will
need to be taken when simulating tasks. For example, were
few images looked at in lookup tasks, simply because of the
tasks we chose? We also plan to look at more complicated
prediction features and account for the fact that individual
differences in participants (cognitive, reading style [
        <xref ref-type="bibr" rid="ref14">14</xref>
]) will
exist and that users interact in different ways (people who
follow eye movements with their mouse and people who don't)
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. At EuroHCIR, we look forward to engaging with the
broader HCI and IR communities to discuss the ideas in this
paper; we are particularly eager to receive feedback on the
next steps along this research path, including brainstorming
solutions to some of the empirical design challenges of
running such experiments and identifying and dealing with the
many factors which should be incorporated in the full study.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Agichtein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Brill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dumais</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Ragno</surname>
          </string-name>
          .
          <article-title>Learning user interaction models for predicting web search result preferences</article-title>
          .
          <source>In Proceedings of SIGIR, SIGIR '06</source>
          , pages
          <fpage>3</fpage>
–
          <fpage>10</fpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Buscher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dengel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. Van</given-names>
            <surname>Elst</surname>
          </string-name>
          .
          <article-title>Eye movements as implicit relevance feedback</article-title>
          .
<source>In CHI'08: Extended Abstracts on Human Factors in Computing Systems, pages 2991–2996</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Claypool</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wased</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Brown</surname>
          </string-name>
          .
          <article-title>Implicit interest indicators</article-title>
          .
<source>In Proceedings of the IUI, pages</source>
          <volume>33</volume>
–
          <fpage>40</fpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Elsweiler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Wilson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B. Kirkegaard</given-names>
            <surname>Lunn</surname>
          </string-name>
          .
          <article-title>New Directions in Information Behaviour, chapter Understanding Casual-leisure Information Behaviour</article-title>
          .
          <source>Emerald Publishing</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Feild</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Allan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Jones</surname>
          </string-name>
          .
          <article-title>Predicting searcher frustration</article-title>
          .
          <source>In Proc of SIGIR</source>
          <year>2010</year>
,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Fox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Karnawat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mydland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dumais</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>White</surname>
          </string-name>
          .
          <article-title>Evaluating implicit measures to improve web search</article-title>
          .
          <source>ACM Trans. Inform</source>
          . Syst.,
          <volume>23</volume>
          (
          <issue>2</issue>
          ):
          <volume>147</volume>
–
          <fpage>168</fpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Guo</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Agichtein</surname>
          </string-name>
          .
          <article-title>Ready to buy or just browsing?: detecting web searcher goals from interaction data</article-title>
          .
          <source>In Proceedings of SIGIR</source>
          , pages
          <volume>130</volume>
–
          <fpage>137</fpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>White</surname>
          </string-name>
          , and
          <string-name>
            <surname>G. Buscher.</surname>
          </string-name>
          <article-title>User see, user point: gaze and cursor alignment in web search</article-title>
          .
          <source>In Proceedings of CHI, CHI '12</source>
          , pages
          <fpage>1341</fpage>
–
          <fpage>1350</fpage>
          , New York, NY, USA,
          <year>2012</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Joachims</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Granka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hembrooke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Radlinki</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Gay.</surname>
          </string-name>
          <article-title>Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search</article-title>
          .
          <source>ACM Trans. Inform</source>
          . Syst.,
          <volume>25</volume>
          (
          <issue>2</issue>
          ),
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kelly</surname>
          </string-name>
          and
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Belkin</surname>
          </string-name>
          .
          <article-title>Reading time, scrolling and interaction: exploring implicit sources of user preferences for relevance feedback</article-title>
          .
<source>In Proceedings of SIGIR, pages</source>
          <volume>408</volume>
–
          <fpage>409</fpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Marchionini</surname>
          </string-name>
          .
<article-title>Exploratory search: from finding to understanding</article-title>
          .
          <source>Commun. ACM</source>
          ,
          <volume>49</volume>
          (
          <issue>4</issue>
          ):
          <volume>41</volume>
–
          <fpage>46</fpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Morita</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shinoda</surname>
          </string-name>
          .
<article-title>Information filtering based on user behavior analysis and best match text retrieval</article-title>
          .
          <source>In Proceedings of SIGIR</source>
          , pages
          <volume>272</volume>
–281,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Nielsen</surname>
          </string-name>
          . Designing Web Usability. New Riders, Berkeley, Calif.,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>K.</given-names>
            <surname>Rayner</surname>
          </string-name>
          .
          <source>Eye movements in reading and information processing: 20 years of research. Psych. Bull</source>
          ,
          <volume>124</volume>
          (
          <issue>3</issue>
):372–422,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K.</given-names>
            <surname>Rodden</surname>
          </string-name>
          and
          <string-name>
            <given-names>X.</given-names>
            <surname>Fu</surname>
          </string-name>
          .
          <article-title>Exploring how mouse movements relate to eye movements on web search results pages</article-title>
          .
          <source>In SIGIR Workshop on Web Information Seeking and Interaction</source>
          , pages
          <volume>29</volume>
–
          <fpage>32</fpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R. W.</given-names>
            <surname>White</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Kelly</surname>
          </string-name>
          .
<article-title>A study on the effects of personalization and task information on implicit feedback performance</article-title>
          .
          <source>In Proceedings of CIKM</source>
          <year>2006</year>
, pages
          <volume>297</volume>
–
          <fpage>306</fpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>