<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploring the potential contribution of mobile eye-tracking technology in enhancing the museum visit experience</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Moayad Mokatren The University of Haifa Mount Carmel</institution>
          ,
          <addr-line>Haifa, 31905</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>An intelligent mobile museum visitors' guide is a canonical case of a context-aware mobile system. Museum visitors move about the museum, looking for interesting exhibits, and wish to acquire information to deepen their knowledge and satisfy their interests. A smart context-aware mobile guide may provide the visitor with personalized, relevant information from the vast amount of content available at the museum, adapted to his or her personal needs. Earlier studies relied on using sensors for location awareness and interest detection. This work explores the potential of mobile eye-tracking and vision technology in enhancing the museum visit experience. Our hypothesis is that the use of eye-tracking technology in museums' mobile guides can enhance the visit experience by enabling more intuitive interaction. We report here on satisfactory preliminary results from examining the performance of a mobile eye tracker in a realistic setting: the technology has reached a degree of maturity that makes it reliable enough to serve as the basis for such a system.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
Falk and Dierking [2000] and Falk [2009] tried to answer the question of what visitors remember from their visit and what factors seem to contribute most to visitors' forming of long-term memories: “when people are asked to recall their museum experiences, whether a day or two later or after twenty or thirty years, the most frequently recalled and persistent aspects relate to the physical context-memories of what they saw, what they did, and how they felt about these experiences.” Stock et al. [2009] and Dim and Kuflik [2014] explored the potential of novel mobile technology in identifying visitors' behavior types in order to consider what/how/when to provide them with relevant services. A key challenge in using mobile technology for supporting museum visitors is figuring out what they are interested in. This may be achieved by tracking where the visitors are and the time they spend there [Yalowitz and Bronnenkant, 2009]. A more challenging aspect is finding out what exactly they are looking at [Falk and Dierking, 2000]. Given today's mobile devices, we should be able to gain seamless access to information of interest, without the need to take pictures or submit queries and look for results, which are the prevailing interaction methods with our mobile devices. As we move towards "cognition-aware computing" [Bulling and Zander 2014], it becomes clearer that eye-gaze based interaction may play a major role in human-computer interaction until brain-computer interaction methods become a reality [Bulling et al. 2012]. The study of eye movements started almost 100 years ago. Jacob and Karn [2003] presented a brief history of techniques that were used to detect eye movements; most of the major works dealt with usability research. One of the important efforts started in 1947, when Fitts and his colleagues [Fitts et al. 1950] began using motion picture cameras to study the movements of pilots’ eyes as they used cockpit controls and instruments to land an airplane. “It is clear that the concept of using eye tracking to shed light on usability issues has been around since before computer interfaces, as we know them” [Jacob and Karn 2003]. Mobile eye-tracking devices that detect what a person is looking at and store the data for later use and analysis have been developed and are available on the market nowadays [Hendrickson et al. 2014]. In recent years, eye tracking and image-based object recognition technology have reached a degree of maturity that makes them reliable enough for developing a system that precisely identifies what the user is looking at [Kassner et al. 2014]. We build on this field by reviewing techniques for image matching and extending them for location-awareness use, following the approach of “what you look at is what you get” [Jacob 1991].</p>
      <p>With the advent of mobile and ubiquitous computing, it is time to explore the potential of this technology for natural, intelligent interaction of users with their smart environment - not only for specific tasks and uses, but for the more ambitious goal of integrating eye tracking into the process of inferring mobile users' interests and preferences, in order to provide them with relevant services and enhance their user models, an area that has received little attention so far. This work aims at exploring the potential of mobile eye-tracking technology in enhancing the museum visit experience by integrating and extending these technologies into a mobile museum visitors' guide system, so as to enable using machine vision for identifying visitors' position and their object of interest in this place, as a trigger for personalized information delivery.</p>
    </sec>
    <sec id="sec-2">
      <title>2. BACKGROUND</title>
      <sec id="sec-2-1">
        <title>2.1 Museum visitors and their visit experience</title>
        <p>Understanding who visits the museum, their behaviors and the goal of the visit can play an important role in the design of museum mobile guides (and other technologies) that enhance the visit experience: “the visitors’ social context has an impact on their museum visit experience. Knowing the social context may allow a system to provide socially aware services to the visitors.” [Bitgood 2002; Falk 2009; Falk and Dierking 2000; Leinhardt and Knutson 2004; Packer and Ballantyne 2005]. Falk [2009] noted that many studies have been done on who visits museums, what visitors do in the museum and what visitors learn from it, and tried to understand the whole visitor and the whole visit experience, as well as the period after the visit. Furthermore, he proposed the idea of visitors' "identity" and identified five distinct, identity-related categories:
• Explorers: Visitors who are curiosity-driven with a generic
interest in the content of the museum. They expect to find
something that will grab their attention and fuel their
learning.
• Facilitators: Visitors who are socially motivated. Their visit
is focused on primarily enabling the experience and
learning of others in their accompanying social group.
• Professional/Hobbyists: Visitors who feel a close tie
between the museum content and their professional or
hobbyist passions. Their visits are typically motivated by a
desire to satisfy a specific content-related objective.
• Experience Seekers: Visitors who are motivated to visit
because they perceive the museum as an important
destination. Their satisfaction primarily derives from the
mere fact of having ‘been there and done that’.
• Rechargers: Visitors who are primarily seeking to have a
contemplative, spiritual and/or restorative experience. They
see the museum as a refuge from the work-a-day world or as
a confirmation of their religious beliefs.</p>
      <p>In addition, he argued that the actual museum visit experience is strongly shaped by the needs of the visitor’s identity-related visit motivations. The individual’s entering motivations create a basic trajectory for the visit, though the specifics of what the visitor actually sees and does are strongly influenced by the factors described by the Contextual Model of Learning:
• Personal Context: The visitor’s prior knowledge, experience, and interest.
• Physical Context: The specifics of the exhibitions, programs, objects, and labels they encounter.
• Socio-cultural Context: The within-group and between-group interactions that occur while in the museum and the visitor’s cultural experiences and values.</p>
      <p>Nevertheless, the visitor perceives his or her visit experience as satisfying if this marriage of perceived identity-related needs and museum affordances proves to be well matched. Hence, when considering the use of technology for supporting visitors and enhancing the museum visit experience, it seems that these aspects need to be addressed by identifying visitors' identities and providing them with relevant support.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2 Object recognition and image matching</title>
        <p>Modern eye trackers usually record video of the scene with a front-facing camera for further analysis [Kassner et al. 2014]. Object recognition is the task within computer vision of finding and identifying objects in an image or video sequence. Humans recognize a multitude of objects in images with little effort, despite the fact that the image of the objects may vary somewhat under different viewpoints, in many different sizes and scales, or even when they are translated or rotated. Objects can even be recognized when they are partially obstructed from view. This task is still a challenge for computer vision systems [Pinto et al. 2008]. Many approaches to the task have been implemented over multiple decades. For example, diffusing models for image-to-image matching [Thirion 1998], the parametric correspondence technique [Barrow 1977] and adaptive least squares correlation [Gruen 1985] were presented as techniques for image matching. Techniques from [Naphade et al. 1999], [Hampapur et al. 2001] and [Kim et al. 2005] were presented for image sequence matching (video streams). A related field is visual saliency, or saliency detection: “it is the distinct subjective perceptual quality which makes some items in the world stand out from their neighbors and immediately grab our attention.” [Laurent 2007]. Goferman et al. [2012] proposed a new type of saliency which aims at detecting the image regions that represent the scene. In our case, we can exploit eye tracking to detect salience in an efficient way, since we have fixation points representing points of interest in a scene.</p>
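        <p>As an illustration, a fixation-based saliency map can be built by accumulating a Gaussian blob at each fixation point, instead of computing saliency bottom-up from image features. The following Python sketch is only illustrative of this idea; the blob width (sigma) and the pixel-coordinate fixation format are our assumptions, not measured parameters:</p>
        <preformat>
import numpy as np

# Sketch: accumulate a Gaussian blob at each gaze fixation point to obtain
# a saliency map driven by the eye tracker rather than by bottom-up features.
def fixation_saliency(shape, fixations_px, sigma=30.0):
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    saliency = np.zeros((h, w), dtype=np.float32)
    for fx, fy in fixations_px:
        saliency += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2 * sigma ** 2))
    return saliency / saliency.max() if saliency.max() > 0 else saliency
        </preformat>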
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. RELATED WORK</title>
      <p>As mentioned above, many studies on detecting eye movements were conducted before their integration with computer interfaces, as we know them today, was considered. These studies revolved around HCI and usability, and they presented techniques that can be extended to further eye-tracking studies, not just in the field of HCI. Jacob [1991] presented techniques for locally calibrating an eye tracker, a procedure that produces a mapping between eye-movement measures and measures of wandering in the scene. In addition, he presented techniques for fixation recognition, with respect to extracting data from a noisy, jittery, error-filled stream, and for addressing the "Midas touch" problem, where people look at an item without the look “meaning” something. Jacob and Karn [2003] presented a list of promising eye-tracking metrics for data analysis (a sketch of computing them appears after the list):
• Gaze duration - cumulative duration and average spatial location of a series of consecutive fixations within an area of interest.
• Gaze rate - number of gazes per minute on each area of interest.
• Number of fixations on each area of interest.
• Number of fixations, overall.
• Scan path - sequence of fixations.
• Number of involuntary and number of voluntary fixations (short and long fixations should be well defined in terms of milliseconds).</p>
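      <p>To make these metrics concrete, the following Python sketch computes them from a list of fixations. The (aoi_label, duration_ms) record format and the treatment of consecutive same-AOI fixations as one gaze are our assumptions about how such logs would be represented, not part of Jacob and Karn's definitions:</p>
      <preformat>
from collections import Counter

# Sketch: compute the listed metrics from fixation records (aoi, duration_ms).
def gaze_metrics(fixations, session_minutes):
    fixations_per_aoi = Counter(aoi for aoi, _ in fixations)
    duration_per_aoi = Counter()
    for aoi, dur in fixations:
        duration_per_aoi[aoi] += dur          # cumulative gaze duration per AOI
    # A "gaze" = a run of consecutive fixations on the same area of interest.
    gazes = [aoi for i, (aoi, _) in enumerate(fixations)
             if i == 0 or fixations[i - 1][0] != aoi]
    return {
        "gaze_duration_ms": dict(duration_per_aoi),
        "gaze_rate_per_min": {aoi: gazes.count(aoi) / session_minutes
                              for aoi in fixations_per_aoi},
        "fixations_per_aoi": dict(fixations_per_aoi),
        "fixations_overall": len(fixations),
        "scan_path": [aoi for aoi, _ in fixations],   # sequence of fixated AOIs
    }
      </preformat>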
      <p>Using handheld devices as multimedia guidebooks in museums has led to improvement in the museum visit experience. Studies have confirmed the hypothesis that a portable computer with an interactive multimedia application has the potential to enhance interpretation and to become a new tool for interpreting museum collections [Evans et al. 2005, Evans et al. 1999, Hsi 2003]. Studies on integrating multimedia guidebooks with eye tracking have already been carried out in the context of museums and cultural heritage sites. Museum Guide 2.0 [Toyama et al. 2012] was presented as a framework for delivering multimedia content to museum visitors; it runs on a handheld device and uses the SMI iView X eye tracker and object recognition techniques. The visitor can hear audio information when an exhibit is detected. A user study was conducted in a laboratory setting, but not in a real museum. We plan to extend this work by integrating an eye tracker into a real museum visitors' guide system and experimenting with it in a realistic setting.</p>
      <p>Brône et al. [2011] implemented effective new methods for analyzing gaze data collected with an eye-tracking device and for integrating it with object recognition algorithms. They presented a series of arguments for why an object-based approach may provide a significant surplus in terms of analytical precision. Specifically, they discussed solutions for reducing the substantial cost of manually annotating videos of gaze behavior, and developed a series of proof-of-concept case studies in different real-world situations, each with its own challenges and requirements. We plan to use their lessons in our study. Pfeiffer et al. [2014] presented "EyeSee3D", where they combined geometric modelling with inexpensive 3D marker tracking to align virtual proxies with the real-world objects. This allowed classifying fixations on objects of interest automatically while supporting free movement of the participant. During the analysis of the accuracy of the pose estimation, they found that the marker detection may fail for several reasons: sometimes the participant looked sideways and there simply was no marker within view; more often, however, swift head movements or extreme position changes caused these issues. Ohm et al. [2014] tried to find out where people look when navigating in a large-scale indoor environment, and what objects can assist them in finding their way. They conducted a user study and assessed the visual attractiveness of objects with an eye tracker. Their findings show that functional landmarks like doors and stairs are most likely to be looked at. In our case we can use these landmarks as reliable points of interest for finding the location of the visitor in the museum. De Beugher et al. [2014] presented a novel method for the automatic analysis of mobile eye-tracking data in natural environments for object recognition. The obtained results were satisfactory for most of the objects; however, a large scale variance resulted in a lower detection rate (for objects which were looked at both from very far away and from close by). Schrammel et al. [2011, 2014] studied the attentional behavior of users on the move. They discussed the unique potential and challenges of using eye tracking in mobile settings and demonstrated the ability to use it to study attention to advertising media in two different situations: a digital display in public transport and logos in a pedestrian shopping street. They also presented ideas about a general attention model based on eye gaze. Kiefer et al. [2014] also explored the possibility of identifying users’ attention by eye tracking in the setting of tourism - detecting when a tourist gets bored looking at a city panorama. This scenario may be of specific interest for us, as locations or objects that attracted more or less interest may be used to model the user's interests and trigger further services/information later on. Nakano and Ishii [2010] studied the use of eye gaze as an indicator of user engagement, trying also to adapt it to individual users. Engagement may be used as an indicator of interest, and the ability to adapt engagement detection to individual users may enable us also to infer interest and build/adapt a user model using this information. Furthermore, Ma et al. [2015] demonstrated an initial ability to extract user models based on the eye gaze of users viewing videos. Xu et al. [2008] also used eye gaze to infer user preferences in the content of documents and videos through the users' attention as inferred from gaze analysis (number of fixations on a word/image).</p>
      <p>
        As we have seen, there is a large body of work on monitoring and analyzing users' eye gaze in general, and also in cultural heritage settings. Moreover, the appearance of mobile eye trackers opens up new opportunities for research in mobile scenarios. It has also been demonstrated on several occasions that eye gaze may be useful in enhancing a user model, as it may enable identifying users' attention (and interests). Considering mobile scenarios, when users also carry smartphones - equipped with various sensors - implicit user modeling can take place by integrating signals from various sensors, including the new sensor of eye gaze, for better modeling the user and offering better personalized services. So far, sensors like GPS, compasses, accelerometers and voice detectors have been used in modeling users' context and interests
        <xref ref-type="bibr" rid="ref9">(see for instance [Dim &amp; Kuflik 2014])</xref>
        . When we mention mobile scenarios, we refer to a large variety of different scenarios - a pedestrian scenario differs from a jogging, shopping or cultural heritage scenario. The tasks are different and users' attention is split differently. The cultural heritage domain is an example where users have long-term interests that can be modeled, and the model can be used and updated during a museum visit with information collected implicitly from various sensors, including eye gaze. In this sense, the proposed research extends and aims at generalizing the work of Kardan and Conati [2013]. Still, even though a lot of research effort has been invested in monitoring, analyzing and using eye gaze for inferring user interests, little research attention has so far been paid to users' gazing behavior "on the go". This scenario poses major challenges, as it involves splitting attention between several tasks at the same time - avoiding obstacles, gathering information and paying attention to whatever seems relevant, for many reasons.</p>
    </sec>
    <sec id="sec-4">
      <title>4. RESEARCH GOAL AND QUESTIONS</title>
      <p>Our goal is to examine the potential of integrating eye-tracking technology with a mobile guide for a museum visit and to try to answer the question: how can the use of a mobile eye tracker enhance the museum visit experience? Our focus will be on developing a technique for location awareness based on eye-gaze detection and image matching, and on integrating it with a mobile museum visitor’s guide that provides multimedia content to the visitor. For that we will design and develop a system that runs on a handheld device and uses the Pupil Dev eye tracker [Kassner et al. 2014] for identifying objects of interest and delivering multimedia content to the visitor in the museum. Then we will evaluate the system in a user study in a real museum to find out how an eye tracker integrated with a multimedia guide can enhance the museum visit experience. In our study, we have to consider different factors and constraints that may affect the performance of the system, such as real-environment lighting conditions, which differ from laboratory conditions and can greatly affect the process of object recognition. Another aspect is the position of the exhibits relative to the wearer of the head-mounted eye tracker, which is constrained by the museum layout. While having many potential benefits, a mobile guide can also have some disadvantages [Lanir et al. 2013]. It may focus the visitor’s attention on the mobile device rather than on the museum artifacts [Grinter et al. 2002]. We will also examine this behavior and try to determine whether the use of an eye tracker in a mobile guide can increase the time spent looking at the exhibits. In addition, we will try to build a system that runs in various real environments with differing conditions, such as lighting and positioning constraints.</p>
    </sec>
    <sec id="sec-5">
      <title>5. TOOLS AND METHODS</title>
      <p>A commercial mobile eye tracker will be integrated into a mobile museum visitors' guide system as a tool for location awareness, interest detection and focus-of-attention detection, using computer vision techniques. Our hypothesis is that the use of the eye tracker in mobile guides can enhance the visit experience. The system will be evaluated in user studies; the participants will be students from the University of Haifa. The study will be conducted in the Hecht Museum, a small museum located at the University of Haifa that has both archeological and art collections. The study will include an orientation on using the eye tracker and the mobile guide, followed by a tour with the eye tracker and handheld device; multimedia content will be delivered by showing information on the screen or by playing audio through earphones. Data will be collected as follows: the students will be interviewed and asked about their visit experience, and will be asked to fill in questionnaires covering general questions such as whether this is their first visit to the museum, their gender, their age, and more. Visit logs will be collected and analyzed for later use; from them we can draw conclusions about exhibit importance, where visitors tend to look, the positioning of the exhibits, and the duration of visits or explorations. The study will compare the visit experience when using two different system versions - a conventional one and one with an integrated eye tracker. As the comparison system for examining the user experience we will use the system of Kuflik et al. [2012], which was deployed in the Hecht Museum and uses “lightweight” proximity-based indoor positioning sensors for location awareness.</p>
    </sec>
    <sec id="sec-6">
      <title>6. PRELIMINARY RESULTS</title>
      <p>It was important to examine the accuracy of eye-gaze detection when using the Pupil Dev mobile eye-tracker device. For that, we conducted several small-scale user studies onsite.</p>
      <sec id="sec-6-1">
        <title>6.1 The Pupil eye tracker</title>
        <p>The Pupil eye tracker [Kassner et al. 2014] is an open source platform for pervasive eye tracking and gaze-based interaction. It comprises a lightweight eye-tracking headset that includes high-resolution scene and eye cameras, an open source software framework for mobile eye tracking, as well as a graphical user interface to play back and visualize video and gaze data. The software and GUI are platform-independent and include algorithms for real-time pupil detection and tracking, calibration, and accurate gaze estimation. Results of a performance evaluation show that Pupil can provide an average gaze estimation accuracy of 0.6 degrees of visual angle (0.08 degrees precision) with a processing pipeline latency of only 0.045 seconds.</p>
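        <p>For orientation, gaze samples can be consumed from Pupil in real time over its network interface. The sketch below assumes the Pupil Capture network API (a ZeroMQ request port on 50020 and msgpack-encoded gaze messages); the port and message fields follow the Pupil documentation, but should be treated as assumptions for our particular setup:</p>
        <preformat>
import zmq
import msgpack

# Sketch: subscribe to live gaze data from Pupil Capture over ZeroMQ.
ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")   # Pupil Remote default port (assumed)
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:%s" % sub_port)
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    # gaze["norm_pos"]: gaze point in normalized scene-camera coordinates
    x, y = gaze["norm_pos"]
        </preformat>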
      </sec>
      <sec id="sec-6-2">
        <title>6.2 User study 1: Look at grid cells</title>
        <p>Five students from the University of Haifa, without any visual disabilities, participated in this study. They were asked to look at a wall-mounted grid from a distance of 2 meters and to track a finger (see figure 2): for every cell that the finger pointed at, they were asked to look at it for approximately 3 seconds. Data was collected for determining the practical measurement accuracy. The results were as follows: on average, the fixation detection rate was ~80% (most missed fixations were at the edges/corners - see table 1 for details about misses). In addition, the average fixation-point error, in terms of distance from the centers of the grid cells, was approximately 5 cm (the exact error can be calculated using simple image processing techniques for detecting the green gaze-marker circle and applying a mapping transform to the real world; a sketch follows).</p>
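        <p>A minimal sketch of that computation follows, using OpenCV: detect the green gaze marker in a scene frame by color thresholding, then map its centroid to wall coordinates through a homography. The HSV bounds, and the assumption that the four grid corners were annotated in pixel coordinates, are ours:</p>
        <preformat>
import cv2
import numpy as np

# Sketch: locate the green gaze marker and measure its distance (in cm)
# from the center of the target grid cell on the wall plane.
def fixation_error_cm(frame_bgr, grid_corners_px, grid_size_cm, cell_center_cm):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([45, 80, 80]),
                       np.array([75, 255, 255]))   # assumed green range
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None   # marker not visible in this frame
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Homography from image pixels to the wall plane, in centimeters.
    wall = np.float32([[0, 0], [grid_size_cm[0], 0],
                       [grid_size_cm[0], grid_size_cm[1]], [0, grid_size_cm[1]]])
    H, _ = cv2.findHomography(np.float32(grid_corners_px), wall)
    wx, wy = cv2.perspectiveTransform(np.float32([[[cx, cy]]]), H)[0][0]
    return float(np.hypot(wx - cell_center_cm[0], wy - cell_center_cm[1]))
        </preformat>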
        <p>During the study we ran into several practical problems. The Pupil Dev eye tracker that we are using does not fit every person. The device consists of two cameras, the first capturing the scene and the second directed at the right eye for detecting fixations. In some cases, when the device was not fitted correctly, the field of view got smaller and parts of the pupil fell outside the capture frame (see figure 3 for an example); as a consequence, no fixations were detected. Another limitation was that tall participants had to step back from the object, which negatively affected the accuracy.</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3 User study 2: Look at an exhibit</title>
        <p>In this study we examined the accuracy of the eye tracker in a realistic setting. One participant (1.79 m tall) was asked to look at exhibits at the Hecht Museum. Several exhibits were chosen with different factors and constraints (see figures 4, 5, and 6). The main constraint in this case is the distance from the exhibit, since the visual range gets larger as the distance grows, and we have to cover all the objects that we are interested in. Table 2 presents the objects' height from the floor and the distance of the participant from the object. The next step was to examine fixation accuracy after making sure that the participant was standing at a proper distance. The participant was asked to look at different points in the exhibit/scene. For the gallery exhibits, the scan path was set to be the four corners of the picture and finally its center. For the vitrine exhibits, one point at the center of each jug was defined. It is important to note that the height/distance relation concerns the visual range (having the objects in the camera frame) and not fixation detection, since missed fixations could result from a set of constraints other than the distance from the object, something we have not examined yet.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. SYSTEM DESIGN</title>
      <p>
A smart context-aware mobile museum visitors' guide may
provide the visitor with personalized relevant information
from the vast amount of content available at the museum,
adapted for his or her personal needs. Furthermore, the system
may provide recommendations and location-relevant
information. However, the potential benefit also has a cost: the notifications may interrupt the user's current task and be annoying in the wrong context. Beja et al. [2015] examined the effect of notifications in a special leisure scenario - a museum visit. Following Beja et al. [2015], we will consider three different scenarios (a sketch of the resulting decision logic follows the list):
I. The visitor is looking at an exhibit. The region of interest will be defined as the region of the scene around the gaze fixation point, and the object matching procedure will be applied (see section 8). This will enable us to determine both the visitor’s position and the object of interest.
II. The visitor is looking at the tablet. Two cases are possible: 1) the visitor is watching multimedia information, in which case there is nothing to do; 2) the visitor may need a service or a recommendation from the system, in which case it is the right time to deliver it.
III. The visitor is wandering in the museum. According to Beja et al. [2015], this is the best time for sending notifications.</p>
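      <p>A minimal sketch of the decision logic implied by these three scenarios is given below; the state names, and the assumption that the visitor's state has already been classified from gaze and sensor data, are ours rather than part of Beja et al.'s design:</p>
      <preformat>
# Sketch: what to do given the visitor's current (assumed pre-classified) state.
def notification_policy(visitor_state, has_pending_notification):
    if visitor_state == "at_exhibit":
        return "run_object_matching"     # scenario I: find position and object
    if visitor_state == "at_tablet_watching":
        return "hold"                    # scenario II.1: do not interrupt playback
    if visitor_state == "at_tablet_idle" and has_pending_notification:
        return "deliver_recommendation"  # scenario II.2: right time to deliver
    if visitor_state == "wandering" and has_pending_notification:
        return "send_notification"       # scenario III: best time to notify
    return "hold"
      </preformat>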
      <p>As a basic system we will use the PIL museum visitors' guide system [Kuflik et al. 2012; Lanir et al. 2013]. The system is a context-aware, mobile museum visitors' guide system. Its positioning mechanism is based on proximity-based RF technology that identifies the visitor's position - when the visitor is near a point of interest. As vision is the main sense for gathering information, we plan to replace the system's positioning component with an eye-tracker based positioning and object-of-interest identification component. Hence we will enhance the positioning system by giving the system the ability to pinpoint the object of interest; the rest of the system will remain unchanged (a sketch of the component swap follows). Having these two system versions will enable us to compare and evaluate the benefits of the eye tracker as a positioning and pointing device in the museum.</p>
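      <p>The sketch below illustrates the intended component swap as an abstract positioning interface with interchangeable implementations; the class and method names are illustrative assumptions, not the PIL system's actual API:</p>
      <preformat>
from abc import ABC, abstractmethod

class PositioningProvider(ABC):
    @abstractmethod
    def current_point_of_interest(self):
        """Return the identifier of the point of interest the visitor is at."""

class ProximityRFPositioning(PositioningProvider):
    def current_point_of_interest(self):
        ...   # existing PIL mechanism: nearest proximity beacon

class EyeTrackerPositioning(PositioningProvider):
    def current_point_of_interest(self):
        ...   # scene-camera frame + gaze point -> object matching (section 8)
      </preformat>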
    </sec>
    <sec id="sec-8">
      <title>8. OBJECT MATCHING PROCEDURE</title>
      <sec id="sec-8-1">
        <title>8.1 Data-set preparation</title>
        <p>A set of images of the exhibits will be taken; each image may contain one or more objects. Each image will be given a distinct label value and the size of a region around each object (a rectangle, in terms of width and height). A sketch of such a data-set entry follows.</p>
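        <p>A hypothetical data-set entry might look as follows; the file name, labels and pixel values are placeholders, not actual museum data:</p>
        <preformat>
# Sketch: one reference image of an exhibit with labeled rectangular regions.
DATASET = [
    {
        "image": "hecht/vitrine_03.jpg",            # placeholder path
        "regions": [
            {"label": "jug_a", "x": 120, "y": 340, "w": 180, "h": 260},
            {"label": "jug_b", "x": 420, "y": 330, "w": 170, "h": 250},
        ],
    },
]
        </preformat>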
      </sec>
      <sec id="sec-8-2">
        <title>8.2 Object matching</title>
        <p>The matching procedure will be done in three steps (a sketch follows the list):
1. An eye-tracker scene camera frame is taken (figure 7) and image-to-image matching is applied. The result is an image with labeled regions matched to the current scene's frame (figure 8).
2. Mapping transformation - we need to transform the fixation point in the eye-tracker scene camera frame to the matching point in the image from step one (the data-set image with labeled regions), since the viewpoint of the objects can differ from that in the data set; for example, one image may be rotated relative to the other, or one may be zoomed in/out as a result of standing at a different distance from the object than when the data-set image was taken.
3. Finding the object - this step is simple, since we have a mapped fixation point and labeled regions. What remains is determining which object (if any) the point relates to.</p>
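        <p>The following Python/OpenCV sketch instantiates the three steps with ORB features and a RANSAC-estimated homography; the feature detector, the match threshold and the data-set structure (from section 8.1) are our assumptions, not a finalized design:</p>
        <preformat>
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def find_object(scene_gray, fixation_xy, ref_gray, regions):
    # Step 1: image-to-image matching between scene frame and data-set image.
    kp_s, des_s = orb.detectAndCompute(scene_gray, None)
    kp_r, des_r = orb.detectAndCompute(ref_gray, None)
    if des_s is None or des_r is None:
        return None
    matches = matcher.match(des_s, des_r)
    if len(matches) &lt; 10:                 # assumed minimum-evidence threshold
        return None
    # Step 2: mapping transformation (scene -> data-set image), robust to
    # rotation and zoom via a RANSAC-estimated homography.
    src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    px, py = cv2.perspectiveTransform(np.float32([[fixation_xy]]), H)[0][0]
    # Step 3: which labeled region (if any) contains the mapped fixation point?
    for r in regions:
        if r["x"] &lt;= px &lt;= r["x"] + r["w"] and r["y"] &lt;= py &lt;= r["y"] + r["h"]:
            return r["label"]
    return None
        </preformat>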
      </sec>
    </sec>
    <sec id="sec-9">
      <title>9. DISCUSSION</title>
      <p>We conducted these small-scale user studies in order to gain initial first-hand experience with the eye tracker in a realistic setting. Furthermore, we tried to clarify which exhibits are appropriate for inclusion in our future study and, given the limitations of the device, what portion of the museum exhibits may be included in general. Not surprisingly, we got a 100% accuracy rate when we examined the device in the art wing, since all the pictures are placed at an ideal height. The archeological wing is a considerably more challenging environment, since objects are placed at different heights and have unequal sizes. As a result, the visitor may have to stand far away from the objects in order to get them into the eye-tracker front camera frame, a fact that can negatively affect the visit experience. In the case of the archeological wing, we estimate that about 60% of the exhibits may be detectable with the current device. Regarding the low-height exhibits, we do not know yet whether they can be considered or not. More challenging exhibits are those that are placed in harsh lighting conditions or at a low height (see figure 9 for an example) and/or those that are too large to fit in one frame (see figure 10 for an example).</p>
    </sec>
    <sec id="sec-10">
      <title>10. CONCLUSIONS AND FUTURE WORK</title>
      <p>This paper presents work in progress that aims at exploring the potential contribution of mobile eye-tracking technology to enhancing the museum visit experience. For that, we have run small-scale experiments in order to get an understanding of the performance of the system in a realistic setting. We obtained satisfactory results from these studies, along with an understanding of the limitations of the equipment. The next step in the study is to design and build a museum mobile guide that extends the use of mobile eye tracking as a tool for identifying the visitor's position and points of interest. We will use the eye-tracker scene camera captures and the collected gaze data to develop a technique for location awareness. The system will run on a tablet, and the multimedia content will be delivered to the participants as an audio guide via earphones or as slides to watch. Furthermore, knowing exactly where the visitor looks in the scene (the specific object) will allow us to deliver personalized information. Our research will supplement today's mobile museum guides that use location-awareness technologies and techniques to enhance the visit experience. The system can also be extended and used in other venues, such as outdoor cultural heritage sites as well as shopping centers/markets, after further validation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Ardissono</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuflik</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Petrelli</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Personalization in cultural heritage: the road travelled and the one ahead. User modeling and user-adapted interaction</article-title>
          ,
          <volume>22</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>73</fpage>
          -
          <lpage>99</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Barrow</surname>
            ,
            <given-names>H. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tenenbaum</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bolles</surname>
            ,
            <given-names>R. C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wolf</surname>
            ,
            <given-names>H. C.</given-names>
          </string-name>
          (
          <year>1977</year>
          ).
          <article-title>Parametric correspondence and chamfer matching: Two new techniques for image matching (No. TN-153)</article-title>
          . SRI international, Menlo Park CA, Artificial Intelligence center.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Beja</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lanir</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kuflik</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Examining Factors Influencing the Disruptiveness of Notifications in a Mobile Museum Context</article-title>
          .
          <source>Human-Computer Interaction</source>
          , just accepted (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Bitgood</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>Environmental psychology in museums, zoos, and other exhibition centers</article-title>
          .
          <source>Handbook of environmental psychology</source>
          ,
          <volume>461</volume>
          -
          <fpage>480</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Brône</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oben</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Beeck</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Goedemé</surname>
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Towards a more effective method for analyzing mobile eye-tracking data: integrating gaze data with object recognition algorithms</article-title>
          .
          <source>UbiComp '11</source>
          ,
          Sep 17-21,
          <year>2011</year>
          , Beijing, China.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Bulling</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dachselt</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Duchowski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jacob</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stellmach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sundstedt</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Gaze interaction in the post-WIMP world</article-title>
          .
          <source>In CHI'12 Extended Abstracts on Human Factors in Computing Systems</source>
          ,
          <volume>1221</volume>
          -
          <fpage>1224</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Bulling</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zander</surname>
            ,
            <given-names>T. O.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Cognition-aware computing</article-title>
          .
          <source>Pervasive Computing</source>
          , IEEE,
          <volume>13</volume>
          (
          <issue>3</issue>
          ),
          <fpage>80</fpage>
          -
          <lpage>83</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>De Beugher</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brône</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Goedemé</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Automatic analysis of in-the-wild mobile eye-tracking experiments using object, face and person detection</article-title>
          .
          <source>In Proceedings of VISIGRAPP</source>
          <year>2014</year>
          ,
          <volume>1</volume>
          ,
          <fpage>625</fpage>
          -
          <lpage>633</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Dim</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kuflik</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Automatic detection of social behavior of museum visitor pairs</article-title>
          .
          <source>ACM Transactions on Interactive Intelligent Systems</source>
          ,
          <volume>4</volume>
          (
          <issue>4</issue>
          ),
          <fpage>17</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sterry</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <article-title>Portable computers &amp; interactive media: A new paradigm for interpreting museum collections</article-title>
          . In: Bearman, D., Trant, J. (eds.)
          <source>Cultural Heritage Informatics 1999: Selected papers from ICHIM</source>
          <year>1999</year>
          ,
          <volume>93</volume>
          -
          <fpage>101</fpage>
          . Kluwer Academic Publishers, Dordrecht.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Falk</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dierking</surname>
            ,
            <given-names>L. D.</given-names>
          </string-name>
          .
          <article-title>Learning from museums: Visitor experiences and the making of meaning</article-title>
          . Altamira Press,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Fitts</surname>
            ,
            <given-names>P. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>R. E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Milton</surname>
            ,
            <given-names>J. L.</given-names>
          </string-name>
          (
          <year>1950</year>
          ).
          <article-title>Eye movements of aircraft pilots during instrument-landing approaches</article-title>
          .
          <source>Aeronautical Eng. Review</source>
          <volume>9</volume>
          (
          <issue>2</issue>
          ),
          <fpage>24</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Goferman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zelnik-Manor</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Context-aware saliency detection</article-title>
          .
          <source>Pattern Analysis and Machine Intelligence</source>
          , IEEE Transactions on,
          <volume>34</volume>
          (
          <issue>10</issue>
          ),
          <fpage>1915</fpage>
          -
          <lpage>1926</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Grinter</surname>
            ,
            <given-names>R. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aoki</surname>
            ,
            <given-names>P. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szymanski</surname>
            ,
            <given-names>M. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thornton</surname>
            ,
            <given-names>J. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woodruff</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hurst</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>Revisiting the visit: understanding how technology can shape the museum visit</article-title>
          .
          <source>In Proceedings of CSCW</source>
          <year>2002</year>
          ,
          <volume>146</volume>
          -
          <fpage>155</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Gruen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>1985</year>
          ).
          <article-title>Adaptive least squares correlation: a powerful image matching technique</article-title>
          .
          <source>South African Journal of Photogrammetry, Remote Sensing and Cartography</source>
          ,
          <volume>14</volume>
          (
          <issue>3</issue>
          ),
          <fpage>175</fpage>
          -
          <lpage>187</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Hampapur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hyun</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Bolle</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          (
          <year>2001</year>
          , December).
          <article-title>Comparison of sequence matching techniques for video copy detection</article-title>
          .
          <source>In Electronic Imaging</source>
          <year>2002</year>
          .
          <fpage>194</fpage>
          -
          <lpage>201</lpage>
          . International Society for Optics and Photonics.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Hendrickson</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ailawadi</surname>
            ,
            <given-names>K. L.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Six lessons for in-store marketing from six years of mobile eyetracking research. Shopper Marketing and the Role of InStore Marketing</article-title>
          .
          <source>Review of Marketing Research</source>
          ,
          <volume>11</volume>
          ,
          <fpage>57</fpage>
          -
          <lpage>74</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Kardan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Conati</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Comparing and Combining Eye Gaze and Interface Actions for Determining User Learning with an Interactive Simulation</article-title>
          .
          <source>In proceedings of UMAP</source>
          <year>2013</year>
          ,
          <volume>215</volume>
          -
          <fpage>227</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Kassner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patera</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Bulling</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2014</year>
          , September).
          <article-title>Pupil: an open source platform for pervasive eye tracking and mobile gaze-based interaction</article-title>
          .
          <source>In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication</source>
          .
          <fpage>1151</fpage>
          -
          <lpage>1160</lpage>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Micha</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Economou</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Using Personal Digital Assistants (PDAs) to Enhance the Museum Visit Experience</article-title>
          .
          <source>In proceedings of PCI</source>
          <year>2005</year>
          , Volas, Greece,
          <source>November 11-13</source>
          ,
          <year>2005</year>
          . Proceedings.
          <volume>188</volume>
          -
          <fpage>198</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Kiefer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giannopoulos</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kremer</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schlieder</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Raubal</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Starting to get bored: An outdoor eye tracking study of tourists exploring a city panorama</article-title>
          .
          <source>In Proceedings of the Symposium on Eye Tracking Research and Applications</source>
          (pp.
          <fpage>315</fpage>
          -
          <lpage>318</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Vasudev</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Spatiotemporal sequence matching for efficient video copy detection. Circuits and Systems for Video Technology</article-title>
          , IEEE Transactions on,
          <volume>15</volume>
          (
          <issue>1</issue>
          ),
          <fpage>127</fpage>
          -
          <lpage>132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Kuflik</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lanir</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dim</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wecker</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corra</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zancanaro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Stock</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Indoor positioning in cultural heritage: Challenges and a solution</article-title>
          .
          <source>In Electrical &amp; Electronics Engineers in Israel (IEEEI)</source>
          ,
          <source>2012 IEEE 27th Convention of. 1-5</source>
          . IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Lanir</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuflik</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dim</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wecker</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Stock</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>The influence of a location-aware mobile guide on museum visitors' behavior</article-title>
          .
          <source>Interacting with Computers</source>
          ,
          <volume>25</volume>
          (
          <issue>6</issue>
          ),
          <fpage>443</fpage>
          -
          <lpage>460</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Laurent</surname>
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2007</year>
          )
          <article-title>Visual salience</article-title>
          .
          <source>Scholarpedia</source>
          ,
          <volume>2</volume>
          (
          <issue>9</issue>
          ):
          <fpage>3327</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Leinhardt</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Knutson</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          .
          <article-title>Listening in on museum conversations</article-title>
          .
          <source>Rowman Altamira</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>K. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sim</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kankanhalli</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Eye-2-I: Eye-tracking for just-in-time implicit user profiling</article-title>
          .
          <source>arXiv preprint arXiv:1507</source>
          .
          <fpage>04441</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Nakano</surname>
            ,
            <given-names>Y. I.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ishii</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>Estimating user's engagement from eye-gaze behaviors in human-agent conversations</article-title>
          .
          <source>In Proceedings of IUI 2010</source>
          ,
          <fpage>139</fpage>
          -
          <lpage>148</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Naphade</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yeung</surname>
            ,
            <given-names>M. M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Yeo</surname>
            ,
            <given-names>B. L.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Novel scheme for fast and efficient video sequence matching using compact signatures</article-title>
          .
          <source>Electronic Imaging</source>
          .
          <fpage>564</fpage>
          -
          <lpage>572</lpage>
          . International Society for Optics and Photonics.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Stock</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zancanaro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pianesi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tomasini</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Rocchi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Formative evaluation of a tabletop display meant to orient casual conversation</article-title>
          .
          <source>Journal of Knowledge, Technology and Policy</source>
          ,
          <volume>22</volume>
          (
          <issue>1</issue>
          ),
          <fpage>17</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Ohm</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Müller</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ludwig</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Bienk</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Where is the Landmark? Eye Tracking Studies in Large-Scale Indoor Environments</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <surname>Packer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ballantyne</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Solitary vs. shared: Exploring the social dimension of museum learning</article-title>
          .
          <source>Curator: The Museum Journal</source>
          ,
          <volume>48</volume>
          (
          <issue>2</issue>
          ),
          <fpage>177</fpage>
          -
          <lpage>192</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <surname>Pinto</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cox</surname>
            ,
            <given-names>D. D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>DiCarlo</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Why is real-world visual object recognition hard?</article-title>
          <source>PLoS Computational Biology</source>
          ,
          <volume>4</volume>
          (
          <issue>1</issue>
          ),
          <fpage>e27</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <surname>Pfeiffer</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Renner</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>EyeSee3D: A low-cost approach for analyzing mobile 3D eye tracking data using computer vision and augmented reality technology</article-title>
          .
          <source>In Proceedings of the Symposium on Eye Tracking Research and Applications</source>
          .
          <fpage>369</fpage>
          -
          <lpage>376</lpage>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <surname>Jacob</surname>
            ,
            <given-names>R. J. K.</given-names>
          </string-name>
          (
          <year>1991</year>
          ).
          <article-title>The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look At is What You Get</article-title>
          ,
          <source>ACM Transactions on Information Systems</source>
          ,
          <volume>9</volume>
          (
          <issue>3</issue>
          ),
          <fpage>152</fpage>
          -
          <lpage>169</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <surname>Jacob</surname>
            ,
            <given-names>R. J. K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Karn</surname>
            ,
            <given-names>K. S.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>Eye tracking in human-computer interaction and usability research: Ready to deliver the promises</article-title>
          . Elsevier Science BV.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <surname>Schrammel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mattheiss</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Döbelt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paletta</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Almer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tscheligi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Attentional behavior of users on the move towards pervasive advertising media</article-title>
          .
          <source>In Pervasive Advertising</source>
          ,
          <fpage>287</fpage>
          -
          <lpage>307</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <surname>Schrammel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Regal</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tscheligi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Attention approximation of mobile users towards their environment</article-title>
          .
          <source>In CHI'14 Extended Abstracts on Human Factors in Computing Systems</source>
          ,
          <fpage>1723</fpage>
          -
          <lpage>1728</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <surname>Hsi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>A study of user experiences mediated by nomadic web content in a museum</article-title>
          .
          <source>Journal of Computer Assisted Learning</source>
          ,
          <volume>19</volume>
          (
          <issue>3</issue>
          ),
          <fpage>308</fpage>
          -
          <lpage>319</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <surname>Toyama</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kieninger</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shafait</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dengel</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Gaze guided object recognition using a head-mounted eye tracker</article-title>
          .
          <source>ETRA '12 Proceedings of the Symposium on Eye Tracking Research and Applications</source>
          ,
          <fpage>91</fpage>
          -
          <lpage>98</lpage>
          . ACM, New York, NY, USA.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <surname>Thirion</surname>
            ,
            <given-names>J. P.</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <article-title>Image matching as a diffusion process: an analogy with Maxwell's demons</article-title>
          .
          <source>Medical image analysis</source>
          ,
          <volume>2</volume>
          (
          <issue>3</issue>
          ),
          <fpage>243</fpage>
          -
          <lpage>260</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lau</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Personalized online document, image and video recommendation via commodity eye-tracking</article-title>
          .
          <source>In Proceedings of RecSys 2008</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <surname>Yalowitz</surname>
            ,
            <given-names>S.S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Bronnenkant</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2009</year>
          )
          <article-title>Timing and tracking: unlocking visitor behavior</article-title>
          .
          <source>Visitor Studies</source>
          ,
          <volume>12</volume>
          ,
          <fpage>47</fpage>
          -
          <lpage>64</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deriche</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Faugeras</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Luong</surname>
            ,
            <given-names>Q. T.</given-names>
          </string-name>
          (
          <year>1995</year>
          ).
          <article-title>A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry</article-title>
          .
          <source>Artificial intelligence</source>
          ,
          <volume>78</volume>
          (
          <issue>1</issue>
          ),
          <fpage>87</fpage>
          -
          <lpage>119</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>