<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Visualization of Cultural-Heritage Content based on Individual Cognitive Differences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>George E. Raptis</string-name>
          <email>raptisg@upnet.gr</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christina Katsini</string-name>
          <email>katsinic@upnet.gr</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christos Fidas</string-name>
          <email>fidas@upatras.gr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nikolaos Avouris</string-name>
          <email>avouris@upatras.gr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Cultural Heritage Management and New Technologies, University of Patras</institution>
          ,
          <addr-line>Patras</addr-line>
          ,
          <country country="GR">Greece</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>HCI Group, Dept. of Electrical and Computer Engineering, University of Patras</institution>
          ,
          <addr-line>Patras</addr-line>
          ,
          <country country="GR">Greece</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Human Opsis and HCI Group, Dept. of Electrical and Computer Engineering, University of Patras</institution>
          ,
          <addr-line>Patras, Greece</addr-line>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Human Opsis and HCI Group, Dept. of Electrical and Computer Engineering, University of Patras</institution>
          ,
          <addr-line>Patras</addr-line>
          ,
          <country country="GR">Greece</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <volume>2091</volume>
      <abstract>
        <p>Comprehension of visual content is linked with the visitors' experience within cultural-heritage contexts. Considering the diversity of visitors with respect to human cognition and the influence of individual cognitive differences on information comprehension, current visualization techniques could lead to imbalances in visitors' learning and experience gains. In this paper, we investigate whether visualization of cultural-heritage content, tailored to the visitors' individual cognitive characteristics, would improve the comprehension of that content. We followed a two-step experimental approach and conducted two small-scale between-subject eye-tracking studies (an exploratory and a comparative study), in which people with different cognitive styles participated in a gallery tour. The analysis of the results of the exploratory study revealed that people with different cognitive styles differ in the way they process visual information, which influences content comprehension. Based on these results, we developed cognitive-centered visualizations and performed a comparative study, which revealed that such visualizations could help users comprehend the content. In this respect, individual cognitive differences could be used as the basis for providing personalized experiences to cultural-heritage visitors, aiming to help them towards content comprehension.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Human-centered computing → Empirical studies in HCI;
Visualization; HCI theory, concepts and models; • Computing
methodologies → Cognitive science;</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
        Over the last years, cultural heritage has been a favored domain for
personalization research [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Stakeholders from interdisciplinary
fields (e.g., computer science, user modeling, heritage sciences)
have collaborated to develop adaptive information systems that
provide personalized cultural-heritage experiences to the end-users
(e.g., museum visitors). When designing such systems, several
user-specific and context-specific aspects [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] must be considered to
provide the most appropriate content in the most suitable way to
the end-users, aiming to help them achieve a more efficient and
effective comprehension of the cultural-heritage content. With
regard to the user-specific aspects, the information-system designers
must accommodate the diversity of individuals, who have different
characteristics such as personality traits [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], goals [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and visiting
styles [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. One aspect that current practices do not consider an important
design factor is human cognition, although several researchers have
confirmed its effects on content comprehension in diverse application
domains, such as
usable security [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], gaming [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], and e-learning [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
      </p>
      <p>
        Given that cultural-heritage activities often include visual
content comprehension tasks (e.g., viewing a painting in an art
museum), human cognitive characteristics related to the
comprehension of visual information would be of great interest as a
personalization factor within a cultural-heritage context. The cognitive
style Visualizer-Verbalizer (V-V) is such a cognitive characteristic.
According to the V-V theory [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], information is processed and
mentally represented in two ways: verbally and visually. Hence,
individuals are distinguished into those who think more in
pictures (visualizers) and those who think more in words (verbalizers) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Research
has shown that V-V influences learning and content comprehension
[
        <xref ref-type="bibr" rid="ref10 ref12">10, 12</xref>
        ] and that it is associated with visual behavior [
        <xref ref-type="bibr" rid="ref10 ref13 ref28">10, 13, 28</xref>
        ].
      </p>
      <p>Although there is an extensive body of research which
underpins that V-V affects users’ comprehension of visual content,
current design approaches do not leverage these findings and do
not consider V-V an important factor when designing
cultural-heritage activities. This can be attributed to the fact that there is a
lack of understanding of the interplay among visual behavior,
cultural-heritage activities, and human-cognition factors, which has not
been investigated in depth. Hence, this results in an insufficient
understanding of whether and how to consider such human
cognitive factors practically within current state-of-the-art design
approaches. Therefore, the research question that this paper discusses
is whether V-V affects users’ content comprehension when
performing a typical cultural-heritage activity, and if so, whether there
are specific visualization types, based on users’ V-V cognitive style,
that can be used to help users towards a deeper understanding of
the visual cultural-heritage content.</p>
    </sec>
    <sec id="sec-3">
      <title>STUDIES AND RESULTS</title>
      <p>To answer the research question, we followed a two-step
between-subject experimental approach. In the first step, we performed
an exploratory study, investigating whether and how the visual
behavior of individuals with different V-V cognitive styles
influenced the comprehension of the cultural-heritage content. In
the second step, based on the results of the exploratory study, we
created cognitive-specific visualizations and performed a
comparative study, aiming to evaluate the effects of the cognitive-specific
visualizations.</p>
    </sec>
    <sec id="sec-4">
      <title>Exploratory Study</title>
      <p>2.1.1 Hypotheses. To answer the first part of the research
question, we formed the following null hypotheses:
H01 There is no difference between visualizers and verbalizers
regarding the content comprehension.</p>
      <p>H02 Visual behavior of visualizers and verbalizers is not
associated with the content comprehension.</p>
      <p>
        2.1.2 Cultural heritage activity. Considering that browsing
virtual collections and galleries is a popular way for delivering
cultural-heritage content [
        <xref ref-type="bibr" rid="ref26 ref30">26, 30</xref>
        ], we developed a web-based virtual-tour
application with five paintings of the National Gallery of Greece: a)
Child with rabbits by Polychronis Lembesis, b) Café Neon at night by
Yiannis Tsarouchis, c) The Sphinx in Cairo by Pericles Cirigotis, d) In
surgery by Georgios Roilos, and e) The dirge in Psara by Nikephoros
Lytras. The paintings are depicted in Figure 1. Each painting was
accompanied with a textual description, and thus, each painting
had two types of content: pictorial and textual.
      </p>
      <p>
        2.1.3 Instruments and metrics. To classify the participants as
either visualizers or verbalizers, we used a version of the
Verbal-Visual Learning Style Rating questionnaire (VVLSR) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and the
Verbalizer-Visualizer Questionnaire (VVQ) [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. Both tests have
been widely used in similar studies in varying contexts, such as
e-learning [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and comprehension of multimedia material [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>To measure the visual-content comprehension (VCC), we
designed a post-test VCC questionnaire. It consisted of ten
multiple-choice questions (two questions for each painting: one about the
pictorial content and one about the textual content), with high
reliability (.738) according to the Kuder-Richardson-20 test. None of
the participants had seen the paintings before, thus, they had no
prior knowledge about their content.</p>
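      <p>The KR-20 reliability reported above can be computed directly from the participants' 0/1 answer matrix. The following is a minimal Python sketch, not the study's actual analysis code; the variable names and the variance convention (population variance of total scores) are our assumptions:</p>

```python
def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) items.
    responses: one list per participant, one 0/1 entry per question.
    Uses the population variance of total scores (a common convention)."""
    n = len(responses)          # number of participants
    k = len(responses[0])       # number of items (here: 10 questions)
    # proportion answering each item correctly (p); q = 1 - p
    p = [sum(r[i] for r in responses) / n for i in range(k)]
    sum_pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(r) for r in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - sum_pq / var_t)
```

      <p>A perfectly consistent answer pattern yields a coefficient of 1; values around .7 or higher, such as the .738 above, are conventionally taken as acceptable reliability.</p>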
      <p>
        Regarding the eye-tracking metrics, we focused on fixations on
the areas of interest (AOIs), following common practice [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Given
that each painting was accompanied with a textual description,
two diferent types of AOI are identified: pictorial and textual AOIs.
For each type, we measured the number of fixations in each AOI, the
fixation duration in each AOI, the entry time in each AOI, the number
of transitions among AOIs, and the fixation ratio. For each metric,
we computed the following measures: sums, means, maxima, minima.
To capture the participants’ eye-gaze behavior we used Tobii Pro
Glasses 2 at 50Hz.
      </p>
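      <p>The per-AOI metrics described above can be accumulated from a time-ordered fixation stream. A minimal sketch follows; the record format and field names are illustrative assumptions, not the Tobii export format:</p>

```python
def aoi_metrics(fixations):
    """Aggregate fixation metrics per AOI type.
    fixations: time-ordered (aoi_type, start_ms, duration_ms) tuples,
    with aoi_type in {"pictorial", "textual"}."""
    metrics = {}
    transitions = 0   # switches between AOI types
    prev = None
    for aoi, start, dur in fixations:
        # entry_time is the start of the first fixation in this AOI type
        m = metrics.setdefault(aoi, {"count": 0, "total_dur": 0, "entry_time": start})
        m["count"] += 1
        m["total_dur"] += dur
        if prev is not None and aoi != prev:
            transitions += 1
        prev = aoi
    return metrics, transitions
```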
      <p>2.1.4 Participants. 23 adult individuals (10 females and 13 males),
ranging in age between 18 and 33 years old (m = 23.3, sd = 4.9),
took part in the study. According to VVLSR and VVQ, 12
participants were classified as visualizers and 11 participants were
classified as verbalizers.</p>
      <p>2.1.5 Procedure. We recruited 23 study participants, using
varying methods (e.g., personal contacts, social media announcements).
The participants had to meet a set of minimum requirements: have
never taken VVQ and VVLSR tests before; be older than 18 years;
know nothing about the paintings used in the study; have little
knowledge of art history and theory. All participants were informed
about the study and signed a consent form. For each participant,
we scheduled a single virtual exhibition tour of the study paintings.
Each virtual tour took place in our lab at a mutually agreed date
and time. Before entering the tour, the participant completed the
VVQ and VVLSR tests (20 minutes). Next, she/he navigated through
the scene (20 minutes) and viewed all the paintings (no view-order
restrictions). Then, she/he was distracted by a playful activity (30
minutes), which was not relevant to the virtual tour. Finally, she/he
filled in a form about demographic information and answered the
VCC questionnaire (15 minutes).</p>
      <p>2.1.6 Results. To investigate H01, we performed a Mann-Whitney
U Test. The test met the required assumptions, as the distributions
of the correct answers (i.e., VCC score) for both visualizers and
verbalizers were similar, as assessed by visual inspection. The median
scores of visualizers and verbalizers were not statistically significantly
different (Table 1). However, the analysis of the comprehension
of each type of content (i.e., VCCpic for pictorial-content
comprehension and VCCtext for textual-content comprehension) revealed
significant differences. In particular, visualizers had a significantly
better VCCpic (U = 32.000, z = −2.217, p = .027), while verbalizers
had a significantly better VCCtext (U = 33.500, z = −2.287, p =
.022).</p>
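      <p>For reference, the Mann-Whitney U test used above maps directly onto SciPy; the per-participant score lists below are hypothetical stand-ins, not the study data:</p>

```python
from scipy import stats

# hypothetical VCCpic scores (correct answers out of 5 pictorial questions)
visualizers = [5, 4, 5, 4, 5, 3, 4, 5, 4, 5, 4, 5]   # n = 12
verbalizers = [3, 2, 3, 4, 2, 3, 2, 3, 4, 2, 3]      # n = 11

# two-sided test, as in the reported analysis
u, p = stats.mannwhitneyu(visualizers, verbalizers, alternative="two-sided")
print(f"U = {u:.3f}, p = {p:.3f}")
```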
      <p>To investigate H02, we performed a series of Spearman’s
correlation tests between the visual-behavior metrics and VCC. The
results revealed several low and moderate correlations, and a strong
positive correlation (rs = .883, p &lt; .001) between VCC and the ratio
of fixation duration on pictorial and textual AOIs (Equation 1).</p>
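      <p>The correlation analysis above corresponds to SciPy's spearmanr; with hypothetical per-participant values (not the study data):</p>

```python
from scipy import stats

# hypothetical per-participant fixation-duration ratios and VCC scores
vb_dur_ratio = [0.8, 1.0, 1.2, 1.3, 1.5, 1.7, 1.9, 2.1, 2.4, 2.6]
vcc_score = [3, 4, 4, 5, 6, 6, 7, 8, 8, 9]

rs, p = stats.spearmanr(vb_dur_ratio, vcc_score)
print(f"rs = {rs:.3f}, p = {p:.3f}")  # monotonically related data -> rs near 1
```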
      <p>VB<sub>dur-ratio</sub> = Fixation.duration<sub>pictorial.aois</sub> / Fixation.duration<sub>textual.aois</sub> (1)</p>
      <p>To further investigate the effect of the V-V cognitive style on the
visual behavior of the users, we performed an independent-samples
t-test to determine whether there were differences in VB<sub>dur-ratio</sub>
between visualizers and verbalizers. The test met all the required
assumptions. VB<sub>dur-ratio</sub> was higher for visualizers (m =
1.890, sd = .775) than for verbalizers (m = 1.238, sd = .299), a
statistically significant difference (p = .017, t(21) = 2.619, d =
1.110, 95% CI: [.135, .172]). The results underpin that visualizers
tend to perform longer fixations on the pictorial AOIs, while
verbalizers tend to perform longer fixations on the textual AOIs.</p>
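      <p>Equation 1 and the follow-up independent-samples t-test can be sketched as follows; the per-participant ratio arrays are hypothetical, and the Cohen's d computation (pooled standard deviation) is our addition for illustration:</p>

```python
import numpy as np
from scipy import stats

def vb_dur_ratio(pictorial_durations, textual_durations):
    # Equation (1): total fixation duration on pictorial AOIs divided by
    # total fixation duration on textual AOIs, for one participant.
    return np.sum(pictorial_durations) / np.sum(textual_durations)

# hypothetical per-participant ratios for the two groups
visualizers = np.array([1.9, 2.6, 1.4, 2.8, 1.2, 2.2, 1.6, 2.4, 1.1, 2.0, 1.5, 2.0])
verbalizers = np.array([1.2, 1.0, 1.5, 1.3, 0.9, 1.4, 1.1, 1.6, 1.2, 1.3, 1.2])

# independent-samples t-test, as in the reported analysis
t, p = stats.ttest_ind(visualizers, verbalizers)

# Cohen's d from the pooled standard deviation
n1, n2 = len(visualizers), len(verbalizers)
pooled = np.sqrt(((n1 - 1) * visualizers.var(ddof=1)
                  + (n2 - 1) * verbalizers.var(ddof=1)) / (n1 + n2 - 2))
d = (visualizers.mean() - verbalizers.mean()) / pooled
print(f"t({n1 + n2 - 2}) = {t:.3f}, p = {p:.3f}, d = {d:.3f}")
```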
    </sec>
    <sec id="sec-5">
      <title>Visualization</title>
      <p>The results underpin the necessity of providing customized
visualizations for both visualizers and verbalizers, in order to help them
comprehend better the content of the paintings. Considering that
visualizers have an inherent preference for pictorial content, while
verbalizers have an inherent preference for textual content, we
propose a cognition-based visualization that aims to trigger the
visualizers’ attention to textual AOIs and the verbalizers’ attention to
pictorial AOIs. Through the cognition-based visualization we expect
visualizers to comprehend better the textual content and verbalizers
to comprehend better the pictorial content of the paintings.</p>
      <p>
        A common approach to making an individual with specific
cognitive characteristics focus on specific AOIs is to exclude the other
AOIs [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. However, this cannot be applied in a virtual gallery tour,
where both pictorial and textual AOIs are important to the visitor.
Therefore, we cannot exclude one type or another, but we need to
direct users’ attention to the AOI type that they do not inherently
prefer. In particular, we need to direct the visualizers’ attention to
textual AOIs and the verbalizers’ attention to pictorial AOIs.
      </p>
      <p>
        To help visualizers pay more attention to the textual AOIs and
increase textual-content comprehension, we adopted a popular
technique from the literature: emphasizing specific keywords that are
critical for better comprehension [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Hence, the textual AOIs can
be visualized in two ways: the default way, which is recommended
for verbalizers, and the emphasizing way, which is recommended
for visualizers. To help verbalizers pay more attention to the
pictorial AOIs and increase pictorial-content comprehension, we applied
a saliency filter to the pictorial AOIs, which is a typical technique
to attract attention to specific areas of pictures [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Hence, the
pictorial AOIs can be visualized in two ways: the default way, which
is recommended for visualizers, and the salient way, which is
recommended for verbalizers. The simple dichotomous algorithm (in
pseudo-code) to define the visualization of each painting is:
      </p>
      <p>Algorithm 1: Simple dichotomous algorithm to set the visualization
of an AOI based on the user’s V-V cognitive dimension
1: procedure SetCognitionBasedVisualization
2:   if user is visualizer then
3:     Set AOI → text → vis to "emphasis"
4:     Set AOI → pic → vis to "default"
5:   else
6:     Set AOI → text → vis to "default"
7:     Set AOI → pic → vis to "salient"</p>
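      <p>Algorithm 1 translates directly into a small lookup function. The Python sketch below is ours, not the paper's implementation; the function and key names are illustrative:</p>

```python
def set_cognition_based_visualization(user_style):
    """Dichotomous rule of Algorithm 1: pick the visualization mode of the
    textual and pictorial AOIs from the user's V-V classification."""
    if user_style == "visualizer":
        # direct the visualizer's attention to the text
        return {"text": "emphasis", "pic": "default"}
    # direct the verbalizer's attention to the picture
    return {"text": "default", "pic": "salient"}
```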
    </sec>
    <sec id="sec-6">
      <title>Comparative study</title>
      <p>To investigate whether the cognition-based visualization would
assist visualizers and verbalizers to comprehend better the paintings’
content, we conducted a between-subject comparative study.</p>
      <p>2.3.1 Hypotheses. To answer the second part of the research
question, we formed the following null hypotheses:
H03 Cognition-based visualization does not significantly affect
the visual behavior of visualizers and verbalizers.</p>
      <p>H04 Cognition-based visualization does not significantly affect
the comprehension of visualizers and verbalizers regarding the
paintings’ content.</p>
      <p>2.3.2 Cultural heritage activity. The activity was the same as
the one discussed in the exploratory study. However, the
cognition-based visualization was applied to each painting, depending on
the V-V cognitive dimension of the user.</p>
      <p>2.3.3 Instruments and metrics. They were identical to the
instruments and metrics used in the exploratory study.</p>
      <p>2.3.4 Participants. We recruited 20 adult individuals (8 females,
12 males) ranging in age between 20 and 31 years old (m = 25.3, sd =
3.8). According to VVLSR and VVQ, 10 participants were classified
as visualizers and 10 participants were classified as verbalizers.</p>
      <p>2.3.5 Procedure. We followed the same study procedure as in
the exploratory study.</p>
      <p>2.3.6 Results. To investigate H03, we performed a two-way
ANOVA with the V-V cognitive dimension and the type of the
visualization as the independent variables, and VBdur-ratio as
the dependent variable. The test met all the required assumptions.
The results revealed a significant interaction effect (F (1, 39) =
4.835, p = .034, eta = .110). Focusing on each independent
variable, a significant effect was revealed both for the cognitive dimension
(F (1, 39) = 6.272, p = .019, eta = .129) and the visualization type
(F (1, 39) = 4.039, p = .047, eta = .104). Regarding the main
effects, the visualization type helped the visualizers most, as they
increased their fixation duration on the textual AOIs, and thus their
VBdur-ratio decreased (F (1, 39) = 9.039, p = .005, eta = .188).
No main effects were revealed for the verbalizers regarding the
visualization type. Regarding the cognitive dimension, no effects were
revealed for the subjects who used the cognition-based
visualization type, while there were significant effects for the subjects who
used the default visualization type, as discussed in the exploratory
study. The results are depicted in Figure 2.</p>
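      <p>A balanced two-way ANOVA with interaction, as used above, can be computed from sums of squares. This is a generic sketch with hypothetical data, not the study's analysis script; partial eta squared is reported as ss_effect / (ss_effect + ss_error):</p>

```python
import numpy as np
from scipy import stats

def two_way_anova(y, a, b):
    """Two-way ANOVA with interaction for a balanced design.
    y: responses; a, b: factor labels (e.g., cognitive dimension, visualization).
    Returns {effect: (F, p, partial_eta_squared)}."""
    y, a, b = np.asarray(y, float), np.asarray(a), np.asarray(b)
    grand = y.mean()
    lv_a, lv_b = np.unique(a), np.unique(b)
    # main-effect and full-model (cell) sums of squares
    ss_a = sum((a == i).sum() * (y[a == i].mean() - grand) ** 2 for i in lv_a)
    ss_b = sum((b == j).sum() * (y[b == j].mean() - grand) ** 2 for j in lv_b)
    ss_cells = sum(((a == i) & (b == j)).sum()
                   * (y[(a == i) & (b == j)].mean() - grand) ** 2
                   for i in lv_a for j in lv_b)
    ss_ab = ss_cells - ss_a - ss_b                 # interaction
    ss_err = sum(((y[(a == i) & (b == j)]
                   - y[(a == i) & (b == j)].mean()) ** 2).sum()
                 for i in lv_a for j in lv_b)      # within-cell error
    df_err = len(y) - len(lv_a) * len(lv_b)
    table = {}
    for name, ss, df in (("a", ss_a, len(lv_a) - 1),
                         ("b", ss_b, len(lv_b) - 1),
                         ("a:b", ss_ab, (len(lv_a) - 1) * (len(lv_b) - 1))):
        f = (ss / df) / (ss_err / df_err)
        table[name] = (f, stats.f.sf(f, df, df_err), ss / (ss + ss_err))
    return table

# crossed design: style x visualization, five hypothetical participants per cell
style = ["vis"] * 10 + ["verb"] * 10
viz = (["default"] * 5 + ["cognition"] * 5) * 2
ratio = [1.9, 2.0, 1.8, 2.1, 1.7,   # vis / default: high ratio
         1.3, 1.4, 1.2, 1.5, 1.3,   # vis / cognition: ratio pulled down
         1.2, 1.1, 1.3, 1.2, 1.0,   # verb / default
         1.2, 1.3, 1.1, 1.2, 1.1]   # verb / cognition
result = two_way_anova(ratio, style, viz)
```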
      <p>To investigate H04, we performed a two-way ANOVA with the
V-V cognitive dimension and the type of the visualization as the
independent variables, and VCC as the dependent variable. The
test met all the required assumptions. The results revealed no
interaction effect. Focusing on each content type, the analysis
revealed no effects for VCCpic (Figure 3). Regarding VCCtext, the
analysis revealed an effect both for the V-V cognitive dimension
(F (1, 39) = 7.013, p = .012, eta = .152) and the visualization
type (F (1, 39) = 8.940, p = .005, eta = .186). Focusing on main
effects, visualizers who used the cognition-based visualization
provided significantly more correct answers regarding the textual AOIs
(F (1, 39) = 5.520, p = .024, eta = .124), as depicted in Figure 4.</p>
    </sec>
    <sec id="sec-7">
      <title>DISCUSSION</title>
      <p>
        The results of the exploratory study underpin that individual
cognitive differences have an impact on the users’ visual behavior and
content comprehension when performing a cultural activity. As
expected, visualizers focused on the pictorial content and the
verbalizers focused on the textual content in the visual exploratory
activity (i.e., virtual gallery tour), verifying the results of other
studies [
        <xref ref-type="bibr" rid="ref10 ref28">10, 28</xref>
        ] in other domains. Considering that each
painting provided information both in pictorial and textual format, the
overall content comprehension of visualizers and verbalizers
did not differ, but it was average. The inherent preference
of visualizers for pictorial content influenced the content-related
comprehension, as they comprehended the content of the pictorial
areas of interest, but not the content of the textual areas of interest,
on which they produced shorter fixations, which implies
difficulties in memorability [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. Likewise, the inherent preference of
verbalizers for processing textual information resulted in shorter
fixations on the pictorial areas of interest. Hence, verbalizers had
low performance regarding pictorial-content comprehension, but
they performed well regarding textual-content comprehension.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Cognition-based visualizations</title>
      <p>The aforementioned results underpin the necessity of adopting
cognition-based visualizations to help both visualizers and
verbalizers to comprehend better the visual information presented in
cultural-heritage contexts. We proposed a simple dichotomous rule
(Algorithm 1) which provides a customized visualization of each
art-exhibit based on the cognitive profile of the user. In the case
of a visualizer, the visualization type aims to direct her/his
attention to textual areas of interest, while in the case of a verbalizer,
the visualization type aims to direct her/his attention to pictorial
areas of interest. To evaluate the proposed visualization
mechanism, we performed a small-scale between-subject eye-tracking
study. The results revealed that the cognition-based visualization
helped both user types to perform better regarding the
comprehension of the paintings’ content. The visualizers who used the
cognition-based visualization mechanism provided more correct
answers to the textual-content questions than the visualizers who
used the default mechanism. Likewise, the verbalizers who used
the cognition-based visualization mechanism provided more
correct answers to the pictorial-content questions than the verbalizers
who used the default mechanism. At the same time, there were
no differences between visualizers and verbalizers regarding
either the pictorial or the textual content comprehension. Therefore,
they both increased the overall score of the content comprehension
(including questions related to both pictorial and textual content).</p>
    </sec>
    <sec id="sec-9">
      <title>Towards a cognition-centered approach for presenting cultural-heritage content</title>
      <p>The results of the comparative study underpin the necessity of
adopting a cognition-centered approach, such as a framework, to
deliver personalized cultural-heritage activities, tailored to the users’
individual cognitive preferences and needs. Such a framework is
expected to benefit both cultural-heritage stakeholders and end-users.
Stakeholders from interdisciplinary fields (e.g., curators, educators,
guides, designers) are expected to use such a framework to create
personalized cultural-heritage activities, tailored to the cognitive
characteristics of the end-users (e.g., museum visitors). End-users
are expected to benefit in achieving their goals (e.g.,
improved content comprehension) through cognition-effortless
personalized interventions, as these adapt to the end-users’ individual
cognitive characteristics.</p>
      <p>
        As discussed in [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], the cognition-centered framework consists
of two main modules: the user-modeling module and the
personalization module. The user-modeling module is responsible for
eliciting, storing, and maintaining cognition-centered user profiles. It can be based
on elicitation mechanisms which exploit data from various sources,
such as eye-gaze interaction [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] and social-behavior data [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Refinement processes based on machine-learning and computer-vision
techniques can be used to ensure the accuracy and the robustness
of the user-modeling module.
      </p>
      <p>
        The personalization module aims to adapt the cultural-heritage
activity to the unique personalized configurations for users with
specific cognitive characteristics. The personalization engine takes
as an input the cognitive profile of the user, provided by the
usermodeling module, and exports the personalized cognition-based
visualizations, following a rule-based approach. Studies like the
reported one provide the personalization rules. Following an
inclusive and open approach, the cognition-centered framework should
support various cognitive styles and skills that have been found to
affect users’ experience and/or behavior in cultural-heritage
contexts, such as field dependence-independence [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], visual working
memory [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], and personality traits [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
    </sec>
    <sec id="sec-10">
      <title>Implicit elicitation of Visualizer-Verbalizer cognitive style</title>
      <p>
        The study results revealed that there is a strong correlation between
users’ visual behavior and content-comprehension, when
considering the Visualizer-Verbalizer cognitive dimension as the control
factor. Given that eye-trackers are becoming cheaper, smaller, and more
robust, are integrated in varying technological frameworks,
such as mobile devices [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and head-mounted displays [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and
have already been used and evaluated within cultural-heritage
contexts [
        <xref ref-type="bibr" rid="ref14 ref17">14, 17</xref>
        ], eye-gaze data could be the building blocks of the
cognition-centered framework, aiming to a) implicitly elicit the users’
cognitive profiles and b) provide personalized visualizations.
      </p>
      <p>
        Considering the recent works on eye-gaze based elicitation of
users’ cognitive characteristics [
        <xref ref-type="bibr" rid="ref23 ref27 ref8">8, 23, 27</xref>
        ] and the technological
advances in the eye-tracking industry, the development of
transparent, run-time elicitation modules that would model users
according to their cognitive characteristics is feasible in the near
future and in immersive contexts that are based on visual interaction,
such as mixed-reality [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Our recent works have revealed that the
elicitation of the users’ cognitive style can be performed with high
accuracy and in the early stages of a visual search activity when
considering task complexity [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], task segments [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], and time [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]
as the elicitation parameters along with the eye-gaze data.
      </p>
      <p>Therefore, our study findings could contribute to building a
user-modeling module which extends the current range of
cognitive characteristics and increases the validity of other studies (and
eventually the elicitation accuracy and performance). Based on
the transparent and run-time elicitation of users’ cognitive
characteristics, adaptation interventions can be applied in order for the
cognition-centered framework to provide personalized
visualizations, tailored to the users’ individual characteristics. For example,
when a user is classified as a visualizer in a virtual gallery tour, the
framework would provide her/him with default pictorial areas of
interest along with emphasized textual AOIs, based on the
appropriate adaptation rules, aiming to disperse her/his attention across both
types of areas of interest.</p>
    </sec>
    <sec id="sec-11">
      <title>STUDY VALIDITY AND LIMITATIONS</title>
      <p>This research work entails several limitations inherent to the
multidimensional character and complexity of the factors investigated.
Regarding internal validity, the study environment and the study
procedure remained the same for all participants. The methodology
and statistical tests used to answer the research objectives met all
the required assumptions, despite the rather limited size of the
sample, providing internally valid results.</p>
      <p>Regarding the ecological validity of our study, the study sessions
were performed at times and on days convenient for each participant. The
desktop computer was powerful enough to support the virtual
tour, so poor performance did not affect the participants’
experience. The use of eye-tracking technology was
a limitation, as individuals do not use such equipment when
performing computer-mediated activities. However, the fact that
the eye-tracking technology used was wearable glasses made
the participants feel more comfortable after a while, as they could
interact with the system as they would normally do. At this point, it
is worth mentioning that we used an expensive and accurate
eye-tracking apparatus, which could hinder the application of such
schemes in typical real-life cultural-heritage scenarios. Therefore,
there is a need to investigate whether we would have the same
results when using more conventional and cheaper eye-tracking
tools (e.g., based on web-camera feed) or whether simple eye-gaze
data that are easily detected, such as number of blinks, could provide
similar results.</p>
      <p>
        For the scope of the study, we focused only on visual
interactions. However, cultural-heritage activities also include audio-based
and spatial interactions, such as storytelling applications [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and
location-based games [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Hence, there is a need to investigate
whether individual cognitive characteristics influence visitors’
behavior and experience in such contexts. Along the same lines, recent
studies in the cultural-heritage domain have stressed the importance
of visitors’ emotional engagement [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], an aspect that needs to
be investigated in relation to visitors’ cognitive characteristics.
      </p>
      <p>
        We expect that our results will be replicated for activities that are
based on visual-search tasks, which can be found in various domains
besides cultural heritage, such as e-shopping, e-learning, and
engineering. Regarding the technological context, we expect our results
to be applicable to contexts that exploit technologies across
the virtuality continuum (AR/MR/VR), especially those that
create environments rich in visual information, such as head-mounted
displays (HMDs) and cave automatic virtual environments (CAVEs).
Finally, our study increases the external validity of studies which
investigate the effects of the Visualizer-Verbalizer cognitive style on
visual-search tasks [
        <xref ref-type="bibr" rid="ref10 ref28">10, 28</xref>
        ].
      </p>
    </sec>
    <sec id="sec-12">
      <title>CONCLUSION</title>
      <p>In this paper, we first reported the results of an eye-tracking study
aiming to investigate the effects of the V-V cognitive style on the
comprehension of the content of five paintings during a virtual gallery
tour, and we explained the results considering the users’ visual
behavior. Significant differences were revealed between visualizers and
verbalizers regarding the comprehension of pictorial and textual
content. Their performance was also strongly related to their visual
behavior, which differed between visualizers and verbalizers.
Hence, this paper provides evidence that users with different V-V
cognitive styles follow different strategies when performing a visual
exploratory cultural-heritage activity (e.g., a virtual gallery tour).
These strategies are reflected in their visual behavior, and they lead
to imbalances in content comprehension. Triggered by the
study results, we designed an assistive mechanism based on the
visual behavior of visualizers and verbalizers, which provided
customized cognition-based visualizations of the paintings. To
evaluate its efficiency, we conducted a comparative eye-tracking study.
The results revealed that the cognition-based visualizations helped
visualizers and verbalizers to better comprehend textual and
pictorial content, respectively. Therefore, this work provides
evidence that cognitive styles (e.g., Visualizer-Verbalizer) can be
used to provide personalized cultural-heritage experiences, aiming
to improve content comprehension and eliminate learning
imbalances between users with different cognitive characteristics.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Angeliki</given-names>
            <surname>Antoniou</surname>
          </string-name>
          and
          <string-name>
            <given-names>George</given-names>
            <surname>Lepouras</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Modeling Visitors' Profiles: A Study to Investigate Adaptation Aspects for Museum Learning Technologies</article-title>
          .
          <source>Journal on Computing and Cultural Heritage (JOCCH) 3</source>
          ,
          <issue>2</issue>
          , Article 7 (Oct.
          <year>2010</year>
          ),
          <volume>19</volume>
          pages. https://doi.org/10.1145/1841317.1841322
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Liliana</given-names>
            <surname>Ardissono</surname>
          </string-name>
          , Tsvi Kuflik, and
          <string-name>
            <given-names>Daniela</given-names>
            <surname>Petrelli</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Personalization in Cultural Heritage: The Road Travelled and the One Ahead</article-title>
          .
          <source>User Modeling and User-Adapted Interaction 22</source>
          ,
          <issue>1</issue>
          (
          <issue>01</issue>
          <year>Apr 2012</year>
          ),
          <fpage>73</fpage>
          -
          <lpage>99</lpage>
          . https://doi.org/10.1007/s11257-011-9104-x
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Barz</surname>
          </string-name>
          , Florian Daiber, and
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Bulling</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Prediction of Gaze Estimation Error for Error-aware Gaze-based Interfaces</article-title>
          .
          <source>In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research &amp; Applications. ACM</source>
          ,
          <volume>275</volume>
          -
          <fpage>278</fpage>
          . https://doi.org/10.1145/2857491.2857493
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Chih-Ming Chen</surname>
          </string-name>
          and
          <string-name>
            <surname>Sheng-Hui Huang</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Web-based Reading Annotation System with an Attention-based Self-regulated Learning Mechanism for Promoting Reading Performance</article-title>
          .
          <source>British Journal of Educational Technology</source>
          <volume>45</volume>
          ,
          <issue>5</issue>
          (
          <year>2014</year>
          ),
          <fpage>959</fpage>
          -
          <lpage>980</lpage>
          . https://doi.org/10.1111/bjet.12119
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Eyal</given-names>
            <surname>Dim</surname>
          </string-name>
          and
          <string-name>
            <given-names>Tsvi</given-names>
            <surname>Kuflik</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Automatic Detection of Social Behavior of Museum Visitor Pairs</article-title>
          .
          <source>ACM Transactions on Interactive Intelligent Systems (TIIS) 4</source>
          ,
          <issue>4</issue>
          , Article 17 (Nov.
          <year>2014</year>
          ),
          <volume>30</volume>
          pages. https://doi.org/10.1145/2662869
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Seongwon</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sungwon</given-names>
            <surname>Yang</surname>
          </string-name>
          , Jihyoung Kim, and
          <string-name>
            <given-names>Mario</given-names>
            <surname>Gerla</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>EyeGuardian: A Framework of Eye Tracking and Blink Detection for Mobile Device Users</article-title>
          .
          <source>In Proceedings of the Twelfth Workshop on Mobile Computing Systems &amp; Applications. ACM</source>
          , 6. https://doi.org/10.1145/2162081.2162090
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Akrivi</given-names>
            <surname>Katifori</surname>
          </string-name>
          , Manos Karvounis, Vassilis Kourtis, Marialena Kyriakidi, Maria Roussou, Manolis Tsangaris, Maria Vayanou, Yannis Ioannidis, Olivier Balet, Thibaut Prados, Jens Keil, Timo Engelke, and
          <string-name>
            <given-names>Laia</given-names>
            <surname>Pujol</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>CHESS: Personalized Storytelling Experiences in Museums</article-title>
          . In Interactive Storytelling, Alex Mitchell,
          <string-name>
            <surname>Clara</surname>
          </string-name>
          Fernández-Vara, and David Thue (Eds.). Springer International Publishing, Cham,
          <fpage>232</fpage>
          -
          <lpage>235</lpage>
          . https://doi.org/10.1007/978-3-319-12337-0_28
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Christina</given-names>
            <surname>Katsini</surname>
          </string-name>
          , Christos Fidas, George E. Raptis, Marios Belk, George Samaras, and
          <string-name>
            <given-names>Nikolaos</given-names>
            <surname>Avouris</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Eye Gaze-driven Prediction of Cognitive Differences during Graphical Password Composition</article-title>
          .
          <source>In 23rd International Conference on Intelligent User Interfaces. ACM</source>
          ,
          <volume>147</volume>
          -
          <fpage>152</fpage>
          . https://doi.org/10.1145/3172944.3172996
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Christina</given-names>
            <surname>Katsini</surname>
          </string-name>
          , Christos Fidas, George E. Raptis, Marios Belk, George Samaras, and
          <string-name>
            <given-names>Nikolaos</given-names>
            <surname>Avouris</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Influences of Human Cognition and Visual Behavior on Password Strength During Picture Password Composition</article-title>
          .
          <source>In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18)</source>
          . ACM, New York, NY, USA, Article
          <volume>87</volume>
          , 14 pages. https://doi.org/10.1145/3173574.3173661
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Marta</given-names>
            <surname>Koć-Januchta</surname>
          </string-name>
          , Tim Höfler,
          <string-name>
            <surname>Gun-Brit</surname>
            <given-names>Thoma</given-names>
          </string-name>
          , Helmut Prechtl, and
          <string-name>
            <given-names>Detlev</given-names>
            <surname>Leutner</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Visualizers versus Verbalizers: Effects of Cognitive Style on Learning with Texts and Pictures - An Eye-Tracking Study</article-title>
          .
          <source>Computers in Human Behavior</source>
          <volume>68</volume>
          (
          <year>2017</year>
          ),
          <fpage>170</fpage>
          -
          <lpage>179</lpage>
          . https://doi.org/10.1016/j.chb.2016.11.028
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Laura</surname>
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Massa</surname>
            and
            <given-names>Richard E.</given-names>
          </string-name>
          <string-name>
            <surname>Mayer</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>Testing the ATI Hypothesis: Should Multimedia Instruction Accommodate Verbalizer-Visualizer Cognitive Style?</article-title>
          .
          <source>Learning and Individual Differences</source>
          <volume>16</volume>
          ,
          <issue>4</issue>
          (
          <year>2006</year>
          ),
          <fpage>321</fpage>
          -
          <lpage>335</lpage>
          . https://doi.org/10.1016/j.lindif.2006.10.001
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Richard</surname>
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Mayer</surname>
            and
            <given-names>Laura J.</given-names>
          </string-name>
          <string-name>
            <surname>Massa</surname>
          </string-name>
          .
          <year>2003</year>
          .
          <article-title>Three Facets of Visual and Verbal Learners: Cognitive Ability, Cognitive Style, and Learning Preference</article-title>
          .
          <source>Journal of educational psychology 95</source>
          ,
          <issue>4</issue>
          (
          <year>2003</year>
          ),
          <fpage>833</fpage>
          -
          <lpage>846</lpage>
          . https://doi.org/10.1037/0022-0663.95.4.833
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Tracey</surname>
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Mehigan</surname>
            , Mary Barry, Aidan Kehoe, and
            <given-names>Ian</given-names>
          </string-name>
          <string-name>
            <surname>Pitt</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Using Eye Tracking Technology to Identify Visual and Verbal Learners</article-title>
          .
          <source>In 2011 IEEE International Conference on Multimedia and Expo (ICME)</source>
          .
          <source>IEEE</source>
          , 1-
          <fpage>6</fpage>
          . https://doi.org/10.1109/ICME.2011.6012036
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Moayad</surname>
            <given-names>Mokatren</given-names>
          </string-name>
          , Tsvi Kuflik, and
          <string-name>
            <given-names>Ilan</given-names>
            <surname>Shimshoni</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Exploring the Potential of a Mobile Eye Tracker as an Intuitive Indoor Pointing Device: A Case Study in Cultural Heritage</article-title>
          .
          <source>Future Generation Computer Systems</source>
          <volume>81</volume>
          (
          <year>2018</year>
          ),
          <fpage>528</fpage>
          -
          <lpage>541</lpage>
          . https://doi.org/10.1016/j.future.2017.07.007
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Yannick</surname>
            <given-names>Naudet</given-names>
          </string-name>
          , Angeliki Antoniou, Ioanna Lykourentzou, Eric Tobias, Jenny Rompa, and
          <string-name>
            <given-names>George</given-names>
            <surname>Lepouras</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Museum Personalization Based on Gaming and Cognitive Styles: The BLUE Experiment</article-title>
          .
          <source>International Journal of Virtual Communities and Social Networking (IJVCSN) 7</source>
          ,
          <issue>2</issue>
          (
          <year>2015</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>30</lpage>
          . https://doi.org/10.4018/IJVCSN.2015040101
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Allan</given-names>
            <surname>Paivio</surname>
          </string-name>
          .
          <year>1990</year>
          .
          <article-title>Mental Representations: A Dual Coding Approach</article-title>
          . Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195066661.001.0001
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Isabel</surname>
            <given-names>Pedersen</given-names>
          </string-name>
          , Nathan Gale, Pejman Mirza-Babaei, and
          <string-name>
            <given-names>Samantha</given-names>
            <surname>Reid</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>More than Meets the Eye: The Benefits of Augmented Reality and Holographic Displays for Digital Cultural Heritage</article-title>
          .
          <source>Journal on Computing and Cultural Heritage (JOCCH) 10</source>
          ,
          <issue>2</issue>
          (
          <year>2017</year>
          ),
          <volume>11</volume>
          . https://doi.org/10.1145/3051480
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Sara</surname>
            <given-names>Perry</given-names>
          </string-name>
          , Maria Roussou, Maria Economou, Laia Pujol-Tost, and
          <string-name>
            <given-names>Hilary</given-names>
            <surname>Young</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Moving Beyond the Virtual Museum: Engaging Visitors Emotionally</article-title>
          .
          <source>In In the Proceedings of the 23rd International Conference on Virtual Systems and Multimedia (VSMM</source>
          <year>2017</year>
          ). Dublin, Ireland.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>George</surname>
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Raptis</surname>
            , Christos Fidas, and
            <given-names>Nikolaos</given-names>
          </string-name>
          <string-name>
            <surname>Avouris</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Effects of Mixed-Reality on Players' Behaviour and Immersion in a Cultural Tourism Game: A Cognitive Processing Perspective</article-title>
          .
          <source>International Journal of Human-Computer Studies</source>
          <volume>114</volume>
          (
          <year>2018</year>
          ),
          <fpage>69</fpage>
          -
          <lpage>79</lpage>
          . https://doi.org/10.1016/j.ijhcs.2018.02.003
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>George</surname>
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Raptis</surname>
          </string-name>
          , Christos A.
          <string-name>
            <surname>Fidas</surname>
          </string-name>
          , and
          <string-name>
            <surname>Nikolaos</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Avouris</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Do Field Dependence-Independence Differences of Game Players Affect Performance and Behaviour in Cultural Heritage Games?</article-title>
          .
          <source>In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY '16)</source>
          . ACM, New York, NY, USA,
          <fpage>38</fpage>
          -
          <lpage>43</lpage>
          . https://doi.org/10.1145/2967934.2968107
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>George</surname>
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Raptis</surname>
          </string-name>
          , Christos A.
          <string-name>
            <surname>Fidas</surname>
          </string-name>
          , and
          <string-name>
            <surname>Nikolaos</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Avouris</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Using Eye Tracking to Identify Cognitive Differences: A Brief Literature Review</article-title>
          .
          <source>In Proceedings of the 20th Pan-Hellenic Conference on Informatics (PCI '16)</source>
          . ACM, New York, NY, USA, Article
          <volume>21</volume>
          , 6 pages. https://doi.org/10.1145/3003733.3003762
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>George</surname>
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Raptis</surname>
          </string-name>
          , Christos A.
          <string-name>
            <surname>Fidas</surname>
          </string-name>
          , Christina Katsini, and
          <string-name>
            <surname>Nikolaos</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Avouris</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Towards a Cognition-Centered Personalization Framework for Cultural-Heritage Content</article-title>
          .
          <source>In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18)</source>
          . ACM, New York, NY, USA, Article
          <issue>LBW011</issue>
          , 6 pages. https://doi.org/10.1145/3170427.3190613
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>George</surname>
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Raptis</surname>
            , Christina Katsini, Marios Belk, Christos Fidas, George Samaras, and
            <given-names>Nikolaos</given-names>
          </string-name>
          <string-name>
            <surname>Avouris</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Using Eye Gaze Data and Visual Activities to Infer Human Cognitive Styles: Method and Feasibility Studies</article-title>
          .
          <source>In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. ACM</source>
          ,
          <volume>164</volume>
          -
          <fpage>173</fpage>
          . https://doi.org/10.1145/3079628.3079690
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Alan</given-names>
            <surname>Richardson</surname>
          </string-name>
          .
          <year>1977</year>
          .
          <article-title>Verbalizer - Visualizer: A Cognitive Style Dimension</article-title>
          .
          <source>Journal of Mental Imagery</source>
          <volume>1</volume>
          (
          <year>1977</year>
          ),
          <fpage>109</fpage>
          -
          <lpage>125</lpage>
          . Issue 1.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Christos</surname>
            <given-names>Sintoris</given-names>
          </string-name>
          , Nikoleta Yiannoutsou, Soteris Demetriou, and
          <string-name>
            <surname>Nikolaos</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Avouris</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Discovering the Invisible City: Location-based Games for Learning in Smart Cities</article-title>
          .
          <source>Interaction Design and Architecture(s) Journal - IxD&amp;A</source>
          <volume>16</volume>
          (
          <year>2013</year>
          ),
          <fpage>47</fpage>
          -
          <lpage>64</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Styliani</surname>
            <given-names>Sylaiou</given-names>
          </string-name>
          , Fotis Liarokapis, Kostas Kotsakis, and
          <string-name>
            <given-names>Petros</given-names>
            <surname>Patias</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Virtual Museums, a Survey and Some Issues for Consideration</article-title>
          .
          <source>Journal of Cultural Heritage</source>
          <volume>10</volume>
          ,
          <issue>4</issue>
          (
          <year>2009</year>
          ),
          <fpage>520</fpage>
          -
          <lpage>528</lpage>
          . https://doi.org/10.1016/j.culher.2009.03.003
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Dereck</surname>
            <given-names>Toker</given-names>
          </string-name>
          , Sébastien Lallé, and
          <string-name>
            <given-names>Cristina</given-names>
            <surname>Conati</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Pupillometry and Head Distance to the Screen to Predict Skill Acquisition During Information Visualization Tasks</article-title>
          .
          <source>In Proceedings of the 22nd International Conference on Intelligent User Interfaces. ACM</source>
          ,
          <volume>221</volume>
          -
          <fpage>231</fpage>
          . https://doi.org/10.1145/3025171.3025187
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Nikos</surname>
            <given-names>Tsianos</given-names>
          </string-name>
          , Panagiotis Germanakos, Zacharias Lekkas, Costas Mourlas, and
          <string-name>
            <given-names>George</given-names>
            <surname>Samaras</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Eye-Tracking Users' Behavior in Relation to Cognitive Style within an E-learning Environment</article-title>
          .
          <source>In 2009 Ninth IEEE International Conference on Advanced Learning Technologies</source>
          .
          <volume>329</volume>
          -
          <fpage>333</fpage>
          . https://doi.org/10.1109/ICALT.2009.110
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Michel</given-names>
            <surname>Wedel</surname>
          </string-name>
          and
          <string-name>
            <given-names>Rik</given-names>
            <surname>Pieters</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>Eye Fixations on Advertisements and Memory for Brands: A Model and Findings</article-title>
          .
          <source>Marketing Science</source>
          <volume>19</volume>
          ,
          <issue>4</issue>
          (
          <year>2000</year>
          ),
          <fpage>297</fpage>
          -
          <lpage>312</lpage>
          . https://doi.org/10.1287/mksc.19.4.297.11794
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Rafal</surname>
            <given-names>Wojciechowski</given-names>
          </string-name>
          , Krzysztof Walczak,
          <string-name>
            <given-names>Martin</given-names>
            <surname>White</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Wojciech</given-names>
            <surname>Cellary</surname>
          </string-name>
          .
          <year>2004</year>
          .
          <article-title>Building Virtual and Augmented Reality Museum Exhibitions</article-title>
          .
          <source>In Proceedings of the Ninth International Conference on 3D Web Technology (Web3D '04)</source>
          . ACM, New York, NY, USA,
          <fpage>135</fpage>
          -
          <lpage>144</lpage>
          . https://doi.org/10.1145/985040.985060
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>