  Visualization of Cultural-Heritage Content based on Individual
                       Cognitive Differences
George E. Raptis, Human Opsis and HCI Group, Dept. of Electrical and Computer Engineering, University of Patras, Patras, Greece (raptisg@upnet.gr)
Christina Katsini, Human Opsis and HCI Group, Dept. of Electrical and Computer Engineering, University of Patras, Patras, Greece (katsinic@upnet.gr)
Christos Fidas, Dept. of Cultural Heritage Management and New Technologies, University of Patras, Patras, Greece (fidas@upatras.gr)
Nikolaos Avouris, HCI Group, Dept. of Electrical and Computer Engineering, University of Patras, Patras, Greece (avouris@upatras.gr)

ABSTRACT
Comprehension of visual content is linked with the visitors' experience within cultural-heritage contexts. Considering the diversity of visitors regarding human cognition and the influence of individual cognitive differences on information comprehension, current visualization techniques could lead to imbalances in visitors' learning and experience gains. In this paper, we investigate whether the visualization of cultural-heritage content, tailored to the visitors' individual cognitive characteristics, would improve the comprehension of the cultural-heritage content. We followed a two-step experimental approach and conducted two small-scale between-subject eye-tracking studies (an exploratory and a comparative study), in which people with different cognitive styles participated in a gallery tour. The analysis of the results of the exploratory study revealed that people with different cognitive styles differ in the way they process visual information, which influences content comprehension. Based on these results, we developed cognition-centered visualizations and performed a comparative study, which revealed that such visualizations could help users towards content comprehension. In this respect, individual cognitive differences could be used as the basis for providing personalized experiences to cultural-heritage visitors, aiming to help them towards content comprehension.

CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI; Visualization; HCI theory, concepts and models; • Computing methodologies → Cognitive science;

ACM Reference Format:
George E. Raptis, Christina Katsini, Christos Fidas, and Nikolaos Avouris. 2018. Visualization of Cultural-Heritage Content based on Individual Cognitive Differences. In Proceedings of 2nd Workshop on Advanced Visual Interfaces for Cultural Heritage (AVI-CH 2018). Vol. 2091. CEUR-WS.org, Article 9. http://ceur-ws.org/Vol-2091/paper9.pdf, 7 pages.

1 INTRODUCTION
Over the last years, cultural heritage has been a favored domain for personalization research [2]. Stakeholders from interdisciplinary fields (e.g., computer science, user modeling, heritage sciences) have collaborated to develop adaptive information systems that provide personalized cultural-heritage experiences to the end-users (e.g., museum visitors). When designing such systems, several user-specific and context-specific aspects [2] must be considered to provide the most appropriate content in the most suitable way to the end-users, aiming to assist them in achieving a more efficient and effective comprehension of the cultural-heritage content. With regards to the user-specific aspects, information-system designers must accommodate the diversity of individuals, who have different characteristics such as personality traits [1], goals [2], and visiting styles [5]. An aspect that is not considered an important design factor by current practices is human cognition, although several researchers have confirmed its effects on content comprehension in diverse application domains, such as usable security [9], gaming [20], and e-learning [28].

Given that cultural-heritage activities often include visual content comprehension tasks (e.g., viewing a painting in an art museum), human cognitive characteristics related to the comprehension of visual information would be of great interest as a personalization factor within a cultural-heritage context. The cognitive style Visualizer-Verbalizer (V-V) is such a cognitive characteristic. According to the V-V theory [16], information is processed and mentally represented in two ways: verbally and visually. Hence, individuals are distinguished into those who think more in pictures (visualizers) and those who think more in words (verbalizers) [12]. Research has shown that V-V influences learning and content comprehension [10, 12] and that it is associated with visual behavior [10, 13, 28].

Despite the extensive body of research which underpins that V-V affects users' comprehension of visual content, current design approaches do not leverage these findings and do not consider V-V an important factor when designing cultural-heritage activities. This can be attributed to a lack of understanding of the interplay among visual behavior, cultural-heritage activities, and human cognition, which has not been investigated in depth. Hence, it remains insufficiently understood whether and how to consider such human cognitive factors practically within current state-of-the-art design approaches. Therefore, the research question that this paper discusses is whether V-V affects users' content comprehension when performing a typical cultural-heritage activity, and if so, whether there are specific visualization types, based on users' V-V cognitive style, that can be used to help users towards a deeper understanding of the visual cultural-heritage content.
AVI-CH 2018, May 29, 2018, Castiglione della Pescaia, Italy
© 2018 Copyright held by the owner/author(s).


Figure 1: The paintings used in our study (from left to right): Child with rabbits (1879) by Polychronis Lembesis, Café "Neon"
at night (1965) by Yiannis Tsarouchis, The Sphinx in Cairo (n/a) by Pericles Cirigotis, In surgery (n/a) by Georgios Roilos, and
The dirge in Psara (1888) by Nikephoros Lytras.


2 STUDIES AND RESULTS
To answer the research question, we followed a two-step between-subject experimental approach. In the first step, we performed an exploratory study, investigating whether and how the visual behavior of individuals with different V-V cognitive styles influenced the comprehension of the cultural-heritage content. In the second step, based on the results of the exploratory study, we created cognitive-specific visualizations and performed a comparative study, aiming to evaluate the effects of the cognitive-specific visualizations.

2.1 Exploratory Study

2.1.1 Hypotheses. To answer the first part of the research question, we formed the following null hypotheses:

H01 There is no difference between visualizers and verbalizers regarding the content comprehension.
H02 Visual behavior of visualizers and verbalizers is not associated with the content comprehension.

2.1.2 Cultural heritage activity. Considering that browsing virtual collections and galleries is a popular way of delivering cultural-heritage content [26, 30], we developed a web-based virtual-tour application with five paintings of the National Gallery of Greece: a) Child with rabbits by Polychronis Lembesis, b) Café Neon at night by Yiannis Tsarouchis, c) The Sphinx in Cairo by Pericles Cirigotis, d) In surgery by Georgios Roilos, and e) The dirge in Psara by Nikephoros Lytras. The paintings are depicted in Figure 1. Each painting was accompanied by a textual description, and thus, each painting had two types of content: pictorial and textual.

2.1.3 Instruments and metrics. To classify the participants as either visualizers or verbalizers, we used a version of the Verbal-Visual Learning Style Rating questionnaire (VVLSR) [12] and the Verbalizer-Visualizer Questionnaire (VVQ) [24]. Both tests have been widely used in similar studies in varying contexts, such as e-learning [10] and comprehension of multimedia material [11].

To measure visual-content comprehension (VCC), we designed a post-test VCC questionnaire. It consisted of ten multiple-choice questions (two questions for each painting: one about the pictorial content and one about the textual content), with high reliability (.738) according to the Kuder-Richardson-20 test. None of the participants had seen the paintings before; thus, they had no prior knowledge about their content.

Regarding the eye-tracking metrics, we focused on fixations on the areas of interest (AOIs), following common practice [21]. Given that each painting was accompanied by a textual description, two different types of AOI were identified: pictorial and textual AOIs. For each type, we measured the number of fixations in each AOI, the fixation duration in each AOI, the entry time in each AOI, the number of transitions among AOIs, and the fixation ratio. For each metric, we considered the computed measures: sums, means, max, min. To capture the participants' eye-gaze behavior we used Tobii Pro Glasses 2 at 50 Hz.

2.1.4 Participants. 23 adult individuals (10 females and 13 males), ranging in age between 18 and 33 years old (m = 23.3, sd = 4.9), took part in the study. According to the VVLSR and VVQ, 12 participants were classified as visualizers and 11 as verbalizers.

2.1.5 Procedure. We recruited the 23 study participants using varying methods (e.g., personal contacts, social media announcements). The participants had to meet a set of minimum requirements: have never taken the VVQ and VVLSR tests before; be older than 18 years; know nothing about the paintings used in the study; and have little knowledge of art history and theory. All participants were informed about the study and signed a consent form. For each participant, we scheduled a single virtual exhibition tour of the study paintings. Each virtual tour took place in our lab at a mutually agreed date and time. Before entering the tour, the participant completed the VVQ and VVLSR tests (20 minutes). Next, she/he navigated through the scene (20 minutes) and viewed all the paintings (no view-order restrictions). Then, she/he was distracted with a playful activity (30 minutes), which was not relevant to the virtual tour. Finally, she/he filled in a form about demographic information and answered the VCC questionnaire (15 minutes).

2.1.6 Results. To investigate H01, we performed a Mann-Whitney U test. The test met the required assumptions, as the distributions of the correct answers (i.e., VCC scores) for both visualizers and verbalizers were similar, as assessed by visual inspection. The median score for visualizers and verbalizers was not statistically significantly different (Table 1). However, the analysis regarding the comprehension of each type of content (i.e., VCCpic for pictorial-content comprehension and VCCtext for textual-content comprehension) revealed significant differences. In particular, visualizers had a significantly better VCCpic (U = 32.000, z = −2.217, p = .027), while verbalizers had a significantly better VCCtext (U = 33.500, z = −2.287, p = .022).
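As a sketch of the statistic behind the H01 analysis, the Mann-Whitney U and its normal-approximation z can be computed with the standard library alone. This is a simplified illustration: it averages ranks for ties but omits the tie correction in the variance term that full implementations such as SciPy's `mannwhitneyu` apply.

```python
from itertools import chain

def mann_whitney_u(a, b):
    """Mann-Whitney U with a normal approximation for z.

    Ties receive their average rank; the variance term uses the
    no-ties formula, so z is approximate when many ties occur."""
    combined = sorted(chain(a, b))
    # assign each distinct value the mean of the ranks it occupies
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[x] for x in a)             # rank sum of group a
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    u = min(u1, n1 * n2 - u1)                 # report the smaller U
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    return u, (u - mu) / sigma
```

For non-overlapping groups such as `[1, 2, 3]` vs. `[4, 5, 6]`, U is 0, the most extreme value possible for two groups of three.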

Table 1: Statistical analysis on content comprehension

VCCtotal
Cognitive dimension    N    Median    Mean    Std.
Visualizer            12     6.000    6.082    .900
Verbalizer            11     6.000    6.093   1.578
Mann-Whitney U Test: U = 60.500, z = −.358, p = .721

VCCpic
Cognitive dimension    N    Median    Mean    Std.
Visualizer            12     3.500    3.582    .669
Verbalizer            11     3.000    2.820    .982
Mann-Whitney U Test: U = 32.000, z = −2.217, p = .027

VCCtext
Cognitive dimension    N    Median    Mean    Std.
Visualizer            12     3.000    2.521    .674
Verbalizer            11     3.000    3.272    .786
Mann-Whitney U Test: U = 33.500, z = −2.287, p = .022

To investigate H02, we performed a series of Spearman's correlation tests between the visual-behavior metrics and VCC. The results revealed several low and moderate correlations, and a strong positive correlation (rs = .883, p < .001) between VCC and the ratio of fixation duration on pictorial and textual AOIs (Equation 1).

    VBdur-ratio = (fixation duration on pictorial AOIs) / (fixation duration on textual AOIs)    (1)

To further investigate the effect of the V-V cognitive style on the visual behavior of the users, we performed an independent-samples t-test to determine whether there are differences in VBdur-ratio between visualizers and verbalizers. The test met all the required assumptions. The VBdur-ratio was higher for visualizers (m = 1.890, sd = .775) than verbalizers (m = 1.238, sd = .299), a statistically significant difference (t(21) = 2.619, p = .017, d = 1.110, 95% CI [.135, .172]). The results underpin that visualizers tend to perform longer fixations on the pictorial AOIs, while verbalizers tend to perform longer fixations on the textual AOIs.

2.2 Visualization
The results underpin the necessity of providing customized visualizations for both visualizers and verbalizers, in order to help them better comprehend the content of the paintings. Considering that visualizers have an inherent preference for pictorial content, while verbalizers have an inherent preference for textual content, we propose a cognition-based visualization that aims to direct the visualizers' attention to textual AOIs and the verbalizers' attention to pictorial AOIs. Through the cognition-based visualization we expect visualizers to better comprehend the textual content and verbalizers to better comprehend the pictorial content of the paintings.

A common approach to make an individual with specific cognitive characteristics focus on specific AOIs is to exclude the other AOIs [9]. However, this cannot be applied in a virtual gallery tour, where both pictorial and textual AOIs are important to the visitor. Therefore, we cannot exclude one type or the other; we need to direct users' attention to the AOI type that they do not inherently prefer. In particular, we need to direct the visualizers' attention to textual AOIs and the verbalizers' attention to pictorial AOIs.

To help visualizers pay more attention to the textual AOIs and increase textual-content comprehension, we adopted a popular technique found in the literature: emphasizing specific keywords that are critical for better comprehension [4]. Hence, the textual AOIs can be visualized in two ways: the default way, which is recommended for verbalizers, and the emphasizing way, which is recommended for visualizers. To help verbalizers pay more attention to the pictorial AOIs and increase pictorial-content comprehension, we applied a saliency filter to the pictorial AOIs, which is a typical technique to attract attention to specific areas of pictures [9]. Hence, the pictorial AOIs can be visualized in two ways: the default way, which is recommended for visualizers, and the salient way, which is recommended for verbalizers. The simple dichotomous algorithm (in pseudo-code) to define the visualization of each painting is:

Algorithm 1 Simple dichotomous algorithm to set the visualization of an AOI based on the user's V-V cognitive dimension
1: procedure SetCognitionBasedVisualization
2:     if user is visualizer then
3:         Set AOI → text → vis to "emphasis"
4:         Set AOI → pic → vis to "default"
5:     else
6:         Set AOI → text → vis to "default"
7:         Set AOI → pic → vis to "salient"

2.3 Comparative study
To investigate whether the cognition-based visualization would assist visualizers and verbalizers to better comprehend the paintings' content, we conducted a between-subject comparative study.

2.3.1 Hypotheses. To answer the second part of the research question, we formed the following null hypotheses:

H03 Cognition-based visualization does not significantly affect the visual behavior of visualizers and verbalizers.
H04 Cognition-based visualization does not significantly affect the comprehension of visualizers and verbalizers regarding the paintings' content.

2.3.2 Cultural heritage activity. The activity was the same as the one discussed in the exploratory study. However, the cognition-based visualization was applied to each painting, depending on the V-V cognitive dimension of the user.

2.3.3 Instruments and metrics. They were identical to the instruments and metrics used in the exploratory study.

2.3.4 Participants. We recruited 20 adult individuals (8 females, 12 males), ranging in age between 20 and 31 years old (m = 25.3, sd = 3.8). According to the VVLSR and VVQ, 10 participants were classified as visualizers and 10 as verbalizers.
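Algorithm 1 can be expressed as a small function. The `AOIVisualization` container and the boolean parameter below are an illustrative sketch mirroring the pseudo-code, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AOIVisualization:
    text_vis: str  # visualization mode for the textual AOI
    pic_vis: str   # visualization mode for the pictorial AOI

def set_cognition_based_visualization(is_visualizer: bool) -> AOIVisualization:
    """Dichotomous rule of Algorithm 1: emphasize the AOI type the
    user does not inherently prefer, keep the preferred type default."""
    if is_visualizer:
        # visualizers get keyword emphasis on text, default pictures
        return AOIVisualization(text_vis="emphasis", pic_vis="default")
    # verbalizers get default text, saliency-filtered pictures
    return AOIVisualization(text_vis="default", pic_vis="salient")
```

The rule is deliberately dichotomous: each AOI type has exactly two rendering modes, and the user's V-V classification selects one per type.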




Figure 2: The cognition-based visualization type helped the visualizers increase their fixation duration on the textual AOIs.

Figure 3: The cognition-based visualization type helped mainly verbalizers to perform better in pictorial-content questions.
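The quantity tracked in Figure 2 is the Equation 1 ratio. A minimal sketch of its computation, assuming fixations are available as illustrative (aoi_type, duration_ms) pairs rather than the paper's Tobii export format:

```python
def vb_dur_ratio(fixations):
    """Equation 1: total fixation duration on pictorial AOIs divided
    by total fixation duration on textual AOIs.

    `fixations` is a list of (aoi_type, duration_ms) tuples, where
    aoi_type is "pictorial" or "textual"."""
    pictorial = sum(d for aoi, d in fixations if aoi == "pictorial")
    textual = sum(d for aoi, d in fixations if aoi == "textual")
    if textual == 0:
        raise ValueError("no fixations recorded on textual AOIs")
    return pictorial / textual
```

Ratios above 1 indicate attention skewed toward pictorial AOIs, consistent with the higher group mean reported for visualizers in the exploratory study.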


2.3.5 Procedure. We followed the same study procedure as in the exploratory study.

2.3.6 Results. To investigate H03, we performed a two-way ANOVA with the V-V cognitive dimension and the type of visualization as the independent variables, and VBdur-ratio as the dependent variable. The test met all the required assumptions. The results revealed a significant interaction effect (F(1, 39) = 4.835, p = .034, eta = .110). Focusing on each independent variable, a significant effect was revealed both for the cognitive dimension (F(1, 39) = 6.272, p = .019, eta = .129) and the visualization type (F(1, 39) = 4.039, p = .047, eta = .104). Regarding the main effects, the visualization type helped the visualizers most, as they increased their fixation duration on the textual AOIs, and thus their VBdur-ratio decreased (F(1, 39) = 9.039, p = .005, eta = .188). No main effects were revealed for the verbalizers regarding the visualization type. Regarding the cognitive dimension, no effects were revealed for the subjects who used the cognition-based visualization type, while there were significant effects for the subjects who used the default visualization type, as discussed in the exploratory study. The results are depicted in Figure 2.

To investigate H04, we performed a two-way ANOVA with the V-V cognitive dimension and the type of visualization as the independent variables, and VCC as the dependent variable. The test met all the required assumptions. The results revealed no interaction effect. Focusing on each content type, the analysis revealed no effects for VCCpic (Figure 3). Regarding VCCtext, the analysis revealed an effect both for the V-V cognitive dimension (F(1, 39) = 7.013, p = .012, eta = .152) and the visualization type (F(1, 39) = 8.940, p = .005, eta = .186). Focusing on the main effects, visualizers who used the cognition-based visualization provided significantly more correct answers regarding the textual AOIs (F(1, 39) = 5.520, p = .024, eta = .124), as depicted in Figure 4.

Figure 4: The cognition-based visualization type helped both visualizers and verbalizers to provide more correct answers to textual-content questions.

3 DISCUSSION
The results of the exploratory study underpin that individual cognitive differences have an impact on the users' visual behavior and content comprehension when performing a cultural activity. As expected, visualizers focused on the pictorial content and verbalizers focused on the textual content in the visual exploratory activity (i.e., the virtual gallery tour), verifying the results of other studies [10, 28] in other domains. Considering that each painting provided information both in pictorial and textual format, the overall content comprehension of visualizers and verbalizers did not differ, but it was average. The inherent preference of visualizers for pictorial content influenced the content-related comprehension: they comprehended the content of the pictorial areas of interest, but not the content of the textual areas of interest, on which they produced shorter fixations, which implies difficulties in memorability [29]. Likewise, the inherent preference of verbalizers for processing textual information resulted in shorter fixations on the pictorial areas of interest. Hence, verbalizers had low performance regarding pictorial-content comprehension, but performed well regarding textual-content comprehension.
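Most of the reported effect sizes are consistent with partial eta-squared recovered from the F statistic and its degrees of freedom. The identity below is a sketch of that relationship; it is an assumption about how the `eta` values were derived, as the paper does not state the formula:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta-squared recovered from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# e.g., the H03 interaction effect reported as F(1, 39) = 4.835, eta = .110
interaction_eta = partial_eta_squared(4.835, 1, 39)
```

Applying the identity to the reported F(1, 39) values reproduces eta values of roughly .110, .188, .152, .186, and .124, matching the text to rounding.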


3.1 Cognition-based visualizations
The aforementioned results underpin the necessity of adopting cognition-based visualizations to help both visualizers and verbalizers better comprehend the visual information presented in cultural-heritage contexts. We proposed a simple dichotomous rule (Algorithm 1) which provides a customized visualization of each art exhibit based on the cognitive profile of the user. In the case of a visualizer, the visualization type aims to direct her/his attention to textual areas of interest, while in the case of a verbalizer, the visualization type aims to direct her/his attention to pictorial areas of interest. To evaluate the proposed visualization mechanism, we performed a small-scale between-subject eye-tracking study. The results revealed that the cognition-based visualization helped both user types to perform better regarding the comprehension of the paintings' content. The visualizers who used the cognition-based visualization mechanism provided more correct answers to the textual-content questions than the visualizers who used the default mechanism. Likewise, the verbalizers who used the cognition-based visualization mechanism provided more correct answers to the pictorial-content questions than the verbalizers who used the default mechanism. At the same time, there were no differences between visualizers and verbalizers regarding either the pictorial or the textual content comprehension. Therefore, they both increased the overall score of content comprehension (including questions related to both pictorial and textual content).

3.2 Towards a cognition-centered approach for presenting cultural-heritage content
The results of the comparative study underpin the necessity of adopting a cognition-centered approach, such as a framework, to deliver personalized cultural-heritage activities tailored to the users' individual cognitive preferences and needs. Such a framework is expected to benefit both cultural-heritage stakeholders and end-users. Stakeholders from interdisciplinary fields (e.g., curators, educators, guides, designers) are expected to use such a framework to create personalized cultural-heritage activities, tailored to the cognitive characteristics of the end-users (e.g., museum visitors). End-users are expected to benefit towards achieving their goals (e.g., improved content comprehension) through cognition-effortless personalized interventions, as these adapt to the end-users' individual cognitive characteristics.

As discussed in [22], the cognition-centered framework consists of two main modules: the user-modeling module and the personalization module. The user-modeling module is responsible for eliciting, storing, and maintaining cognition-centered user profiles, while modules such as the reported one provide the personalization rules. Following an inclusive and open approach, the cognition-centered framework should support the various cognitive styles and skills that have been found to affect users' experience and/or behavior in cultural-heritage contexts, such as field dependence-independence [19], visual working memory [22], and personality traits [15].

3.3 Implicit elicitation of Visualizer-Verbalizer cognitive style
The study results revealed that there is a strong correlation between users' visual behavior and content comprehension when considering the Visualizer-Verbalizer cognitive dimension as the control factor. Given that eye-trackers are becoming cheaper, smaller, and more robust, that they are integrated in varying technological frameworks, such as mobile devices [6] and head-mounted displays [3], and that they have already been used and evaluated within cultural-heritage contexts [14, 17], eye-gaze data could be the building factor of the cognition-centered framework, aiming to a) implicitly elicit the user's cognitive profile and b) provide personalized visualizations.

Considering the recent works on eye-gaze-based elicitation of users' cognitive characteristics [8, 23, 27] and the technological advances in the eye-tracking industry, the development of transparent, run-time elicitation modules that model the users according to their cognitive characteristics is feasible in the near future, even in immersive contexts that are based on visual interaction, such as mixed reality [19]. Our recent works have revealed that the elicitation of the users' cognitive style can be performed with high accuracy and in the early stages of a visual search activity when considering task complexity [23], task segments [23], and time [8] as the elicitation parameters along with the eye-gaze data.

Therefore, our study findings could contribute to building a user-modeling module which extends the current range of cognitive characteristics and increases the validity of other studies (and eventually the elicitation accuracy and performance). Based on the transparent, run-time elicitation of users' cognitive characteristics, adaptation interventions can be applied in order for the cognition-centered framework to provide personalized visualizations tailored to the users' individual characteristics. For example, when a user is classified as a visualizer in a virtual gallery tour, the framework would provide her/him with default pictorial areas of interest along with emphasized textual AOIs, based on the appropriate adaptation rules, aiming to disperse her/his attention on both types of areas of interest.

4 STUDY VALIDITY AND LIMITATIONS
on elicitation mechanisms which exploit data from various sources,            This research work entails several limitations inherent to the mul-
such as eye-gaze interaction [23] and social-behavior data [5]. Re-           tidimensional character and complexity of the factors investigated.
finement processes based on machine learning and computer vision              Regarding internal validity the study environment and the study
techniques can be used to ensure the accuracy and the robustness              procedure remained the same for all participants. The methodology
of the user-modeling module.                                                  and statistical tests used to answer the research objectives met all
   The personalization module aims to adapt the cultural-heritage             the required assumptions, despite the rather limited size of the
activity to the unique personalized configurations for users with             sample, providing internally valid results.
specific cognitive characteristics. The personalization engine takes             Regarding the ecological validity of our study, the study sessions
as an input the cognitive profile of the user, provided by the user-          performed in times and days convenient for each participant. The
modeling module, and exports the personalized cognition-based                 desktop computer was powerful enough to support the virtual
visualizations, following a rule-based approach. Studies like the             guide tour and did not affect participants’ experience in the shade
AVI-CH 2018, May 29, 2018, Castiglione della Pescaia, Italy                                                                                     G. E. Raptis et al.


of poor performance. The use of eye-tracking technology was a limitation, as individuals do not use such equipment when performing everyday computer-mediated activities. However, the fact that the eye-tracking technology used was a pair of wearable glasses made the participants feel more comfortable after a while, as they could interact with the system as they normally would. At this point it is worth mentioning that we used an expensive and accurate eye-tracking apparatus, which could hamper the application of such schemes in typical real-life cultural-heritage scenarios. Therefore, there is a need to investigate whether we would obtain the same results when using more conventional and cheaper eye-tracking tools (e.g., based on a web-camera feed), or whether simple eye-gaze data that are easily detected, such as the number of blinks, could provide similar results.

For the scope of the study, we focused only on visual interactions. However, cultural-heritage activities also include audio-based and spatial interactions, such as storytelling applications [7] and location-based games [25]. Hence, there is a need to investigate whether individual cognitive characteristics influence visitors' behavior and experience in such contexts. Along the same lines, recent studies in the cultural-heritage domain have raised the importance of the visitors' emotional engagement [18], an aspect that needs to be investigated in relation to visitors' cognitive characteristics.

We expect that our results will be replicated for activities that are based on visual-search tasks, which can be found in various domains besides cultural heritage, such as e-shopping, e-learning, and engineering. Regarding the technological context, we expect our results to be applicable to contexts which exploit technologies across the virtuality continuum (AR/MR/VR), especially contexts that create environments rich in visual information, such as head-mounted displays (HMDs) and cave automatic virtual environments (CAVEs). Finally, our study increases the external validity of studies which investigate the effects of the Visualizer-Verbalizer cognitive style on visual-search tasks [10, 28].

5 CONCLUSION

In this paper, we first reported the results of an eye-tracking study aiming to investigate the effects of the V-V cognitive style on the comprehension of the content of five paintings during a virtual gallery tour, and we explained the results considering the users' visual behavior. Significant differences were revealed between visualizers and verbalizers regarding the comprehension of pictorial and textual content. Their performance was also strongly related to their visual behavior, which differed between visualizers and verbalizers. Hence, this paper provides evidence that users with different V-V cognitive styles follow different strategies when performing a visual exploratory cultural-heritage activity (e.g., a virtual gallery tour). These strategies are reflected in their visual behavior, and they lead to imbalances regarding content comprehension. Triggered by the study results, we designed an assistive mechanism based on the visual behavior of visualizers and verbalizers, which provided customized cognition-based visualizations of the paintings. To evaluate its efficiency, we conducted a comparative eye-tracking study. The results revealed that the cognition-based visualizations helped both visualizers and verbalizers to better comprehend textual and pictorial content, respectively. Therefore, this work provides evidence that cognitive styles (e.g., Visualizer-Verbalizer) can be used to provide personalized cultural-heritage experiences, aiming to improve content comprehension and eliminate learning imbalances between users with different cognitive characteristics.

REFERENCES
[1] Angeliki Antoniou and George Lepouras. 2010. Modeling Visitors' Profiles: A Study to Investigate Adaptation Aspects for Museum Learning Technologies. Journal on Computing and Cultural Heritage (JOCCH) 3, 2, Article 7 (Oct. 2010), 19 pages. https://doi.org/10.1145/1841317.1841322
[2] Liliana Ardissono, Tsvi Kuflik, and Daniela Petrelli. 2012. Personalization in Cultural Heritage: The Road Travelled and the One Ahead. User Modeling and User-Adapted Interaction 22, 1 (Apr. 2012), 73–99. https://doi.org/10.1007/s11257-011-9104-x
[3] Michael Barz, Florian Daiber, and Andreas Bulling. 2016. Prediction of Gaze Estimation Error for Error-aware Gaze-based Interfaces. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications. ACM, 275–278. https://doi.org/10.1145/2857491.2857493
[4] Chih-Ming Chen and Sheng-Hui Huang. 2014. Web-based Reading Annotation System with an Attention-based Self-regulated Learning Mechanism for Promoting Reading Performance. British Journal of Educational Technology 45, 5 (2014), 959–980. https://doi.org/10.1111/bjet.12119
[5] Eyal Dim and Tsvi Kuflik. 2014. Automatic Detection of Social Behavior of Museum Visitor Pairs. ACM Transactions on Interactive Intelligent Systems (TIIS) 4, 4, Article 17 (Nov. 2014), 30 pages. https://doi.org/10.1145/2662869
[6] Seongwon Han, Sungwon Yang, Jihyoung Kim, and Mario Gerla. 2012. EyeGuardian: A Framework of Eye Tracking and Blink Detection for Mobile Device Users. In Proceedings of the Twelfth Workshop on Mobile Computing Systems & Applications. ACM, 6. https://doi.org/10.1145/2162081.2162090
[7] Akrivi Katifori, Manos Karvounis, Vassilis Kourtis, Marialena Kyriakidi, Maria Roussou, Manolis Tsangaris, Maria Vayanou, Yannis Ioannidis, Olivier Balet, Thibaut Prados, Jens Keil, Timo Engelke, and Laia Pujol. 2014. CHESS: Personalized Storytelling Experiences in Museums. In Interactive Storytelling, Alex Mitchell, Clara Fernández-Vara, and David Thue (Eds.). Springer International Publishing, Cham, 232–235. https://doi.org/10.1007/978-3-319-12337-0_28
[8] Christina Katsini, Christos Fidas, George E. Raptis, Marios Belk, George Samaras, and Nikolaos Avouris. 2018. Eye Gaze-driven Prediction of Cognitive Differences during Graphical Password Composition. In 23rd International Conference on Intelligent User Interfaces. ACM, 147–152. https://doi.org/10.1145/3172944.3172996
[9] Christina Katsini, Christos Fidas, George E. Raptis, Marios Belk, George Samaras, and Nikolaos Avouris. 2018. Influences of Human Cognition and Visual Behavior on Password Strength During Picture Password Composition. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 87, 14 pages. https://doi.org/10.1145/3173574.3173661
[10] Marta Koć-Januchta, Tim Höffler, Gun-Brit Thoma, Helmut Prechtl, and Detlev Leutner. 2017. Visualizers versus Verbalizers: Effects of Cognitive Style on Learning with Texts and Pictures - An Eye-Tracking Study. Computers in Human Behavior 68 (2017), 170–179. https://doi.org/10.1016/j.chb.2016.11.028
[11] Laura J. Massa and Richard E. Mayer. 2006. Testing the ATI Hypothesis: Should Multimedia Instruction Accommodate Verbalizer-Visualizer Cognitive Style? Learning and Individual Differences 16, 4 (2006), 321–335. https://doi.org/10.1016/j.lindif.2006.10.001
[12] Richard E. Mayer and Laura J. Massa. 2003. Three Facets of Visual and Verbal Learners: Cognitive Ability, Cognitive Style, and Learning Preference. Journal of Educational Psychology 95, 4 (2003), 833–846. https://doi.org/10.1037/0022-0663.95.4.833
[13] Tracey J. Mehigan, Mary Barry, Aidan Kehoe, and Ian Pitt. 2011. Using Eye Tracking Technology to Identify Visual and Verbal Learners. In 2011 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 1–6. https://doi.org/10.1109/ICME.2011.6012036
[14] Moayad Mokatren, Tsvi Kuflik, and Ilan Shimshoni. 2018. Exploring the Potential of a Mobile Eye Tracker as an Intuitive Indoor Pointing Device: A Case Study in Cultural Heritage. Future Generation Computer Systems 81 (2018), 528–541. https://doi.org/10.1016/j.future.2017.07.007
[15] Yannick Naudet, Angeliki Antoniou, Ioanna Lykourentzou, Eric Tobias, Jenny Rompa, and George Lepouras. 2015. Museum Personalization Based on Gaming and Cognitive Styles: The BLUE Experiment. International Journal of Virtual Communities and Social Networking (IJVCSN) 7, 2 (2015), 1–30. https://doi.org/10.4018/IJVCSN.2015040101
[16] Allan Paivio. 1990. Mental Representations: A Dual Coding Approach. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195066661.001.0001
[17] Isabel Pedersen, Nathan Gale, Pejman Mirza-Babaei, and Samantha Reid. 2017. More than Meets the Eye: The Benefits of Augmented Reality and Holographic Displays for Digital Cultural Heritage. Journal on Computing and Cultural Heritage (JOCCH) 10, 2 (2017), 11. https://doi.org/10.1145/3051480
[18] Sara Perry, Maria Roussou, Maria Economou, Laia Pujol-Tost, and Hilary Young. 2017. Moving Beyond the Virtual Museum: Engaging Visitors Emotionally. In Proceedings of the 23rd International Conference on Virtual Systems and Multimedia (VSMM 2017). Dublin, Ireland.
[19] George E. Raptis, Christos Fidas, and Nikolaos Avouris. 2018. Effects of Mixed-Reality on Players' Behaviour and Immersion in a Cultural Tourism Game: A Cognitive Processing Perspective. International Journal of Human-Computer Studies 114 (2018), 69–79. https://doi.org/10.1016/j.ijhcs.2018.02.003
[20] George E. Raptis, Christos A. Fidas, and Nikolaos M. Avouris. 2016. Do Field Dependence-Independence Differences of Game Players Affect Performance and Behaviour in Cultural Heritage Games?. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY '16). ACM, New York, NY, USA, 38–43. https://doi.org/10.1145/2967934.2968107
[21] George E. Raptis, Christos A. Fidas, and Nikolaos M. Avouris. 2016. Using Eye Tracking to Identify Cognitive Differences: A Brief Literature Review. In Proceedings of the 20th Pan-Hellenic Conference on Informatics (PCI '16). ACM, New York, NY, USA, Article 21, 6 pages. https://doi.org/10.1145/3003733.3003762
[22] George E. Raptis, Christos A. Fidas, Christina Katsini, and Nikolaos M. Avouris. 2018. Towards a Cognition-Centered Personalization Framework for Cultural-Heritage Content. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Article LBW011, 6 pages. https://doi.org/10.1145/3170427.3190613
[23] George E. Raptis, Christina Katsini, Marios Belk, Christos Fidas, George Samaras, and Nikolaos Avouris. 2017. Using Eye Gaze Data and Visual Activities to Infer Human Cognitive Styles: Method and Feasibility Studies. In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. ACM, 164–173. https://doi.org/10.1145/3079628.3079690
[24] Alan Richardson. 1977. Verbalizer - Visualizer: A Cognitive Style Dimension. Journal of Mental Imagery 1, 1 (1977), 109–125.
[25] Christos Sintoris, Nikoleta Yiannoutsou, Soteris Demetriou, and Nikolaos M. Avouris. 2013. Discovering the Invisible City: Location-based Games for Learning in Smart Cities. Interaction Design and Architecture(s) Journal - IxD&A 16 (2013), 47–64.
[26] Styliani Sylaiou, Fotis Liarokapis, Kostas Kotsakis, and Petros Patias. 2009. Virtual Museums, a Survey and Some Issues for Consideration. Journal of Cultural Heritage 10, 4 (2009), 520–528. https://doi.org/10.1016/j.culher.2009.03.003
[27] Dereck Toker, Sébastien Lallé, and Cristina Conati. 2017. Pupillometry and Head Distance to the Screen to Predict Skill Acquisition During Information Visualization Tasks. In Proceedings of the 22nd International Conference on Intelligent User Interfaces. ACM, 221–231. https://doi.org/10.1145/3025171.3025187
[28] Nikos Tsianos, Panagiotis Germanakos, Zacharias Lekkas, Costas Mourlas, and George Samaras. 2009. Eye-Tracking Users' Behavior in Relation to Cognitive Style within an E-learning Environment. In 2009 Ninth IEEE International Conference on Advanced Learning Technologies. IEEE, 329–333. https://doi.org/10.1109/ICALT.2009.110
[29] Michel Wedel and Rik Pieters. 2000. Eye Fixations on Advertisements and Memory for Brands: A Model and Findings. Marketing Science 19, 4 (2000), 297–312. https://doi.org/10.1287/mksc.19.4.297.11794
[30] Rafal Wojciechowski, Krzysztof Walczak, Martin White, and Wojciech Cellary. 2004. Building Virtual and Augmented Reality Museum Exhibitions. In Proceedings of the Ninth International Conference on 3D Web Technology (Web3D '04). ACM, New York, NY, USA, 135–144. https://doi.org/10.1145/985040.985060