<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>ARTISTS: A VIRTUAL REALITY CULTURAL EXPERIENCE PERSONALIZED ARTWORKS SYSTEM: THE “CHILDREN CONCERT” PAINTING CASE STUDY</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>George Trichopoulos</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>John Aliprantis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Markos Konstantakis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>George Caridakis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Researcher, Department of Cultural Technology and Communication</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of the Aegean</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>In recent years, there has been a constant tendency to integrate modern technologies into mobile guides and applications in the Cultural Heritage (CH) domain, aiming to enrich the cultural user experience. Among them, Virtual Reality (VR) has been widely used in the digital reconstruction or restoration of damaged cultural artifacts and monuments, allowing a deeper perception of their characteristics and unique history. This work presents a VR environment that takes into account the diverse needs and characteristics of visitors and digitally immerses them into paintings, giving them the ability to interact directly with the paintings' elements through the Leap Motion controller. To test our proposed system, a mobile prototype application has been designed, focused on the famous painting “Children Concert” created by Georgios Iakovidis, which also integrates the User Personas and the different scenarios depending on the user's profile.</p>
      </abstract>
      <kwd-group>
        <kwd>Cultural Heritage</kwd>
        <kwd>Cultural User Experience</kwd>
        <kwd>Natural Interaction</kwd>
        <kwd>User personas</kwd>
        <kwd>Virtual Reality</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>(audio guides) or walked freely without any kind of guidance. A VR guide can boost
mental and visual focus on exhibits, achieving a state of flow (Mihaly Csikszentmihalyi,
1975), which motivates users to seek more knowledge and extend their visit.</p>
      <p>
        Meanwhile, personalization methods in User Experience (UX) and Cultural User
Experience (CUX) appear to give a new perspective to mobile guides and applications
in Cultural Heritage (CH). Personalization
        <xref ref-type="bibr" rid="ref1">(Antoniou &amp; Lepouras, 2010)</xref>
        is based on the
assumption that a computer system can understand the user's needs, while its success
relies greatly on the accurate elicitation of the user profile. The main reason personalization
is needed is simple: everyone is unique. Matching each visitor's experience,
knowledge and demands is a highly challenging task. Capturing personal
characteristics, before or during the visit to a cultural site, has been
implemented using several methods, for example ontologies, methodological
or statistical approaches (Pujol Laia et al., 2012), indirect approaches that take
advantage of social networks such as Facebook
        <xref ref-type="bibr" rid="ref2 ref5">(Antoniou Angeliki et al., 2016)</xref>
        , or, finally,
the visitor's age and behavior.
      </p>
      <p>
        The current work presents a Virtual Reality interface that digitally represents the world
of paintings, allowing users to interact with aspects of a painting in a 3D
environment. The presented framework also integrates personalization, user personas
        <xref ref-type="bibr" rid="ref7">(based on the User Personas methodology [Konstantakis Markos et al., 2017])</xref>
        and
context-awareness techniques to improve the user's experience. In Section 2 we briefly
present our ARTISTS framework, the technologies we used and how we
integrated them into the application, the framework's architecture, and a use case scenario
with our prototype based on the famous painting “Children Concert” created by
Georgios Iakovidis. Finally, in Section 3 we discuss our future work.
      </p>
    </sec>
    <sec id="sec-2">
      <title>ARTISTS Framework</title>
    </sec>
    <sec id="sec-3">
      <title>Description</title>
      <p>ARTISTS is a mobile application that brings famous paintings to life by digitally
constructing their aspects in a Virtual Reality environment, where users can interact with
their 3D models. Users immerse themselves in the VR world by using their own devices mounted
on a VR headset (Google Cardboard), and then interact with the 3D environment using
gestures captured by the Leap Motion controller attached to the headset.
The proposed interface not only puts users inside a painting, allowing them to observe
and interact with the 3D models from many angles, but also uses various methodologies
(context awareness, personalization, and gesture recognition) in order to enhance the user's
cultural experience.</p>
      <p>The ARTISTS prototype has been designed around the famous painting “Children’s
Concert” by the Greek painter Georgios Iakovidis, which can be found in the National
Gallery in Athens, Greece, and in digital format in the “George Iakovidis” digital gallery in the
village of Hidira, Lesvos. For this painting, seven 3D human models were created, along with
their animations and sounds, corresponding to the seven characters found in the original
painting. The painting's surrounding space (a bright room with some furniture) has been
digitally reconstructed in a VR environment, taking into consideration the limited
resources of mobile devices.</p>
      <p>The prior version of ARTISTS was a mobile application in which users were also able to
interact with the 3D version of a painting by simply tapping on the mobile device's screen,
thus without total immersion in the VR environment. Application settings such as sound,
running scenarios and animations depended on the user's profile and interests, a
functionality that remains in ARTISTS, but now with the use of more accurate
methodologies.</p>
    </sec>
    <sec id="sec-4">
      <title>Technologies Used in ARTISTS: Context Awareness</title>
      <p>
        In the ARTISTS design, we take into consideration parts of the context such as the ambient
noise level, the processing power of the mobile device and the screen resolution, trying to
improve the user's experience regardless of environmental conditions. In particular, in a
fairly noisy environment (noise level up to 50 dB), the sound volume can be increased by up
to 50%, whilst in extremely noisy conditions (noise level above 70 dB), the application
audio is muted to avoid the Lombard effect
        <xref ref-type="bibr" rid="ref13">(Varadarajan Vaishnevi, Hansen John H.L.,
2006)</xref>
        . In a full-scale deployment of ARTISTS, noise levels would be measured by a
sensor network, in accordance with the user's position in space. Furthermore, the processing
power of the portable device in use is a crucial factor that can deeply affect the user
experience. Insufficient resources could affect the reproduction of the high-resolution 3D
animation and graphics needed to construct the VR environment, while a low screen
resolution could also be a negative factor in displaying high-resolution graphics. A short
benchmark run in the background during application installation can easily adjust the
application's settings to the appropriate level based on the device's capabilities before the
application is initialized, thus avoiding malfunctions during the user's experience.
      </p>
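      <p>A minimal sketch of this noise-aware volume rule, written in Python purely for illustration, is shown below. The 50 dB and 70 dB thresholds come from the description above; the proportional scaling and the behaviour between the two thresholds are assumptions.</p>
      <preformat>
# Hedged sketch of the noise-aware volume rule (thresholds from the text,
# scaling between them assumed for illustration).
def adjust_volume(base_volume, noise_db):
    """Return an adapted playback volume in the range 0.0 to 1.0."""
    if noise_db > 70:
        return 0.0                             # extremely noisy: mute to avoid the Lombard effect
    if noise_db > 50:
        return min(1.0, base_volume * 1.5)     # assumed: keep the full 50% boost up to 70 dB
    # quiet to moderately noisy: boost proportionally, by up to 50% at 50 dB
    boost = 1.0 + 0.5 * (noise_db / 50.0)
    return min(1.0, base_volume * boost)
      </preformat>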
    </sec>
    <sec id="sec-5">
      <title>Personalized User Experience</title>
      <p>
        In our case, we use the User Personas method, which categorizes users based on their
profile during a museum visit. User Personas
        <xref ref-type="bibr" rid="ref9">(Morris, Hargreaves and McIntyre, 2004)</xref>
        are not real people but avatars created by studying the characteristics of real people. We use four
User Personas, named “Follower”, “Browser”, “Searcher” and “Researcher”.
Followers try to follow any guidance provided by the museum or cultural site, and also try
to learn something from it. Browsers will not follow a guide but go anywhere that
looks interesting, and then search for information about it. Searchers
search for and collect detailed information on specific exhibits or collections, whilst
Researchers go a step further into scientific research about specific exhibits
        <xref ref-type="bibr" rid="ref8">(Konstantakis et
al., 2018)</xref>
        .
      </p>
    </sec>
    <sec id="sec-6">
      <title>Gesture Recognition and 3D Interaction</title>
      <p>
        Gesture recognition refers to a computer's ability to understand gestures involving
physical movements of multiple body parts (fingers, arms, hands, head, feet, etc.) and to
execute commands based on the corresponding gesture, thus allowing interaction with
the computer environment. Many gesture recognition approaches suggest that gestures
used as interaction methods between humans can also be successfully applied as a
natural and intuitive way to interact with machines [
        <xref ref-type="bibr" rid="ref11">Ren et al., 2016</xref>
        ][
        <xref ref-type="bibr" rid="ref14">Yeo et al., 2015</xref>
        ].
In the ARTISTS framework, we use the Leap Motion controller to track users' hands and
match their movements to commands in the virtual environment. As the user's mobile
device is enclosed in a Google Cardboard type VR headset, it is impossible to tap on the
screen. The Leap Motion API gives us the tools to interact with the application interface using the
hands. Simple tasks such as selecting a character, dragging the volume slider, selecting
from menus and pressing UI buttons can be performed with natural hand movements in
space, in a fairly accurate, intuitive and entertaining way.
      </p>
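      <p>The short Python sketch below illustrates how tracked hand data could be mapped to the UI commands mentioned above. The HandFrame structure, the pinch threshold and the helper names are hypothetical stand-ins for the data a hand-tracking SDK such as the Leap Motion API exposes; they are not the actual API.</p>
      <preformat>
# Illustrative mapping from a tracked-hand frame to a simple UI command.
# HandFrame and its fields are assumptions, not the real Leap Motion API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HandFrame:
    pinch_strength: float   # 0.0 (open hand) to 1.0 (full pinch)
    palm_position: tuple    # (x, y, z) position relative to the sensor

PINCH_THRESHOLD = 0.8       # assumed value, tuned per application

def to_ui_command(frame: HandFrame, hovered_element: str) -> Optional[str]:
    """Translate one hand-tracking frame into a UI command, if any."""
    if frame.pinch_strength >= PINCH_THRESHOLD:
        return "press:" + hovered_element   # e.g. press a button or grab a slider
    return None                             # no gesture recognised in this frame
      </preformat>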
    </sec>
    <sec id="sec-7">
      <title>User Personas</title>
      <p>The design of personas as ‘fictional’ characters is considered a consistent and
representative way to define actual users and their goals. However, it is important to
clarify the exact number of personas in each case in order to focus on the visitor
profiles to be examined. In ARTISTS, we take these User Personas and their
characteristics into consideration and create further personas by splitting Followers and Browsers
into three Levels each, while Searchers and Researchers are combined and split into two Levels.
These Levels have a quantitative meaning: for example, a Level 2 Researcher has done more
research and shows more of the original Researcher characteristics than a Level 1
Researcher.</p>
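      <p>The sketch below simply enumerates the resulting ARTISTS persona set under the split described above (Followers and Browsers in three Levels each, Searchers and Researchers combined in two). The names and data structure are illustrative, not taken from the ARTISTS implementation.</p>
      <preformat>
# Illustrative enumeration of the ARTISTS personas derived from the level split.
BASE_LEVELS = {
    "Follower": 3,
    "Browser": 3,
    "Searcher/Researcher": 2,
}

ARTISTS_PERSONAS = [
    (base, level)
    for base, levels in BASE_LEVELS.items()
    for level in range(1, levels + 1)
]
# [("Follower", 1), ("Follower", 2), ("Follower", 3),
#  ("Browser", 1), ..., ("Searcher/Researcher", 2)]
      </preformat>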
      <p>In order to match each museum (or any other cultural site) visitor to an ARTISTS
persona, the system collects and processes various data about visitors. Data mining in
ARTISTS involves no user interference or preparation and is a three-stage process:
1. Face recognition: Using Microsoft Cognitive Services, the user's age and emotions
are estimated from a picture of their face taken with the device's front camera and sent over
the network (a sketch of this stage follows after this list). In addition, a database of visitors
is created, turning every future visit into a more successfully personalized experience.
2. Social network data mining: Using data mining algorithms, visitor data (profile
and prior experience) are extracted from the user's social profiles (Facebook, Twitter
or Instagram). To remain fully compatible with GDPR rules, the algorithms only use data that
users expose as public.
3. Behavior study: Sensors embedded in the visiting area monitor the visitor's path
and behavior in the space, providing ARTISTS with more personalization data.</p>
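      <p>A minimal sketch of the face-analysis stage follows, assuming the Microsoft Cognitive Services Face detect REST endpoint with age and emotion attributes. The endpoint URL, key handling and attribute names are assumptions that should be checked against the current service documentation, which has since restricted some of these attributes.</p>
      <preformat>
# Hedged sketch of stage 1 (face recognition) using the assumed Face detect
# REST endpoint of Microsoft Cognitive Services; verify against current docs.
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"   # placeholder
KEY = "YOUR-SUBSCRIPTION-KEY"                                    # placeholder

def analyse_visitor_face(image_bytes):
    """Send the front-camera photo and return the detected age and emotions."""
    response = requests.post(
        ENDPOINT + "/face/v1.0/detect",
        params={"returnFaceAttributes": "age,emotion"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
        timeout=10,
    )
    response.raise_for_status()
    faces = response.json()
    if not faces:
        return {}
    attributes = faces[0]["faceAttributes"]
    return {"age": attributes["age"], "emotions": attributes["emotion"]}
      </preformat>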
    </sec>
    <sec id="sec-8">
      <title>System Architecture</title>
      <p>ARTISTS is a client-server system, as shown in Image 1. The core of the system is a
server, located either in a museum (or any cultural site) or at a remote location. The server
supports communication between the database, the application and the sensor network (installed
in the museum). Furthermore, additional server tasks are responsible for matching visitors to
predefined personas and for serving multimedia for the VR environment.</p>
      <p>The mobile application provides the interface between the user and the
ARTISTS system. Depending on the visitor's profile, the system presents a different scenario
and service. The server is also responsible for handling input from sensors and Smart Objects (SO)
that can alter the application's content.</p>
      <p>Image 1: System architecture in ARTISTS</p>
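      <p>As a rough illustration of one of the server responsibilities described above, the sketch below exposes a single endpoint that receives visitor data from the mobile client and returns an assigned persona. Flask and the matching rule are assumptions made for illustration only; the server stack is not specified beyond the client-server split and the MySQL database.</p>
      <preformat>
# Hypothetical sketch of a server endpoint that assigns a persona to a visitor.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/persona", methods=["POST"])
def assign_persona():
    visitor = request.get_json()   # e.g. age, emotions, social data, sensor data
    # Placeholder rule: a real deployment would combine the three data-mining
    # stages (face recognition, social networks, behaviour study).
    base = "Searcher/Researcher" if visitor.get("prior_visits", 0) > 3 else "Follower"
    return jsonify({"persona": base, "level": 1})

if __name__ == "__main__":
    app.run()
      </preformat>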
    </sec>
    <sec id="sec-9">
      <title>Use Case Scenario</title>
      <p>After the necessary visitor data have been collected and one of the personas from Table 1
has been assigned, one of the 19 usage scenarios may be initiated. Matching a
visitor to a scenario is a dynamic process. For example, a user can start visiting a museum
as a Level 3 Follower, but after a while their behavior can turn them into a Level 1 Browser
and then a Level 2 Browser. This happens because behavior monitoring is an ongoing
process that provides feedback data which can eventually change the flow of the user
experience. Each of the scenarios in Table 2 differs in functionality,
interactivity, display quality and load, and audio.</p>
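      <p>The sketch below illustrates this dynamic persona re-assignment. The behaviour events and the update rule are assumptions made for illustration; the actual system relies on continuous sensor feedback and the predefined scenarios of Table 2.</p>
      <preformat>
# Illustrative re-assignment of (base persona, level) as behaviour data arrive.
def update_persona(current, behaviour_events):
    """Re-evaluate the (base, level) pair whenever new behaviour data arrive."""
    base, level = current
    exploring = sum(1 for e in behaviour_events if e == "left_guided_route")
    if base == "Follower" and exploring >= 2:
        return ("Browser", 1)                  # e.g. Level 3 Follower becomes Level 1 Browser
    if base == "Browser" and exploring >= 5:
        return ("Browser", min(level + 1, 3))  # deeper browsing raises the Browser level
    return current
      </preformat>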
      <p>Image 2: The VR representation of the famous painting “Children Concert” created by Georgios Iakovidis</p>
    </sec>
    <sec id="sec-10">
      <title>Conclusion - Future work</title>
      <p>In this work, we described the ARTISTS framework, a mobile application that displays a
VR-reconstructed environment of a painting and immerses users, allowing them to
interact with its 3D aspects. We used the Leap Motion controller as a sensor for detecting
gestures, along with Unity, Microsoft's Azure Cognitive Services and Android
Studio for the implementation of the application, and a MySQL database that stores
the 3D environment and the painting's data. Our next step is the ARTISTS evaluation
stage, in which we will test our framework to evaluate the user experience and the
efficiency of the integrated technologies.</p>
    </sec>
    <sec id="sec-11">
      <title>Acknowledgments</title>
      <p>The research in this paper was financially supported by the General Secretariat for
Research and Technology (GSRT) and the Hellenic Foundation for Research and
Innovation (HFRI). John Aliprantis has been awarded a scholarship for his PhD
research from the “1st Call for PhD Scholarships by HFRI” (Grant Code 234).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Antoniou</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Lepouras</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>Modelling visitors' profiles: A study to investigate adaptation aspects for museum learning technologies</article-title>
          .
          <source>J. Comput. Cult. Herit</source>
          .
          <volume>3</volume>
          (
          <issue>2</issue>
          ),
          <source>Article No.7</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Antoniou</given-names>
            <surname>Angeliki</surname>
          </string-name>
          et al. (
          <year>2016</year>
          ).
          <article-title>Capturing the Visitor Profile for a Personalized Mobile Museum Experience: an Indirect Approach</article-title>
          , University of Peloponnese, University of Athens, Pompeu Fabra University,
          <source>CEUR Workshop Proceedings</source>
          , Vol-
          <volume>1618</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Chang</given-names>
            <surname>Kuo-En</surname>
          </string-name>
          et al. (
          <year>2014</year>
          ).
          <article-title>Development and behavioral pattern analysis of a mobile guide system with augmented reality for painting appreciation instruction in an art museum</article-title>
          ,
          <source>Elsevier Computers &amp; Education</source>
          <volume>71</volume>
          , p.
          <fpage>185</fpage>
          -
          <lpage>197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abowd</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salber</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>A conceptual framework and toolkit for supporting the rapid prototyping of context-aware applications</article-title>
          .
          <source>Human-Computer Interaction (special issue on context-aware computing)</source>
          ,
          <volume>16</volume>
          (
          <issue>2-4</issue>
          ), pp.
          <fpage>97</fpage>
          -
          <lpage>166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Eardley W.A</surname>
          </string-name>
          . et al. (
          <year>2016</year>
          ).
          <article-title>An Ontology Engineering Approach to User Profiling for Virtual</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <article-title>Tours of Museums and Galleries</article-title>
          ,
          <source>International Journal of Knowledge Engineering</source>
          , Vol.
          <volume>2</volume>
          .
          <string-name>
            <given-names>Katz</given-names>
            <surname>Shahar</surname>
          </string-name>
          et al. (
          <year>2014</year>
          ).
          <article-title>Preparing Personalized Multimedia Presentations for a Mobile Museum Visitors' Guide - a Methodological Approach</article-title>
          , The University of Haifa - Israel, ITC- irst - Italy.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>Konstantakis</given-names>
            <surname>Markos</surname>
          </string-name>
          et al. (
          <year>2017</year>
          ).
          <article-title>Formalising and evaluating Cultural User Experience</article-title>
          , University of the Aegean, IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>Konstantakis</given-names>
            <surname>Markos</surname>
          </string-name>
          et al. (
          <year>2018</year>
          ).
          <article-title>A Methodology for Optimised Cultural User peRsonas Experience - CURE Architecture</article-title>
          ,
          <source>British HCI 2018 Conference</source>
          , Belfast, Northern Ireland.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Morris</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          et al. (
          <year>2004</year>
          ).
          <article-title>Learning Journeys: Using technology to connect the four stages of meaning making</article-title>
          .
          <source>Birmingham: Morris Hargreaves McIntyre</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Naismith</surname>
            ,
            <given-names>Laura</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>M. Paul</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Using mobile technologies for multimedia tours in a traditional museum setting</article-title>
          ,
          <source>mLearn 2006: Across generations and cultures</source>
          , p.
          <fpage>23</fpage>
          , Canada.
        </mixed-citation>
        <mixed-citation>
          <string-name>
            <surname>Pujol</surname>
            ,
            <given-names>Laia</given-names>
          </string-name>
          et al. (
          <year>2012</year>
          ).
          <article-title>Personalizing interactive digital storytelling in archaeological museums: the CHESS project</article-title>
          ,
          <source>The CHESS Consortium</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yuan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meng</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Robust part-based hand gesture recognition using kinect sensor</article-title>
          .
          <source>IEEE Transactions on Multimedia</source>
          ,
          <volume>15</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Roto</surname>
            <given-names>V.</given-names>
          </string-name>
          et al. (
          <year>2010</year>
          ).
          <article-title>User Experience white paper. Bringing clarity to the concept of user experience, Dagstuhl Seminar on Demarcating User Experience</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Varadarajan</surname>
            ,
            <given-names>Vaishnevi S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hansen</surname>
            ,
            <given-names>John H.L.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Analysis of Lombard effect under different types and levels of noise with application to In-set Speaker ID systems</article-title>
          , University of Texas at Dallas, USA.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Yeo</surname>
            ,
            <given-names>H. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>B. G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Hand tracking and gesture recognition system for human-computer interaction using low-cost hardware</article-title>
          .
          <source>Multimedia Tools and Applications</source>
          ,
          <volume>74</volume>
          (
          <issue>8</issue>
          ),
          <fpage>2687</fpage>
          -
          <lpage>2715</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>