<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Advanced Visual Interfaces for Cultural Heritage</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Cristina Gena</string-name>
          <aff>The University of Turin, Corso Svizzera 185, 10149 Turin, Italy</aff>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Berardina De Carolis</string-name>
          <aff>The University of Bari “Aldo Moro”, 70126 Bari, Italy</aff>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tsvi Kuflik</string-name>
          <aff>The University of Haifa, Mount Carmel, 31905 Haifa, Israel</aff>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabrizio Nunnari</string-name>
          <aff>DFKI, Campus D3.2, 66123 Saarbrücken, Germany</aff>
        </contrib>
      </contrib-group>
      <abstract>
        <p>AVI provided an attractive opportunity for exploring novel visual interfaces for cultural heritage (CH). CH traditionally draws considerable research attention when it comes to exploring the potential benefits of applying novel technology in realistic settings. At the same time, AVI focuses on exploring state-of-the-art visual interfaces and their application in various domains. The AVI-CH workshop nicely demonstrated the potential of combining these two aspects: state-of-the-art interface technologies with the information-rich CH domain. The result was a number of high-quality submissions, with a diversity of topics presented in the papers accepted and discussed at the workshop.</p>
      </abstract>
      <kwd-group>
        <kwd>Advanced Visualization</kwd>
        <kwd>Cultural Heritage</kwd>
        <kwd>Workshop</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS Concepts</title>
      <p>• Human-centered computing~Human-computer interaction (HCI)</p>
    </sec>
    <sec id="sec-2">
      <title>1. INTRODUCTION</title>
      <p>The rapid development of information and communication
technologies (ICT) and the Internet has enabled cultural
heritage (CH) institutions to provide access to their collections
in a variety of ways, both on-site and online, and to
attract even wider audiences than those that visit the physical
museums. A major driver and enabler of this trend is the enormous
growth in user interface modalities and in information
visualization technologies. User interface technologies are
growing and evolving by the day: they range from tiny
smartwatch screens to wall-sized public displays, and from
mouse and keyboard to touch-, voice-, gesture- and
gaze-activated systems.</p>
      <p>
        Regarding advanced visual interfaces, there are several
successful examples of 3D technologies for virtual museums.
The use of (web) 3D in cultural heritage allows the general
public to enjoy immersive experiences in virtual, reconstructed
locations, such as ancient buildings and cities, and to visit
existing but remotely located sites, such as world-wide
cultural institutions (e.g., the Google Art Project [
        <xref ref-type="bibr" rid="ref2">1</xref>
        ]). For
preservation purposes, web 3D provides scholars and cultural
heritage professionals with a way to consult and maintain
visual repositories of real exhibits, with the possibility of
visualizing, comparing and studying 3D digital equivalents of
real artworks physically situated in different locations.
In spite of these potential benefits, cultural heritage is also a very
challenging application domain for such novel ICT.
It is ubiquitous – just look around you and see
that you are surrounded by it. There is an abundance of
CH-related information available about almost every object we
can think of. How can we access and enjoy this information in
a ubiquitous computing scenario?
Advanced and natural human-computer interaction is a key
factor in enabling such access, and visual interfaces, whether
tiny mobile screens or large wall-mounted displays,
can all be part of the CH Internet of Things and of a ubiquitous
CH infrastructure, where information can be personalized and
displayed or projected on screens, or overlaid on real objects.
The goal of the workshop was to bring together researchers and
practitioners interested in addressing the above-described
challenges by exploring the potential of state-of-the-art
advanced visual interfaces for enhancing our daily cultural
heritage experience.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. CONTRIBUTIONS</title>
      <p>
The presentations and discussions at the workshop spanned a
large variety of topics combining AVI and CH. We discussed
the submissions from several practical aspects, first of
all looking at onsite vs. online points of view (a distinction that
is increasingly challenging, given that state-of-the-art
augmented and virtual reality technology can be applied both
onsite and offsite), and then looking at the variety of interaction
techniques and technologies that were presented.
      </p>
      <p>
        The workshop started with an invited talk by Franco Cutugno,
who presented advanced interfaces for CH exhibition design.
He presented an example of an interactive floor (the PaSt project),
which allows users to interact with history by walking on a
virtual carpet, and then discussed an approach based on Audio
Augmented Interaction adopted in the CARUSO project.
CARUSO is an Audio Augmented Reality Android app based
on 3D audio that creates virtual soundscapes through the
binaural reproduction of voices and sound effects. The user is
free to move in the environment, and the output follows her
movement, since the software works with interactive
headphones that detect head orientation via an inertial
sensor and communicate with the device via Bluetooth.
      </p>
    </sec>
    <sec id="sec-4">
      <title>2.1 On-site interaction</title>
      <p>
A large and diverse set of submissions focused on
supporting museum visitors onsite, making use of a variety
of technologies for different applications. Emmanouil
Zidianakis presented the design and implementation of a
technological framework based on Ambient Intelligence to
enhance visitor experiences within real or virtual CH
institutions by augmenting two-dimensional real or virtual
paintings. Among the major contributions of this work is the
support of personalized multi-user access to exhibits, also
facilitating adaptation mechanisms for adjusting the
interaction style and content to the requirements of each
visitor. A standards-compliant knowledge representation and
the appropriate authoring tools guarantee the effective
integration of this approach into the CH context. They suggested
the use of QR codes for two-way interaction with the
environment – providing a personal profile and a possible way
of getting personalized information – in addition to information
projection (or tablets) as a means of personalized information
delivery to museum visitors. [
        <xref ref-type="bibr" rid="ref15">14</xref>
        ].
Mokatren and Kuflik in a follow-up to [
        <xref ref-type="bibr" rid="ref12 ref13">11, 12</xref>
        ] examined the
potential of using a mobile eye tracker for indoor positioning
and intuitive interaction. They presented the results of a
preliminary study that explored the potential of mobile
eye-tracking and vision technology for enhancing the museum visit
experience. Their hypothesis is that the use of eye-tracking
technology enables natural and intuitive interaction of the
visitor with the information space. Satisfactory preliminary
results from examining the performance of a mobile eye
tracker in a realistic setting were presented, suggesting that the
technology has reached a degree of maturity sufficient for
building a reliable system on top of it.
      </p>
      <p>
        Starting from a collaboration with a world-famous Italian
designer, Calandra, Di Mauro, Cutugno and Di Martino [
        <xref ref-type="bibr" rid="ref3">2</xref>
        ]
defined a Natural User Interface for exploring 360° panoramic
artworks presented on wall-sized displays. Specifically, they
let the user “move the head” as a natural way of
exploring these large digital artworks. To this aim, they
developed a system that includes a remote head-pose estimator to
capture the movements of users standing in front of the wall-sized
display. With natural user interfaces, it is difficult to get
feedback from users about their interest in the point of the
artwork they are looking at. To address this issue, they
complemented the gaze estimator with a preliminary
emotional-analysis solution, able to implicitly infer the interest
of the user in the presented content from his/her pupil size.
Preliminary results of a user study with 51 subjects show that
most of the subjects were able to interact properly with the
system from the very first use, and that the emotional module
is a promising solution, even if further work is needed
to address specific situations.
      </p>
      <p>
        Gena [
        <xref ref-type="bibr" rid="ref5">4</xref>
        ] presented a specific aspect of the large-scale
WantEat project: a reward-based field
evaluation of the interaction model developed for the project,
which brings together the real and virtual worlds [
        <xref ref-type="bibr" rid="ref6">5</xref>
        ]. Real objects are
used as gateways for accessing the cultural heritage of a
territory. Hence, they designed an intelligent interaction model
that allows users to explore the region starting from a real
object they come into contact with. In particular, the interaction
model supports the visualization and exploration of identifiable
objects of the real world and their connections with other objects. It
proposes a paradigm that enables personalized, social and
serendipitous interaction with networked things, allowing
continuous transition between the real and the digital worlds.
They illustrate the procedure and the results of this
evaluation, carried out with a prototype application with no
active user community. Results show that the interaction
model stimulates the exploration of the objects in the system
and their networks, and partially promotes the interactive
features of the application, such as social actions. For more details
see also [
        <xref ref-type="bibr" rid="ref16">15</xref>
        ].
      </p>
      <p>
        Antonio Origlia presented a human-robot interaction setup
where people actively choose how much information
concerning the available topics they would like to access. To
provide engaging presentations, the humanoid robot exhibits
behaviour modelled on that of a human presenter.
Monitoring the evolution of the interactive session allows the
system to estimate users' general interest in the available
contents. The results show that people were very satisfied with
the interactive experience and that the level of interest
detected automatically by the system was consistent with the
one declared by the users. Both subjective
and objective metrics were used to validate the approach [
        <xref ref-type="bibr" rid="ref13">12</xref>
        ].
Unfortunately, Nicola Orio, who was supposed to report the
results of an initial experiment on the acoustic description of
the city of Padova (its soundscape), was unable to attend the
workshop. Still, the paper reports the results of a study in which
a group of users was involved in recording the sounds of the
city and in tracking their positions in space and time using a
web-based interface. Collaboration and
coordination among participants were promoted using a
wiki, where participants could assign themselves the locations
to be recorded and define the standards to be followed. The
result is an acoustic map of the city of Padova,
which can be navigated in space and time through a web
interface. A mobile version of the interface is currently under
development [
        <xref ref-type="bibr" rid="ref14">13</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>2.2 Online interaction</title>
      <p>
There were a few online-only systems. Again, these were quite
diverse. As an extension of [
        <xref ref-type="bibr" rid="ref10">9</xref>
        ], Lanir presented a system that
provides museum staff with an online visualization of museum
visitors' onsite behavior. Data collected by automatically tracking
visitors' movements in the museum and their interaction with a
context-aware museum visitor guide served as the
basis for the analytic visualization. Using this information,
they provide an interface that visualizes both individual and
small-group movement patterns, as well as aggregated
information about overall visitor engagement [
        <xref ref-type="bibr" rid="ref11">10</xref>
        ].
      </p>
      <p>
        Alan Wecker, following [
        <xref ref-type="bibr" rid="ref8">7</xref>
        ][
        <xref ref-type="bibr" rid="ref9">8</xref>
        ], introduced the idea of an ongoing
events collection – a “scrapbook” for future use. He presented
the idea of an application for collecting cultural heritage
experiences, discussing its utility and makeup as well as possible
features and visualizations [
        <xref ref-type="bibr" rid="ref18">17</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>2.3 Interaction techniques and technologies</title>
      <p>
In terms of technologies, the diverse set of papers spans a large
variety of presentation and interaction techniques, from
conventional desktop displays [
        <xref ref-type="bibr" rid="ref11">10</xref>
        ], through the use of mobile
guides for information delivery [
        <xref ref-type="bibr" rid="ref16">15</xref>
        ] and the use of eye tracking as
a natural interface for positioning and interaction [
        <xref ref-type="bibr" rid="ref13">12</xref>
        ][
        <xref ref-type="bibr" rid="ref3">2</xref>
        ],
to virtual and augmented reality [
        <xref ref-type="bibr" rid="ref15">14</xref>
        ][
        <xref ref-type="bibr" rid="ref3">2</xref>
        ], natural language
interaction [
        <xref ref-type="bibr" rid="ref18">17</xref>
        ], audio-based interaction [
        <xref ref-type="bibr" rid="ref14">13</xref>
        ] and even
human-robot interaction [
        <xref ref-type="bibr" rid="ref13">12</xref>
        ].
      </p>
    </sec>
    <sec id="sec-7">
      <title>3. SUMMARY</title>
      <p>
The workshop proved successful in promoting discussion
on a variety of novel AVI technologies and their application to
CH. The wide diversity of aspects and technologies, and their
combinations, triggered discussion about both the
opportunities and the challenges in applying these
technologies to CH. The interaction also led to discussions about
possible future collaboration among AVI-CH participants, as
well as a decision to follow up on future results of the studies
presented at the workshop, aiming for a special issue in a
leading scientific journal.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref2">
        <mixed-citation>[1] https://www.google.com/culturalinstitute/project/artproject</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Davide Maria</given-names>
            <surname>Calandra</surname>
          </string-name>
          , Dario Di Mauro, Franco Cutugno and Sergio Di Martino
          (
          <year>2016</year>
          ).
          <article-title>Navigating Wall-sized Displays with the Gaze: a Proposal for Cultural Heritage</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          ,
          <article-title>CEUR-WS.org</article-title>
          ,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Fabio Marco</given-names>
            <surname>Caputo</surname>
          </string-name>
          , Irina Mihaela Ciortan, Davide Corsi, Marco De Stefani and
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Giachetti</surname>
          </string-name>
          . (
          <year>2016</year>
          ).
          <article-title>Gestural Interaction and Navigation Techniques Aimed for Virtual Museum Experiences</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          ,
          <article-title>CEUR-WS.org</article-title>
          ,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Federica</given-names>
            <surname>Cena</surname>
          </string-name>
          , Luca Console, Cristina Gena, Alessandro Marcengo and
          <string-name>
            <given-names>Amon</given-names>
            <surname>Rapp</surname>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>A Field Evaluation of an Intelligent Interaction Between People and a Territory and its Cultural Heritage</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          , CEUR-WS.org,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Luca</given-names>
            <surname>Console</surname>
          </string-name>
          , Fabrizio Antonelli, Giulia Biamino, Francesca Carmagnola, Federica Cena, Elisa Chiabrando, Vincenzo Cuciti, Matteo Demichelis,Franco Fassio, Fabrizio Franceschi, Roberto Furnari, Cristina Gena, Marina Geymonat, Piercarlo Grimaldi, Pierluigi Grillo, Silvia Likavec, Ilaria Lombardi, Dario Mana, Alessandro Marcengo, Michele Mioli, Mario Mirabelli, Monica Perrero, Claudia Picardi, Federica Protti, Amon Rapp,Rossana Simeoni, Daniele Theseider Dupré, Ilaria Torre, Andrea Toso, Fabio Torta,
          <string-name>
            <given-names>Fabiana</given-names>
            <surname>Vernero</surname>
          </string-name>
          .
          <article-title>Interacting with social networks of intelligent things and people in the world of gastronomy</article-title>
          (
          <year>2013</year>
          ).
          <source>TiiS 3(1): 4</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [6]
          <string-name>
            <surname>De Carolis</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palestra</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <article-title>Gaze-based Interaction with a Shop Window</article-title>
          .
          <source>In Proceedings of AVI '16: International Working Conference on Advanced Visual Interfaces</source>
          ,
          <year>June 2016</year>
          , Bari, Italy.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Tsvi</given-names>
            <surname>Kuflik</surname>
          </string-name>
          , Judy Kay, and
          <string-name>
            <given-names>Bob</given-names>
            <surname>Kummerfeld</surname>
          </string-name>
          (
          <year>2010</year>
          , June).
          <article-title>Lifelong personalized museum experiences</article-title>
          .
          <source>In Proceedings of Workshop on Pervasive User Modeling and Personalization (PUMP'10)</source>
          (pp.
          <fpage>9</fpage>
          -
          <lpage>16</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Tsvi</given-names>
            <surname>Kuflik</surname>
          </string-name>
          , Alan Wecker, Joel Lanir and
          <string-name>
            <given-names>Oliviero</given-names>
            <surname>Stock</surname>
          </string-name>
          . (
          <year>2015</year>
          ).
          <article-title>An integrative framework for extending the boundaries of the museum visit experience: linking the pre, during and post visit phases</article-title>
          .
          <source>Information Technology &amp; Tourism</source>
          ,
          <volume>15</volume>
          (
          <issue>1</issue>
          ),
          <fpage>17</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Joel</given-names>
            <surname>Lanir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Bak</surname>
          </string-name>
          and
          <string-name>
            <given-names>Tsvi</given-names>
            <surname>Kuflik</surname>
          </string-name>
          . (
          <year>2014</year>
          , June).
          <article-title>Visualizing Proximity-Based Spatiotemporal Behavior of Museum Visitors using Tangram Diagrams</article-title>
          . In Computer Graphics Forum (Vol.
          <volume>33</volume>
          , No.
          <issue>3</issue>
          , pp.
          <fpage>261</fpage>
          -
          <lpage>270</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Joel</given-names>
            <surname>Lanir</surname>
          </string-name>
          , Tsvi Kuflik, Nisan Yavin,
          <string-name>
            <given-names>Kate</given-names>
            <surname>Leiderman</surname>
          </string-name>
          and
          <string-name>
            <given-names>Michael</given-names>
            <surname>Segal</surname>
          </string-name>
          (
          <year>2016</year>
          )
          <article-title>Visualizing Museum Visitors' Behavior</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          ,
          <article-title>CEUR-WS.org</article-title>
          ,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Moayad</given-names>
            <surname>Mokatren</surname>
          </string-name>
          and
          <string-name>
            <given-names>Tsvi</given-names>
            <surname>Kuflik</surname>
          </string-name>
          .
          <article-title>Exploring the potential contribution of mobile eye-tracking technology in enhancing the museum visit experience</article-title>
          .
          <source>In Proceedings of the First Joint Workshop on Smart Connected and Wearable Things, co-located with IUI 2016</source>
          , Sonoma, CA, 10.3.
          <year>2016</year>
          . Available online at: https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxzY3d0MjAxNnxneDoyMGJjMDM2MzFjNzJmOTU0
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Moayad</given-names>
            <surname>Mokatren</surname>
          </string-name>
          and
          <string-name>
            <given-names>Tsvi</given-names>
            <surname>Kuflik</surname>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Exploring the potential contribution of mobile eye-tracking technology in enhancing the museum visit experience</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          , CEUR-WS.org,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Antonio</given-names>
            <surname>Origlia</surname>
          </string-name>
          , Antonio Rossi, Maria Laura Chiacchio and
          <string-name>
            <given-names>Francesco</given-names>
            <surname>Cutugno</surname>
          </string-name>
          . (
          <year>2016</year>
          ).
          <article-title>Cultural heritage presentations with a humanoid robot using implicit feedback</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          ,
          <article-title>CEUR-WS.org</article-title>
          ,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Nicola</given-names>
            <surname>Orio</surname>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Padova Soundscape: a Crowdsourcing Approach to Describe the Sound of a City</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          ,
          <article-title>CEUR-WS.org</article-title>
          ,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Nikolaos</given-names>
            <surname>Partarakis</surname>
          </string-name>
          , Margherita Antona, Constantine Stephanidis and
          <string-name>
            <given-names>Emmanouil</given-names>
            <surname>Zidianakis</surname>
          </string-name>
          . (
          <year>2016</year>
          ).
          <article-title>Adaptation and content personalization in the context of multi user museum exhibits</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          ,
          <article-title>CEUR-WS.org</article-title>
          ,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Amon</given-names>
            <surname>Rapp</surname>
          </string-name>
          , Federica Cena, Cristina Gena, Alessandro Marcengo and
          <string-name>
            <given-names>Luca</given-names>
            <surname>Console</surname>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Using game mechanics for field evaluation of prototype social applications: a novel methodology</article-title>
          .
          <source>Behaviour &amp; IT</source>
          <volume>35</volume>
          (
          <issue>3</issue>
          ):
          <fpage>184</fpage>
          -
          <lpage>195</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Antonio</given-names>
            <surname>Sorgente</surname>
          </string-name>
          , Paolo Vanacore, Antonio Origlia, Enrico Leone, Francesco Cutugno and
          <string-name>
            <given-names>Francesco</given-names>
            <surname>Mele</surname>
          </string-name>
          . (
          <year>2016</year>
          ).
          <article-title>Multimedia Responses in Natural Language Dialogues</article-title>
          ,
          <source>AVI*CH, the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          ,
          <article-title>CEUR-WS.org</article-title>
          ,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Alan</given-names>
            <surname>Wecker</surname>
          </string-name>
          , Tsvi Kuflik and
          <string-name>
            <given-names>Oliviero</given-names>
            <surname>Stock</surname>
          </string-name>
          . (
          <year>2016</year>
          ).
          <article-title>CHEST: Cultural Heritage Experience Scrapbook Tool</article-title>
          , AVI*CH,
          <source>the first Workshop on Advanced Visual Interfaces for Cultural Heritage, Bari, 7th of June</source>
          <year>2016</year>
          ,
          <article-title>CEUR-WS.org</article-title>
          ,
          <source>ISSN 1613-0073</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>