<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Semantic Web Technologies for Improving Remote Visits of Museums Using a Mobile Robot</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michel Buffa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Catherine Faron Zucker</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thierry Bergeron</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hatim Aouzal</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universite Co</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>te d'Azur</institution>
          ,
          <addr-line>CNRS, INRIA, I3S</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The Azkar research project focuses on the remote control of a mobile robot using the emerging WebRTC Web technology for real-time communication. One of the use cases addressed is a remote visit of the French Museum of the Great War in Meaux. For this purpose, we designed an ontology for describing the main scenes in the museum, the objects that compose them, and the different trails the robot can follow in a given time period for a targeted audience, along with their waypoints and observation points. This RDF dataset is exploited to assist the human guide in designing a trail, and possibly in adapting it during the visit. In this paper we present the Azkar Museum Ontology, the RDF dataset describing some emblematic scenes of the museum, and an experiment that took place in June 2016 with a robot controlled by an operator located 800 km from the museum. We propose to demonstrate this work during the conference by organizing a remote visit from the conference demo location.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>In this paper we present a work started in 2015 in the context of the Azkar
project (http://azkar.fr), funded by the French Public Investment Bank, which focuses on the
remote control of a mobile robot using the emerging WebRTC Web technology
for real-time communication. One of the use cases addressed in this project is the
tele-robotic exploration of museums for primary and secondary schools, and we
report on our work enabling a remote visit of the French Museum of the Great
War (http://www.museedelagrandeguerre.eu/en).</p>
      <p>The research question we address is "How can we assist a teacher planning a
guided tour of a museum for her class, and how can we assist her during the visit
itself?" Our contribution lies in the joint use of (1) a mobile robot equipped
with cameras and sensors, (2) the emergent W3C WebRTC standard for real-time
communication, and (3) an RDF dataset and Linked Data to represent
museum data and related resources.</p>
      <p>In the experiment conducted with the Museum of the Great War in 2016,
a human remotely controls the mobile robot in the museum and plays the role
of guide for schoolchildren, using high-level tools to help him in this task:
designing the visit, selecting locations and orientations of the robot in
front of some scenes, proposing linked multimedia resources, etc. These tools
rely on the exploitation of semantic descriptions of the scenes, of the objects
in the scenes, of possible locations/orientations for observing a scene and, more
generally, of topology constraints (distances, time to go from one location to
another, on-board camera field of view, etc.). We designed the Azkar Museum
Ontology (AMO) and created an RDF dataset. During the planning of the visit,
this dataset is queried in combination with the Web of Data to retrieve relevant
multimedia resources to propose to the visitors, and during the visit, queries
are triggered in certain situations (geo-localization, time elapsed) in order to
take or suggest decisions (display this video, go to the next location).</p>
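      <p>As a minimal sketch of such a combined query, the following SPARQL request
federates the local dataset with the DBpedia.fr endpoint to enrich the media of a
scene with French abstracts. The amo: namespace and the term names (amo:hasMedia,
amo:relatedEntity) are illustrative assumptions, not the actual AMO vocabulary.</p>
      <preformat>
PREFIX amo: &lt;http://mainline.i3s.unice.fr/azkar/ontology#&gt;
PREFIX dbo: &lt;http://dbpedia.org/ontology/&gt;

SELECT ?scene ?media ?abstract
WHERE {
  ?scene a amo:Scene ;
         amo:hasMedia ?media ;        # local multimedia resource (assumed term)
         amo:relatedEntity ?entity .  # link to the Web of Data (assumed term)
  SERVICE &lt;http://fr.dbpedia.org/sparql&gt; {
    ?entity dbo:abstract ?abstract .
    FILTER ( lang(?abstract) = "fr" )
  }
}
      </preformat>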
      <p>This paper is organized as follows. Section 2 summarizes related work.
Section 3 presents the AMO ontology and the dataset we constructed for the
Museum of the Great War. Section 4 describes the proposed demonstration.</p>
    </sec>
    <sec id="sec-2">
      <title>Related work</title>
      <p>
        The number of robots deployed in museums and exhibitions has
grown steadily (see [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for a survey), most of them acting as simple mobile video-conference
systems, such as the robot tours described at
http://www.nma.gov.au/engage-learn/robot-tours. However, in this field, research works involving
mobile robots usually do not rely on semantic descriptions of the scenes, and
focus more on low-level constraints such as sensors, latency or security (see for
example [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] or [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]).
      </p>
      <p>
        Regarding the use of knowledge models, several works have been conducted
aiming at the development of systems supporting museum visits and access to
cultural heritage, most of them involving mobile devices (phones, tablets) that
adapt to the user's profile, are sensitive to their context, and help them build
their visit of the museum based on their preferences and constraints.
Several kinds of recommendation systems are used, e.g. content-based in the
CHIP project [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], collaborative filtering in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        The Azkar project is at the intersection of both worlds: it uses a mobile
robot and an RDF dataset describing some high-level visitor profiles (primary
schools and high schools) as well as the historical content of the museum scenes
the robot is going to explore. The papers cited above, as well as the Hippie project
(1999) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], have been a good source of inspiration for the remote control framework we
developed (which is out of the scope of this paper) and for the high-level design
of the AMO vocabulary, starting bottom-up from a large database that describes
all data in the museum, with many details and objects that are not noticeable
during a visit, and arriving at an abstract description of museum scenes.
      </p>
    </sec>
    <sec id="sec-3">
      <title>The AMO Vocabulary and the RDF Dataset describing the Museum of the Great War</title>
      <p>The AMO vocabulary is available online (http://mainline.i3s.unice.fr/azkar/ontology); it comprises 9 main classes and 26
properties. Its main classes represent museum objects, scenes, points of interest,
maps, trails, and primary target audiences (primary or high school). Its main
properties make it possible to relate objects, points of interest and external media (which
may differ depending on the target audience) to scenes, scenes to trails, and trails to
maps, and to describe these instances.</p>
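      <p>As an illustration of how these classes and properties articulate, the
following SPARQL sketch retrieves the scenes of the trails designed for a
primary-school audience, the objects that compose them, and their media. All the
amo: names used here (amo:Trail, amo:targetAudience, amo:hasScene,
amo:isPartOfScene, amo:hasMedia, amo:PrimarySchool) are illustrative guesses at
the vocabulary, not its actual terms.</p>
      <preformat>
PREFIX amo: &lt;http://mainline.i3s.unice.fr/azkar/ontology#&gt;

SELECT ?trail ?scene ?object ?media
WHERE {
  ?trail a amo:Trail ;
         amo:targetAudience amo:PrimarySchool ;  # assumed audience instance
         amo:hasScene ?scene .
  ?object amo:isPartOfScene ?scene .             # objects composing the scene
  OPTIONAL { ?scene amo:hasMedia ?media }        # audience-specific media
}
      </preformat>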
      <p>Based on AMO, we created an RDF dataset from the Flora relational database
(http://www.everteam.com/fr/cp-certification-flora-musee/) used by many French museums, which contains detailed descriptions of every
single object in the museum catalog. The Azkar RDF dataset today comprises 421
instances and 2401 triples describing two scenes (a set of fully equipped French
and German soldiers, called "Marne 14", and two trenches).</p>
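      <p>The following SPARQL Update sketch gives the flavor of such instance data
for the "Marne 14" scene; the ex: instance namespace and the amo: class and
property names are hypothetical placeholders, not the dataset's actual terms.</p>
      <preformat>
PREFIX amo:  &lt;http://mainline.i3s.unice.fr/azkar/ontology#&gt;
PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt;
PREFIX ex:   &lt;http://example.org/azkar/&gt;  # hypothetical instance namespace

INSERT DATA {
  ex:marne14 a amo:Scene ;                     # assumed class name
      rdfs:label "Marne 14"@en .
  ex:frenchSoldier a amo:MuseumObject ;        # assumed class name
      amo:isPartOfScene ex:marne14 ;           # assumed property
      rdfs:comment "Fully equipped French infantryman of 1914"@en .
}
      </preformat>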
      <p>Currently, we have implemented 32 SPARQL queries; the core ones perform tasks
such as "given the x and y position of the robot and a radius, give me the
description of the current scene as well as related multimedia resources", or "please
send the accurate description of all the soldiers in the Marne 14 scene with
details and high-resolution pictures (URLs) of their equipment". In our experimentation,
we remotely controlled the robot from our offices, 800 kilometers away from the
museum. The commands, sensor data, and audio and video streams are exchanged
using WebRTC, through a peer-to-peer connection with the robot. When the robot is
near a given scene, different observation points appear on the map, as well as
linked resources (multimedia descriptions of the scene: local resources as well as
resources from external data sources such as DBpedia.fr), as shown in Figure
2. SPARQL queries are triggered depending on the location of the robot, the time,
the currently observed scene, the pilot's interactions, and the way the tour has been
designed.</p>
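      <p>A minimal sketch of the first core query could look as follows, again with
assumed amo: coordinate and media properties. The robot position and the radius
are bound with VALUES for the example (the application binds them from the
robot's localization data), and scenes are kept when their squared Euclidean
distance to the robot is within the squared radius.</p>
      <preformat>
PREFIX amo:  &lt;http://mainline.i3s.unice.fr/azkar/ontology#&gt;
PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt;

SELECT ?scene ?desc ?media
WHERE {
  VALUES (?rx ?ry ?radius) { (12.5 7.3 2.0) }  # example robot position and radius
  ?scene a amo:Scene ;
         amo:x ?sx ;                            # assumed coordinate properties
         amo:y ?sy ;
         rdfs:comment ?desc .
  OPTIONAL { ?scene amo:hasMedia ?media }
  # keep scenes within the given radius of the robot
  FILTER ( (?sx - ?rx) * (?sx - ?rx) + (?sy - ?ry) * (?sy - ?ry)
           &lt;= ?radius * ?radius )
}
      </preformat>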
      <sec id="sec-3-1">
        <title>4 http://mainline.i3s.unice.fr/azkar/ontology</title>
      </sec>
      <sec id="sec-3-2">
        <title>5 http://www.everteam.com/fr/cp-certi cation- ora-musee/</title>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Proposed demonstration</title>
      <p>The demo will be as follows: from the demo session location, attendees will
control in real time a robot located in France (either in the Museum or in a
mock museum area in our lab, as time zones may not be compatible). The remote
pilot will see real-time audio and video streams and how semantic descriptions of the
scenes and related multimedia resources augment the experience.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name><given-names>K. O.</given-names> <surname>Arras</surname></string-name>
          ,
          <string-name><given-names>W.</given-names> <surname>Burgard</surname></string-name>
          , et al.
          <article-title>Robots in exhibitions</article-title>
          . In
          <source>Proceedings of IROS 2002, IEEE/RSJ International Conference on Intelligent Robots and Systems</source>
          , EPFL Lausanne, Switzerland,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name><given-names>I.</given-names> <surname>Benouaret</surname></string-name>
          .
          <article-title>Un système de recommandation sensible au contexte pour la visite de musée</article-title>
          . In
          <source>CORIA</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name><given-names>W.</given-names> <surname>Burgard</surname></string-name>
          ,
          <string-name><given-names>A. B.</given-names> <surname>Cremers</surname></string-name>
          ,
          <string-name><given-names>D.</given-names> <surname>Fox</surname></string-name>
          ,
          <string-name><given-names>D.</given-names> <surname>Hähnel</surname></string-name>
          ,
          <string-name><given-names>G.</given-names> <surname>Lakemeyer</surname></string-name>
          ,
          <string-name><given-names>D.</given-names> <surname>Schulz</surname></string-name>
          ,
          <string-name><given-names>W.</given-names> <surname>Steiner</surname></string-name>
          , and
          <string-name><given-names>S.</given-names> <surname>Thrun</surname></string-name>
          .
          <article-title>Experiences with an interactive museum tour-guide robot</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>114</volume>
          (
          <issue>1</issue>
          ),
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name><given-names>P. Y.</given-names> <surname>Gicquel</surname></string-name>
          and
          <string-name><given-names>D.</given-names> <surname>Lenne</surname></string-name>
          .
          <article-title>Proximités sémantiques et contextuelles pour l'apprentissage informel : application à la visite de musée</article-title>
          . In
          <source>EIAH&amp;IA 2013</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name><given-names>R.</given-names> <surname>Oppermann</surname></string-name>
          and
          <string-name><given-names>M.</given-names> <surname>Specht</surname></string-name>
          .
          <article-title>A context-sensitive nomadic information system as an exhibition guide</article-title>
          . In
          <source>Handheld and Ubiquitous Computing, 2nd International Symposium</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name><given-names>C. H.</given-names> <surname>Park</surname></string-name>
          ,
          <string-name><given-names>E. S.</given-names> <surname>Ryu</surname></string-name>
          , and
          <string-name><given-names>A. M.</given-names> <surname>Howard</surname></string-name>
          .
          <article-title>Telerobotic haptic exploration in art galleries and museums for individuals with visual impairments</article-title>
          .
          <source>IEEE Transactions on Haptics</source>
          ,
          <volume>8</volume>
          (
          <issue>3</issue>
          ),
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name>
          ,
          <string-name><given-names>N.</given-names> <surname>Stash</surname></string-name>
          ,
          <string-name><given-names>L.</given-names> <surname>Aroyo</surname></string-name>
          ,
          <string-name><given-names>L.</given-names> <surname>Hollink</surname></string-name>
          , and
          <string-name><given-names>G.</given-names> <surname>Schreiber</surname></string-name>
          .
          <article-title>Using semantic relations for content-based recommender systems in cultural heritage</article-title>
          . In
          <source>Int. Conf. on Ontology Patterns</source>
          , volume
          <volume>516</volume>
          of CEUR,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>