<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a mixed-reality tool for collaborative mind-maps</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Philippe Giraudeau</string-name>
          <email>philippe.giraudeau@inria.fr</email>
          <aff>Inria, France</aff>
        </contrib>
      </contrib-group>
      <abstract>
        <p>Traditional user interfaces such as WIMP (Windows, Icons, Menus, Pointer) are widespread in schools. They have proven useful in many scenarios, yet they are ill-suited to collaborative learning. This work aims to understand, design and prototype reality-based interactions that enable collaborative learning. To do so, we first have to understand the key components, and in particular the cognitive processes, underlying such interfaces, and how users and artefacts interact with each other. We will then implement a tangible and augmented interface supporting learning in a school context, building on conceptual frameworks and the literature. This talk will present our ongoing work in this direction.</p>
      </abstract>
      <kwd-group>
        <kwd>Spatial Augmented Reality</kwd>
        <kwd>Tangible Interaction</kwd>
        <kwd>Collaborative Learning</kwd>
        <kwd>Distributed Cognition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <p>
        Beyond WIMP interfaces, reality-based interaction (RbI)
“increases the realism of interface objects and allow users
to interact even more directly with them using actions that
correspond to daily practices within the non-digital world.”
[
        <xref ref-type="bibr" rid="ref2">8</xref>
        ]. It also affords hands-on activities and gestures, which
enhance the user’s learning abilities [6]. Good examples of
RbI are Tangible User Interfaces (TUI, physical handles for
manipulating virtual information) [
        <xref ref-type="bibr" rid="ref7">13</xref>
        ] and spatial
augmented reality (SAR, complementing the real world
with digital content) [
        <xref ref-type="bibr" rid="ref5">11</xref>
        ]. Both SAR and TUI are known to
support collaborative activities like learning and problem
solving [4].
      </p>
      <p>Our goal is to move towards interfaces that support
collaborative learning. In this work, we are interested in
exploring new forms of interaction such as reality-based
interaction. To this end, we first need to understand, from a
cognitive science point of view, which determinants enable
interfaces to support effective collaborative learning
activities. Then, we will design interactions following the
guidelines from previous work and frameworks in cognitive
science. Finally, we will implement these interactions and
propose new reality-based interfaces.</p>
      <p>To foster collaborative learning, we chose to implement the
mind-map technique [1], which allows users to spatialise,
categorise and sort items in order to create a visually
organised graph of information. Mind-maps have a variety
of uses, such as studying, planning, critical thinking and
problem solving [2].</p>
      <p>This work is part of a broader project called e-Tac, funded
by the French ministry of education and research. e-Tac aims
to explore how tangible and augmented interactions can
benefit collaborative learning in school contexts, from
elementary school to middle school.</p>
      <p>
        <bold>UNDERSTANDING</bold>
      </p>
      <p>
So far, the literature has not fully identified the cognitive
processes underlying tangible and augmented interactions
[
        <xref ref-type="bibr" rid="ref7">13</xref>
        ], such as learning (alone or in collaboration). Some
studies [
        <xref ref-type="bibr" rid="ref6 ref8">12, 14</xref>
        ] stress the importance of multi-sensory
perception and movement in enhancing performance with
RbI. To design interactions that support learning, we first
have to understand which cognitive processes are involved
when manipulating reality-based interfaces.
This research project aims to contribute to a better
understanding of cognition in the field of HCI. Cognitive
science could help HCI researchers explore which cognitive
processes are recruited when tangible and augmented
interfaces are used in a learning task. To this end, we will
explore frameworks such as the representation and division
of space around a subject (i.e., peri-personal and
extra-personal space) [3], and study how these different
spaces are used to spatialise information on a tabletop
during a learning task. Moreover, our interface will support
collaborative interactions, which we also have to consider
while designing the system. Cognitive science provides
frameworks for studying these kinds of complex relations,
such as the distributed cognition framework [7], “which
considers a collaborative activity as taking place across
individuals, artifacts and internal or external
representations, as one cognitive system” [
        <xref ref-type="bibr" rid="ref10">16</xref>
        ].
      </p>
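      <p>To make the peri-/extra-personal distinction concrete, one simple
(hypothetical) operationalisation is to classify items on the tabletop by
their distance from the seated user, with arm reach as the boundary; the
threshold value below is an assumption for illustration, not a measured
one:</p>
      <preformat>
```python
# Hypothetical operationalisation of peri- vs extra-personal space [3]:
# classify tabletop items by their distance from the user's seat position.
import math

REACH_CM = 60.0  # assumed arm-reach threshold; a real study would measure it

def classify(item_pos, user_pos, reach=REACH_CM):
    """Return 'peri-personal' if the item is within reach, else 'extra-personal'."""
    dist = math.dist(item_pos, user_pos)
    return "peri-personal" if dist <= reach else "extra-personal"

# Two items on a shared tabletop, user seated at the origin edge.
user = (0.0, 0.0)
near_card = (20.0, 30.0)   # ~36 cm away: within reach
far_card = (80.0, 60.0)    # ~100 cm away: across the table
```
      </preformat>
      <p>Logging such classifications over a session would show how learners
distribute information between the space they act in directly and the
shared space across the table.</p>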
      <p>
        <bold>DESIGNING</bold>
      </p>
      <p>
        Using cognitive frameworks allows us to design a set of
interactions built to fit together at both the individual and
the collective scale. Using tabletop interactions, we can
create a common space shared by the users and the interface.
This horizontal surface will allow us to design interactions
with traditional tools like pen and paper, and hands-on
activities above the surface can also be supported. Like Papart [
        <xref ref-type="bibr" rid="ref3">9</xref>
        ]
and TinkerLamp [5], our system provides projections
onto the surface, supporting the manipulation of digital
objects and media.
      </p>
      <p>
        <bold>PROTOTYPING</bold>
      </p>
      <p>Once we better understand the processes involved, we will
be able to iteratively design and prototype relevant
interactions. For this, the hardware components of our
interactive system should enable the tracking of objects and
fiducial markers on a tabletop, to support the spatialisation
of information. To do so, we will use a video-projector to
display content on the surface and optical tracking systems
to track the objects.</p>
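      <p>One core piece of such a setup is mapping positions detected by the
tracking camera into projector pixels, so that projected content lands on
the tracked objects. Assuming a planar tabletop, this mapping is a 3&#215;3
homography; the sketch below (our illustration, with a made-up
scale-and-translate matrix rather than a calibrated one) applies such a
homography in plain Python:</p>
      <preformat>
```python
# Camera-to-projector mapping on a planar tabletop via a 3x3 homography.
# The matrix below is a made-up scale-and-translate example; in practice it
# would come from calibration (e.g. projecting and detecting known markers).

def apply_homography(H, point):
    """Map an (x, y) point through homography H (3x3, row-major nested lists)."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

# Example: camera pixels -> projector pixels, pure scale plus offset.
H = [[2.0, 0.0, 100.0],
     [0.0, 2.0, 50.0],
     [0.0, 0.0, 1.0]]

marker_cam = (320.0, 240.0)                    # marker centre seen by the camera
marker_proj = apply_homography(H, marker_cam)  # where the projector should draw
```
      </preformat>
      <p>With a calibrated homography, the same function places digital
content directly on top of each tracked tangible object.</p>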
      <p>This talk will present our ongoing work towards an
understanding of the cognitive science frameworks relevant
to our project on collaborative learning with RbI, such as
distributed cognition, embodied cognition and spatial
representation. It will also allow us to present the interface
currently being developed, as well as the results of a pilot
study on the interface’s support for learning, which will be
conducted before the talk.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          6.
          <string-name>
            <surname>Goldin-Meadow</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>How gesture promotes learning throughout childhood</article-title>
          .
          <source>Child development perspectives 3</source>
          ,
          <issue>2</issue>
          (
          <year>2009</year>
          ),
          <fpage>106</fpage>
          -
          <lpage>111</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          7.
          <string-name>
            <surname>Hollan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hutchins</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kirsh</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>Distributed cognition: toward a new foundation for human-computer interaction research</article-title>
          .
          <source>ACM Transactions on Computer-Human Interaction (TOCHI) 7</source>
          ,
          <issue>2</issue>
          (
          <year>2000</year>
          ),
          <fpage>174</fpage>
          -
          <lpage>196</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          8.
          <string-name>
            <surname>Jacob</surname>
            ,
            <given-names>R. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Girouard</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hirshfield</surname>
            ,
            <given-names>L. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horn</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shaer</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Solovey</surname>
            ,
            <given-names>E. T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Zigelbaum</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>Reality-based interaction: a framework for post-wimp interfaces</article-title>
          .
          <source>In Proceedings of the SIGCHI conference on Human factors in computing systems</source>
          ,
          <source>ACM</source>
          (
          <year>2008</year>
          ),
          <fpage>201</fpage>
          -
          <lpage>210</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          9.
          <string-name>
            <surname>Laviole</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hachet</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Spatial augmented reality for physical drawing</article-title>
          .
          <source>In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology</source>
          ,
          <source>ACM</source>
          (
          <year>2012</year>
          ),
          <fpage>9</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          10.
          <string-name>
            <surname>Marshall</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <article-title>Do tangible interfaces enhance learning</article-title>
          ?
          <source>In Proceedings of the 1st international conference on Tangible and embedded interaction</source>
          ,
          <source>ACM</source>
          (
          <year>2007</year>
          ),
          <fpage>163</fpage>
          -
          <lpage>170</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          11.
          <string-name>
            <surname>Raskar</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Low</surname>
            ,
            <given-names>K.-L.</given-names>
          </string-name>
          <article-title>Interacting with spatially augmented reality</article-title>
          .
          <source>In Proceedings of the 1st international conference on Computer graphics, virtual reality and visualisation</source>
          ,
          <source>ACM</source>
          (
          <year>2001</year>
          ),
          <fpage>101</fpage>
          -
          <lpage>108</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          12.
          <string-name>
            <surname>Schneider</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharma</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cuendet</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zufferey</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dillenbourg</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Pea</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>Using mobile eye-trackers to unpack the perceptual benefits of a tangible user interface for collaborative learning</article-title>
          .
          <source>ACM Transactions on Computer-Human Interaction (TOCHI) 23</source>
          ,
          <issue>6</issue>
          (
          <year>2016</year>
          ),
          <fpage>39</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          13.
          <string-name>
            <surname>Shaer</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hornecker</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>Tangible user interfaces: past, present, and future directions</article-title>
          .
          <source>Foundations and Trends in Human-Computer Interaction 3</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>2</lpage>
          (
          <year>2010</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>137</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          14.
          <string-name>
            <surname>Skulmowski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pradel</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kühnert</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brunnett</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Rey</surname>
            ,
            <given-names>G. D.</given-names>
          </string-name>
          <article-title>Embodied learning using a tangible user interface: The effects of haptic perception and selective pointing on a spatial learning task</article-title>
          .
          <source>Computers &amp; Education</source>
          <volume>92</volume>
          (
          <year>2016</year>
          ),
          <fpage>64</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          15.
          <string-name>
            <surname>Stanton</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Neale</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <article-title>The effects of multiple mice on children's talk and interaction</article-title>
          .
          <source>Journal of Computer Assisted Learning 19</source>
          ,
          <issue>2</issue>
          (
          <year>2003</year>
          ),
          <fpage>229</fpage>
          -
          <lpage>238</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          16.
          <string-name>
            <surname>Vasiliou</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ioannou</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Zaphiris</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <article-title>Understanding collaborative learning activities in an information ecology: A distributed cognition account</article-title>
          .
          <source>Computers in Human Behavior</source>
          <volume>41</volume>
          (
          <year>2014</year>
          ),
          <fpage>544</fpage>
          -
          <lpage>553</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>