<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The CoWriter robot: improving attention in a learning-by-teaching setup</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pierre Le Denmat</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thomas Gargot</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Chetouani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dominique Archambault</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Cohen</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Salvatore M. Anzalone</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institut des Systèmes Intelligents et de Robotique, Sorbonne Université</institution>
          ,
          <addr-line>Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Laboratoire de Cognition Humaine et Artificielle, Université Paris 8</institution>
          ,
          <addr-line>Saint Denis</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Service de Psychiatrie de l'Enfant et de l'Adolescent, Hôpital Pitié-Salpêtrière</institution>
          ,
          <addr-line>Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we compare three learning-by-teaching scenarios in which a robot, a virtual agent, or a voice guides users and provides feedback aimed at improving their handwriting abilities. The presented system is part of an effort focused on the assessment and remediation of dysgraphia and dyspraxia. Results show the performance and the limits of the robot in inducing co-presence and attention.</p>
      </abstract>
      <kwd-group>
        <kwd>Social robot</kwd>
        <kwd>Co-Presence orders</kwd>
        <kwd>Virtual Agents</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Developmental coordination disorder (DCD, or dyspraxia) is a neurological
disorder that impairs the acquisition and execution of coordinated motor skills [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
It is usually associated with learning disorders and in particular with
dysgraphia, a deficiency in the ability to write that affects the legibility
and speed of handwriting. Nearly 6% of children between 5 and 11 years of age in France are
diagnosed with dysgraphia. Researchers highlight the fundamental importance of
early intervention on handwriting skills, as soon as possible in children's
educational path. With an onset in the early developmental period, dysgraphia
is assessed through a standard test, the Concise Evaluation Scale
for Children's Handwriting (BHK) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This assessment can be difficult due to its high cost
and subjectivity. However, it has recently been shown [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] how information and
communication technology can help doctors tackle such issues by rapidly
performing semi-automated, detailed diagnoses.
      </p>
      <p>
        In this context, social robotics can help with both the assessment and the
remediation of children with dyspraxia and, more generally, with handwriting
difficulties. A team from EPFL [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] recently proposed a learning-by-teaching scenario in
which children teach handwriting to a robot through a tablet (Figure ??). The
robot is able to fake its handwriting skills, proposing on the tablet purposefully
deformed letters that take into account the children's performance and errors. The robot's
handwriting skills can improve according to the quality of the children's examples.
The idea is to create an empathic link between the child and their 'protégé', the
robot, stimulating motivation and commitment. Children act as mentors
who help their protégé robot in the handwriting activities, getting
practice themselves without even noticing.
      </p>
      <p>
        The main elements of this learning-by-teaching scenario are the robot and
the tablet. The latter is used to collect handwriting samples from users, as
well as to present handwriting feedback. It is possible to speculate about the
importance of the presence of the robot and its ability to stimulate
concentration and engagement. The experiment reported in this paper explores the role
of such an agent in this scenario, comparing users' performance in three different
conditions: handwriting sessions with the CoWriter robot; handwriting sessions
with a virtual agent; and handwriting sessions with the tablet only, guided by a voice.
The hypothesis is that the social robot elicits higher attention
than the virtual agent or the vocal guide.
In this experiment we compare three CoWriter-based scenarios: in the
first, following the original implementation [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], the user interacts with a
Nao robot from SoftBank Robotics (CR); in the second, the user
interacts with a virtual agent, a simulated Nao (CA); in the third, the
user is guided by a voice only (CV).
      </p>
      <p>A total of 12 adults (7 males, 5 females) between 12 and 31 years old
(mean=23.9; std=2.75) participated in the experiment. Each participant
interacted in the three conditions in randomized order. For each condition, 5 words
were chosen. For each word, in turn, the agent or the voice proposed its
handwriting sample through the tablet; then the human user proposed their correction
(Figure 2). The process was repeated for each word until the user positively judged
the last handwriting sample.</p>
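The procedure above (randomized condition orders, then a propose-correct loop per word) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the helper names, word lists, and approval check are hypothetical placeholders.

```python
import random

CONDITIONS = ["CR", "CA", "CV"]  # robot, virtual agent, voice

def assign_orders(n_participants, seed=0):
    """Give each participant a randomized order of the three conditions."""
    rng = random.Random(seed)
    orders = []
    for _ in range(n_participants):
        order = CONDITIONS[:]
        rng.shuffle(order)
        orders.append(order)
    return orders

def run_condition(words, propose_sample, get_user_judgement):
    """One condition: for each word, the agent proposes a handwriting
    sample and the user corrects it, until the user approves the sample."""
    for word in words:
        approved = False
        while not approved:
            sample = propose_sample(word)          # agent writes on the tablet
            approved = get_user_judgement(sample)  # user corrects or accepts
```

In the experiment, `propose_sample` corresponds to the agent drawing a purposefully deformed word on the tablet, and `get_user_judgement` to the user's correction or acceptance.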
      <p>
        At the end of the experiment, a questionnaire, the "Networked Minds Social
Presence Inventory", was administered [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The purpose of this questionnaire is
to measure the perceived social presence during an interaction, whether
face-to-face between two people, over telecommunication, or with a non-human
agent. While the original questionnaire is composed of six dimensions, this work
focuses on only four facets, setting aside affective perception as it is not the focus
of this experiment [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]:
- Co-Presence: the user's awareness of the presence of the interaction
partner;
- Attentional allocation: the perceived attention received from the partner as
well as the attention allocated towards the partner;
- Perceived message understanding: the bidirectional communication
understanding between the partners;
- Perceived behavioral interdependence: the perceived mutual behavioral
connection between the partners.
      </p>
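Since each dimension is scored from six 1-7 Likert items, scoring reduces to averaging the items of a dimension. A minimal sketch follows; reverse-coded items are common in such inventories but the item-level coding here is an assumption, not taken from the questionnaire itself.

```python
def dimension_score(responses, reverse_items=()):
    """Score one social-presence dimension as the mean of its six
    1-7 Likert items; item indices in reverse_items are reverse-coded
    (a response r becomes 8 - r)."""
    assert len(responses) == 6, "each dimension has exactly six items"
    total = 0.0
    for i, r in enumerate(responses):
        assert 1 <= r <= 7, "Likert responses range from 1 to 7"
        total += (8 - r) if i in reverse_items else r
    return total / 6
```

A participant's questionnaire then yields one such mean per dimension (Co-Presence, Attentional allocation, etc.), which is what the ANOVA below compares across conditions.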
      <p>Each dimension is composed of 6 questions answered on a 1-7 Likert scale.
A one-way within-subjects ANOVA comparing the three conditions (CR, CA,
CV) was performed on the questionnaire data, revealing an effect on the
Co-Presence dimension (F(2,22) = 8.69; p = 0.002). As shown in Figure 3 (left), post-hoc
tests highlight a significantly higher score in the robot condition compared to the
voice condition (CR &gt; CV, p = 0.030) or the virtual agent condition (CR &gt; CA,
p = 0.039), but no difference between the virtual agent and the voice (p
= 1.000).</p>
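The F(2,22) statistics reported here follow from a one-way repeated-measures ANOVA over 12 participants and 3 conditions: with k = 3 conditions and n = 12 subjects, the degrees of freedom are (k-1) = 2 and (k-1)(n-1) = 22. A minimal sketch of that computation (not the authors' analysis code) is:

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA.
    data: (n_subjects, k_conditions) array of scores, one row per subject.
    Returns (F, df_conditions, df_error)."""
    n, k = data.shape
    grand = data.mean()
    # Variance explained by conditions (the effect of interest)
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    # Variance explained by stable between-subject differences,
    # which a repeated-measures design removes from the error term
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df_cond) / (ss_error / df_error)
    return F, df_cond, df_error
```

For a 12-subject, 3-condition design this yields exactly the (2, 22) degrees of freedom reported for each dimension.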
      <p>The ANOVA also revealed an effect in the Attentional
allocation dimension (F(2,22) = 6.50; p = 0.006). As shown in Figure 3 (right), a significantly higher
score is found in the robot condition compared to the voice condition (CR &gt; CV,
p = 0.027), but no difference is found between CV and CA (p = 0.595) or between
CR and CA (p = 0.085).</p>
      <p>The ANOVA revealed no effect in the Perceived message
understanding dimension (F(2,22) = 1.16; p = 0.33) or in the Perceived
behavioral interdependence dimension (F(2,22) = 1.56; p = 0.23).
Results on the Co-Presence dimension highlight that participants feel the
physical presence of the robot more than that of the virtual agent. It is surprising, however, to
note the absence of a difference between the virtual agent and the voice. Results on
Attentional allocation highlight how the robot is able to elicit more compliance
than the vocal guidance. It should be noted in this case that no difference is found
between the virtual agent and the robot. Such results may also suggest that
the agent captures too much of the user's attention, acting as a
distraction from the task. The absence of a difference in the two other dimensions,
Perceived message understanding and Perceived behavioral interdependence, is
not surprising given the particular task chosen. In particular, the presence of a
physical robot or a virtual agent does not seem to impact the comprehension
of the exchange. At the same time, it is possible to hypothesize that the
interpersonal exchange is too simple to be affected by the presence of the artificial
agents.</p>
      <p>
        It should be noted that the small number of participants and their age do not
permit a generalization of the obtained results to children. Also, the questionnaire
data should be reinforced with behavioral measures (head movements, body
movements, ...) that can be interpreted as the engagement state of the user
during the shared task [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Anzalone</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boucenna</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ivaldi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chetouani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Evaluating the engagement with social robots</article-title>
          .
          <source>International Journal of Social Robotics</source>
          <volume>7</volume>
          (
          <issue>4</issue>
          ),
          <fpage>465</fpage>
          -
          <lpage>478</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Asselborn</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gargot</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kidzinski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johal</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cohen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jolly</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dillenbourg</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Automated human-level diagnosis of dysgraphia using a consumer tablet</article-title>
          .
          <source>npj Digital Medicine</source>
          <volume>1</volume>
          (
          <issue>1</issue>
          ),
          <fpage>42</fpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>American Psychiatric Association</surname>
          </string-name>
          :
          <article-title>Diagnostic and statistical manual of mental disorders (DSM-5®)</article-title>
          .
          <source>American Psychiatric Pub</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Biocca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harms</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gregg</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>The networked minds measure of social presence: Pilot test of the factor structure and concurrent validity</article-title>
          . In: 4th annual international workshop on presence, Philadelphia, PA. pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Hamstra-Bletz</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>DeBie</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Den Brinker</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Concise evaluation scale for children's handwriting</article-title>
          .
          <source>Lisse: Swets</source>
          <volume>1</volume>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Harms</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Biocca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Internal consistency and reliability of the networked minds measure of social presence</article-title>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Hood</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lemaignan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dillenbourg</surname>
            ,
            <given-names>P.:</given-names>
          </string-name>
          <article-title>The cowriter project: Teaching a robot how to write</article-title>
          .
          <source>In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts</source>
          . pp.
          <fpage>269</fpage>
          -
          <lpage>269</lpage>
          .
          <publisher-name>ACM</publisher-name>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>