<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Rostock, Chair of Software Engineering</institution>
          ,
          <addr-line>Albert-Einstein Straße 22, 18059 Rostock</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>189</volume>
      <fpage>76</fpage>
      <lpage>85</lpage>
      <abstract>
        <p>In recent years, social robots have been used in rehabilitative therapies in various applications. One reason to research such therapies with social robots is the worldwide lack of medical staff to provide sufficient therapy to affected patients. Typically, such applications were not created to include long-term clinical trials with affected patients in order to achieve a clinically proven improvement of their health status. Here we present a research project in which a social humanoid robot guides stroke patients with different kinds of impairments through rehabilitation exercises. We describe the challenges that arose during the interdisciplinary creation of these tasks, with the aforementioned goal of letting the patients exercise at their performance limit. We first highlight the research problem and the opportunity for a social robot in this domain. Then we present the related work and list our research questions for this project. Finally, we discuss our results, research plan and future experiments.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Our goal is to examine whether patients can train as successfully with the robot system as with trained therapists. Additionally, we want to discover which factors of such a system are important and what could be improved in a future (clinically or commercially) usable SAR stroke-therapy setup.</p>
    </sec>
    <sec id="sec-2">
      <title>1.2. Research Questions</title>
      <p>The research questions of this doctoral project are part of the E-BRAiN project, which aims to discover to what extent social robots can be used to motivate stroke patients in neurorehabilitation therapies. The research questions of this doctoral thesis are:</p>
      <p>• How should the system be designed to perform the four diverse therapies from one system?
• How is such a social robot therapy system accepted?
• What are the system features patients desire in such a system?
• By what actions of the system can the patient better stay motivated and engaged?
• How can the interaction between the patient and the system be modeled?</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related work</title>
      <p>
        Feingold-Polak et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] present the system closest to our project: a novel gamified system for post-stroke patients. Over a time span of 5-7 weeks, they conduct a total of 15 therapy sessions, and they also try to uncover the underlying motivational aspects of this kind of therapy. They compare the usability and general performance of the robot system with a version of their tasks in which the robot is not included and the patients instead interact with a computer monitor. The offered tasks are games with different objects such as cups and playing cards, partly gamified into a small scenario such as an escape game. The main difference between our work and theirs is that we use clinical post-stroke therapies and compare the results between a human and the robot, instead of a robot and a monitor display.
      </p>
      <p>
        The following approaches to robot therapy were either not targeted at long-term therapy use or not developed for stroke patients. Gonzalez et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] created NAOTherapist, a system for children with upper-limb impairments that require long-term therapies. Their system aims to be autonomous, using only a 3D Kinect camera, and plans the sessions of the following days. During a session, the robot enters a loop in which it demonstrates an exercise to the patient and engages him or her to mimic the pose.
      </p>
      <p>
        Another application with an autonomous NAO robot was developed by Görer et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The robot instructs older people in fitness exercises using a fixed dialog process similar to a finite state machine. While the robot demonstrates the exercises, the patient is observed, and feedback is provided if needed.
      </p>
      <p>
        Shao et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] presented a system for arm exercises with a social robot. The system detects the performance and the emotion of the user with cameras. The robot explains the exercises and lets the user perform them.
      </p>
      <p>
        Regarding supporting technologies that model the human-robot interaction and possibly make it easier to work on, Van den Bergh et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] present an approach with their “Hasselt UIMS” tool. With this tool, it should be easier to work in an interdisciplinary way and to create multimodal human-robot applications without needing much technical knowledge to operate the robot. They use multiple levels of abstraction with the ability to generate executable programs, within which they use their domain-specific language and auto-generated finite state machines to show a graphical overview of the possible interaction. As a scenario, they demonstrate their tool on a task in which a robot gives components to a worker on request. Later [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], the tool was extended with a DSL called “DICE-R”, built upon their previous domain-specific language, which works on the concepts of composite events, context information and event-condition-actions.
      </p>
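      <p>As a minimal illustration of the event-condition-action concept, consider the following sketch. The event names, context fields, and actions are invented for illustration; this is not the actual DICE-R syntax.</p>

```python
# Event-condition-action sketch: a composite event fires an action only
# when its condition holds in the current context. All identifiers below
# are hypothetical examples, not the DICE-R implementation.

CONTEXT = {"worker_busy": False, "parts_left": 4}

ECA_RULES = [
    # (event, condition, action)
    ("part_requested",
     lambda ctx: not ctx["worker_busy"] and ctx["parts_left"] > 0,
     "robot.hand_over_part"),
    ("part_requested",
     lambda ctx: ctx["parts_left"] == 0,
     "robot.report_empty_tray"),
]

def dispatch(event, ctx):
    """Return the actions triggered by an event under the current context."""
    return [action for ev, cond, action in ECA_RULES
            if ev == event and cond(ctx)]

actions = dispatch("part_requested", CONTEXT)
```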
      <p>
        There is not much work on specific applications for social robots in this domain. However, a relevant approach has been presented by Sutherland et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], who present a text-based domain-specific language, “RoboLang”, to ease development for social robots. It integrates specific actions for social robots such as “say” or “play” (a media file). The possible actions of this tool overlap to a large degree with the actions we had to implement in our therapies. In our group [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], we are also developing tools for interaction modelling and automatic generation.
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. Ideas, proposed approach, results</title>
      <p>To answer the research questions from section 1.2 and to realize an autonomous SAR therapy, we have been developing a robot therapy setup for our experiments. We have started to evaluate the system in a study with participants in a rehabilitation therapy setting, with validation by medically trained personnel. To evaluate such a SAR system, we first needed to translate “human therapist-patient” therapies into program “scripts” that can then be executed by the system. The specifications for these were gathered in cognitive walkthroughs together with stroke rehabilitation experts.</p>
      <p>During the therapies with the robot setup, the patients’ performance is observed and recorded. Afterwards, the patients evaluate the implementation of our system in questionnaires and qualitative interviews.</p>
      <p>Here we present the stroke therapies that we have been implementing to work with the robot, the robot system itself, and the results achieved so far.</p>
    </sec>
    <sec id="sec-5">
      <title>3.1. The implemented therapies</title>
      <p>We have been implementing four groups of therapies, as seen in Figure 1. Patients are initially classified depending on their health state and chance of recovery. Arm ability training (AAT) [10] targets relatively minor arm movement problems, mirror therapy (MT) targets predominantly one-sided arm paralysis, and arm basis training (ABT) [11] is used for severe arm paralysis. In addition, neurovisual therapy (NVT) is offered for patients with a neglect handicap. A neglect typically restricts the patient’s field of vision, and patients are not aware of this lack of sight.</p>
      <p>While ABT, MT and AAT are mostly computer scripts that let the robot run through a certain program of text and speech commands, during the NVT the patient additionally interacts with a touch monitor the whole time.</p>
      <p>The NVT is a newly developed therapy, adapted from and similar to [12], and has yet to be tested for general acceptance and clinical effectiveness. Owing to the nature of its tasks, it has built-in features to react to patient speech and to precise touches on photos.</p>
      <p>(Figure 1 caption, partially recovered: “… healthy arm. Bottom left corner: examples of the movements of the ABT therapy. Bottom right corner: example of the visual exploration task of the neurovisual therapy (NVT).”)</p>
      <p>(Figure 2: system architecture with the components Robot Control, Pepper, MQTT server, and Management of Therapies.)</p>
    </sec>
    <sec id="sec-6">
      <title>3.2. The robot system</title>
      <p>We use the robot Pepper [13] for our experiments, as seen in Figure 3. In addition, several cameras, microphones, a confirmation touch panel, a PC, and a database with information about the patient and the helper are connected. The computer unit directs the dynamic dialogue structure and the actual content of the robot feedback. The system receives explicit confirmation from the physical touch panel and, in some parts, speech input from the patient. The components communicate via the MQTT protocol [14].</p>
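      <p>To illustrate, a message exchanged between the components over MQTT might look as follows. This is only a sketch: the topic layout, field names, and use of JSON are assumptions for illustration, not the actual E-BRAiN message format.</p>

```python
# Sketch of the kind of JSON payload the components might exchange over
# MQTT topics. Topic and field names are hypothetical.
import json

def make_message(topic, device, action, **params):
    """Build a (topic, payload) pair for publishing."""
    payload = json.dumps({"device": device, "action": action, "params": params})
    return topic, payload

def interpret(payload):
    """What an interpreter on each device would do with a received payload."""
    msg = json.loads(payload)
    return f'{msg["device"]}: {msg["action"]}({msg["params"]})'

topic, payload = make_message(
    "therapy/session1/robot", device="pepper", action="say",
    text="Please lift your arm.")
```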
      <p>On the software side, for the “Management of Therapies” we built a website to add patients, set up and administer therapy sessions, and control a running session. Furthermore, since we use Android tablets, we had to develop a suitable app with a reasonable layout for the mostly older adults participating as patients. Both the tablet and the robot display images, videos and instruction texts on their screens, and patients can confirm certain actions by touching elements on the screen.</p>
      <p>The dialogue is built on a finite-state machine [15], which sends out messages to all other devices, as shown in Figure 2. Inside each device, an interpreter reads these messages and displays their contents.</p>
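      <p>A minimal sketch of such a dialogue finite-state machine is shown below. The state names, events, and message contents are invented for illustration; the outbox stands in for the messages that would be published to the devices.</p>

```python
# Dialogue FSM sketch: transitions advance the session, and each new state
# broadcasts a display message to all devices. All names are hypothetical,
# not the actual E-BRAiN implementation.
import json

# Transition table: (state, event) -> next state
TRANSITIONS = {
    ("greeting", "confirmed"): "instruction",
    ("instruction", "confirmed"): "exercise",
    ("exercise", "done"): "feedback",
    ("feedback", "confirmed"): "instruction",
}

# Content each state pushes to the devices.
CONTENT = {
    "greeting": {"say": "Hello, let us start the session.", "image": "welcome.png"},
    "instruction": {"say": "Please mimic this movement.", "video": "exercise1.mp4"},
    "exercise": {"say": "Your turn!", "image": None},
    "feedback": {"say": "Well done, only one repetition left.", "image": None},
}

class DialogueFSM:
    def __init__(self):
        self.state = "greeting"
        self.outbox = []          # messages that would go out via MQTT

    def broadcast(self):
        payload = json.dumps({"state": self.state, **CONTENT[self.state]})
        self.outbox.append(payload)

    def handle(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is not None:
            self.state = nxt
            self.broadcast()
        return self.state

fsm = DialogueFSM()
fsm.handle("confirmed")   # greeting -> instruction
fsm.handle("confirmed")   # instruction -> exercise
fsm.handle("done")        # exercise -> feedback
```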
      <p>From the patient’s point of view, at the beginning of the session, he/she will sit down and interact
with the nearby tablet/monitor to start the therapy.</p>
    </sec>
    <sec id="sec-7">
      <title>4. Research methodology</title>
      <p>We are currently conducting therapies with the aforementioned system and aim for at least 30 patients. Patients are invited to participate and are screened for a suitable therapy of ours. They are then required to come to the Universitätsmedizin Greifswald, Germany, for the therapy sessions.</p>
      <p>Each patient will have a 4-week therapy consisting, in random order, of a two-week human-therapist program and a two-week SAR program. The goal of the E-BRAiN project is to explore the success rate and ways to motivate patients during the therapies with the SAR. In the real application, a human therapist gives an extensive introduction before the first robot session. The medical experts perform the Fugl-Meyer assessment [16] with the patients to record each patient’s starting condition. This test measures the patient’s level of impairment, and the real success of the therapy will also be measured with it.</p>
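      <p>For intuition, the Fugl-Meyer upper-extremity motor score aggregates item ratings of 0 (cannot perform), 1 (partial) and 2 (full performance) up to a maximum of 66 points. The sketch below uses placeholder item names, not the full assessment.</p>

```python
# Sketch of FMA-UE motor-score aggregation: each item is rated 0, 1 or 2,
# and the motor section sums to at most 66 points. Item names here are
# placeholders for illustration only.

def fma_ue_score(item_ratings):
    """Sum the 0-2 item ratings; reject invalid ratings."""
    for item, rating in item_ratings.items():
        if rating not in (0, 1, 2):
            raise ValueError(f"invalid rating for {item}: {rating}")
    return sum(item_ratings.values())

# Comparing pre- and post-therapy totals measures the success of the therapy.
baseline = fma_ue_score({"flexor_synergy": 1, "wrist_stability": 0, "hand_grasp": 2})
```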
      <p>For the actual technical evaluation, we use a questionnaire based on the ALMERE model [17] to assess the acceptance of the robot system. The ALMERE model builds upon the foundations of the technology acceptance model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) model. It has been developed precisely for applications like ours, i.e. social robot acceptance by older adults, and thus appears to be the most promising approach. Within the questionnaire, we will add open questions to broaden our understanding of how the system is perceived by the patients.</p>
      <p>In a second step, we will additionally conduct usability interviews with the patients, using frameworks like [18], to gather more precise feedback on certain elements of the session. For example, we would like to know more about the speech speed of the robot, the clarity of the given instructions, and the reaction time of the system.</p>
    </sec>
    <sec id="sec-8">
      <title>5. Current status</title>
    </sec>
    <sec id="sec-9">
      <title>5.1. Timeline</title>
      <p>Month/Year: 09/19 – 09/20; 09/20 – 09/21; 09/21 – 12/22.</p>
      <p>The robot system with all four therapies is complete, and the therapy study aiming for at least 30 patients has been running since 09/2021. Currently, the last preparations are being made to finalize the technical questionnaires and qualitative interviews.</p>
    </sec>
    <sec id="sec-10">
      <title>5.2. Results</title>
      <p>So far, three different papers have been published as first author. One work [15], from the early development of the therapy system, describes the software approach for the system. More precisely, it explains in detail how the dialog and the contents of the therapy are put together and how the system components work together.</p>
      <p>Another work [19] focused on a theoretical approach showing how a simple rule system could be used to provide motivational feedback during the ABT therapy.</p>
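      <p>The flavour of such a rule system can be sketched as follows. The thresholds, conditions, and phrasings below are invented for illustration and are not the published rule set of [19].</p>

```python
# First-match rule system for motivational feedback: each rule pairs a
# condition on the observed performance with a feedback phrase. All values
# here are illustrative assumptions.

RULES = [
    # (condition on the observed performance, feedback text)
    (lambda p: p["errors"] == 0 and p["speed"] >= 1.0,
     "Excellent, keep up this pace!"),
    (lambda p: p["errors"] == 0,
     "Very good, all movements were correct."),
    (lambda p: p["errors"] >= 3,
     "Let us slow down and try the movement again together."),
    (lambda p: True,
     "Good effort, you are improving."),
]

def feedback(performance):
    """Return the feedback text of the first matching rule."""
    for condition, text in RULES:
        if condition(performance):
            return text

msg = feedback({"errors": 1, "speed": 0.8})
```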
      <p>Another work [20] investigated possible user models of a patient and a (human) therapist. For this, we tried the lesser-known “repertory grid” technique with the goal of exploring implicit knowledge about patients and therapists. We asked the therapists of the E-BRAiN project about their past patients and about their impressions of other therapists. We gathered around 170 attributes for patients and 45 for therapists; the patient and therapist attributes are available online [21]. These attributes, and the co-development of therapy applications with medical experts for similar therapies inside the E-BRAiN project, may help to design the robot better. We currently make some use of them, but could greatly increase our personalization of the therapy.</p>
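      <p>A repertory grid can be sketched as a small rating matrix of elements against bipolar constructs. The constructs and ratings below are invented toy data, far smaller than the roughly 170 patient attributes gathered in [20, 21].</p>

```python
# Repertory-grid sketch: elements (here, anonymized patients P1-P3) are
# rated on bipolar constructs on a 1-5 scale. All data is illustrative.

# rows: constructs (left pole vs right pole), columns: elements
GRID = {
    "motivated - resigned":     {"P1": 1, "P2": 4, "P3": 2},
    "patient - impatient":      {"P1": 2, "P2": 5, "P3": 2},
    "independent - needs help": {"P1": 1, "P2": 3, "P3": 5},
}

def element_distance(grid, a, b):
    """City-block distance between two elements across all constructs."""
    return sum(abs(row[a] - row[b]) for row in grid.values())

# Which two patients are construed most similarly?
pairs = [("P1", "P2"), ("P1", "P3"), ("P2", "P3")]
closest = min(pairs, key=lambda p: element_distance(GRID, *p))
```

Such distances over the grid could feed the personalization mentioned above, e.g. by transferring feedback strategies between similarly construed patients.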
    </sec>
    <sec id="sec-11">
      <title>6. Expected contributions and conclusion</title>
      <p>This paper discussed the idea of creating a system for performance-oriented stroke therapies with the help of a social humanoid robot, and identified several research questions. We hope to show the following contributions to the HRI area: (1) the acceptance of a SAR in this style of therapy, (2) the lessons learned and the improvements to include in such a system, and (3) evidence that a social robot can be a helpful instructor, compared to a human therapist, during the recovery of a stroke survivor.</p>
      <p>Additionally, we have made the first steps towards a smarter robot therapy. For the next iteration, we plan to include an enhanced version of the user models and to test our digitalized therapy exercises [22] with real patients.</p>
      <p>
        We look forward to the DC of the EICS conference, where we hope to learn from the experience of others in engineering and modelling such medical therapy systems. Even though our patient study has already started, we may still integrate points to consider, such as aspects of the evaluation. Furthermore, we have other projects as well, in particular enhancing the text-based domain-specific language
“DSL-CoTaL” [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] of our group to better work on applications for social robots.
      </p>
    </sec>
    <sec id="sec-12">
      <title>7. Acknowledgements</title>
      <p>This joint research project “E-BRAiN - Evidence-based Robot Assistance in Neurorehabilitation” is
supported by the European Social Fund (ESF), reference: ESF/14-BM-A55-0001/19-A01, and the
Ministry of Education, Science and Culture of Mecklenburg-Vorpommern, Germany. The sponsors had
no role in the decision to publish or any content of the publication.</p>
    </sec>
    <sec id="sec-13">
      <title>8. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Peter</given-names>
            <surname>Forbrig</surname>
          </string-name>
          , Alexandru Bundea, Ann Pedersen, and
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Platz</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Digitalization of Training Tasks and Specification of the Behaviour of a Social Humanoid Robot as Coach</article-title>
          .
          <source>In Human-Centered Software Engineering</source>
          . Springer International Publishing, Cham,
          <fpage>45</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Ronit</given-names>
            <surname>Feingold-Polak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Oren</given-names>
            <surname>Barzel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Shelly</given-names>
            <surname>Levy-Tzedek</surname>
          </string-name>
          .
          <year>2021</year>
          .
          <article-title>A robot goes to rehab: a novel gamified system for long-term stroke rehabilitation using a socially assistive robot: methodology and usability testing</article-title>
          .
          <source>Journal of neuroengineering and rehabilitation 18</source>
          ,
          <issue>1</issue>
          ,
          <fpage>122</fpage>
          . DOI: https://doi.org/10.1186/s12984-021-00915-2.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><given-names>José C.</given-names> <surname>González</surname></string-name>
          ,
          <string-name><given-names>José C.</given-names> <surname>Pulido</surname></string-name>
          , and
          <string-name><given-names>Fernando</given-names> <surname>Fernández</surname></string-name>
          .
          <year>2017</year>
          .
          <article-title>A three-layer planning architecture for the autonomous control of rehabilitation therapies based on social robots</article-title>
          .
          <source>Cognitive Systems Research</source>
          <volume>43</volume>
          ,
          <fpage>232</fpage>
          -
          <lpage>249</lpage>
          . DOI: https://doi.org/10.1016/j.cogsys.2016.09.003.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Binnur</given-names>
            <surname>Görer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Albert A.</given-names>
            <surname>Salah</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Akın</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>An autonomous robotic exercise tutor for elderly people</article-title>
          .
          <source>Auton Robot</source>
          <volume>41</volume>
          ,
          <issue>3</issue>
          ,
          <fpage>657</fpage>
          -
          <lpage>678</lpage>
          . DOI: https://doi.org/10.1007/s10514-016-9598-5.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><given-names>Mingyang</given-names> <surname>Shao</surname></string-name>
          ,
          <string-name><given-names>Silas F. D. R.</given-names> <surname>Alves</surname></string-name>
          ,
          <string-name><given-names>Omar</given-names> <surname>Ismail</surname></string-name>
          ,
          <string-name><given-names>Xinyi</given-names> <surname>Zhang</surname></string-name>
          ,
          <string-name><given-names>Goldie</given-names> <surname>Nejat</surname></string-name>
          , and
          <string-name><given-names>Beno</given-names> <surname>Benhabib</surname></string-name>
          .
          <year>2019</year>
          .
          <article-title>You Are Doing Great! Only One Rep Left: An Affect-Aware Social Robot for Exercising</article-title>
          .
          <source>In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, October 6-9, 2019</source>
          . IEEE, Piscataway, New Jersey,
          <fpage>3811</fpage>
          -
          <lpage>3817</lpage>
          . DOI: https://doi.org/10.1109/SMC.2019.8914198.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><given-names>Jan</given-names> <surname>van den Bergh</surname></string-name>
          ,
          <string-name><given-names>Fredy</given-names> <surname>Cuenca Lucero</surname></string-name>
          ,
          <string-name><given-names>Kris</given-names> <surname>Luyten</surname></string-name>
          , and
          <string-name><given-names>Karin</given-names> <surname>Coninx</surname></string-name>
          .
          <year>2016</year>
          .
          <article-title>Toward specifying Human-Robot Collaboration with composite events</article-title>
          .
          <source>In The 25th IEEE International Symposium on Robot and Human Interactive Communication, August 26-31, 2016, Teachers College, Columbia University, New York, U.S.A.</source>
          IEEE, Piscataway, NJ,
          <fpage>896</fpage>
          -
          <lpage>901</lpage>
          . DOI: https://doi.org/10.1109/ROMAN.2016.7745225.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name><given-names>Jan</given-names> <surname>van den Bergh</surname></string-name>
          and
          <string-name><given-names>Kris</given-names> <surname>Luyten</surname></string-name>
          .
          <year>2017</year>
          .
          <article-title>DICE-R</article-title>
          .
          <source>In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems</source>
          . ACM, New York, NY, USA,
          <fpage>117</fpage>
          -
          <lpage>122</lpage>
          . DOI: https://doi.org/10.1145/3102113.3102147.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name><given-names>Craig J.</given-names> <surname>Sutherland</surname></string-name>
          and
          <string-name><given-names>Bruce</given-names> <surname>MacDonald</surname></string-name>
          .
          <year>2019</year>
          .
          <article-title>RoboLang: A Simple Domain Specific Language to Script Robot Interactions</article-title>
          .
          <source>In 2019 16th International Conference on Ubiquitous Robots (UR)</source>
          . IEEE,
          <fpage>265</fpage>
          -
          <lpage>270</lpage>
          . DOI: https://doi.org/10.1109/URAI.2019.8768625.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Peter</given-names>
            <surname>Forbrig</surname>
          </string-name>
          and
          <string-name>
            <given-names>Alexandru-Nicolae</given-names>
            <surname>Bundea</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Modelling the Collaboration of a Patient and an Assisting Humanoid Robot During Training Tasks</article-title>
          .
          <source>In Human-Computer Interaction. Multimodal and Natural Interaction</source>
          . Springer International Publishing, Cham,
          <fpage>592</fpage>
          -
          <lpage>602</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name><given-names>Barbara</given-names> <surname>Singer</surname></string-name>
          and
          <string-name><given-names>Jimena</given-names> <surname>Garcia-Vega</surname></string-name>
          .
          <year>2017</year>
          .
          <article-title>The Fugl-Meyer Upper Extremity Scale</article-title>
          .
          <source>Journal of Physiotherapy 63</source>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>