<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Hybrid Brain-Robot Interface for telepresence</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gloria Beraldo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Tortora</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Benini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emanuele Menegatti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Intelligent Autonomous System Lab, Department of Information Engineering, University of Padova</institution>
          ,
          <addr-line>Padua</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Brain-Computer Interface (BCI) technology allows users to employ brain signals as an alternative channel to control external devices. In this work, we introduce a Hybrid Brain-Robot Interface to mentally drive mobile robots. The proposed system sets the robot's direction of motion by combining two brain stimulation paradigms: motor imagery and visual event-related potentials. The first enables the user to send turn-left or turn-right commands that rotate the robot by a given angle, while the second enables the user to easily select high-level goals for the robot in the environment. Finally, the system is integrated with a shared-autonomy approach in order to improve the interaction between the user and the intelligent robot, achieving reliable and robust navigation.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-centered systems</kwd>
        <kwd>Human-robot interaction</kwd>
        <kwd>Machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In the last few decades, the scientific, medical and industrial research communities
have shown a growing interest in the field of health care, assistance and
rehabilitation. The result has been a proliferation of Assistive Technologies (ATs) and
healthcare solutions designed to help people suffering from motor, sensory and/or
cognitive impairments due to degenerative diseases or traumatic episodes [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
In this context, researchers highlight the importance of designing innovative and
advanced Human-Robot Interfaces that decode the user's intentions
directly from (neuro)physiological signals and translate them into
intelligent actions performed by external robotic devices such as wheelchairs,
telepresence robots, exoskeletons and robotic arms [
        <xref ref-type="bibr" rid="ref2">2</xref>
        -
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In this regard,
Brain-Computer Interface (BCI) systems enable users to send commands to robots
using their brain activity (e.g. based on Electroencephalography (EEG) signals)
by recognizing specific task-related neural patterns [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. For instance, the user is
required to imagine movements of the hands or the feet in order to turn
the robot to the left or to the right, respectively [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], or to open/close the
hand of an exoskeleton [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Although in recent years BCI studies have shown promising results in different
applications, such as device control, target selection, and spellers [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], the BCI
system still remains an uncertain communication channel through which the
users are able to deliver only a few commands, and with a low information transfer
rate. Moreover, the workload imposed on both healthy and disabled users is still
high [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], slowing down the expansion of these neurorobotic systems and their
translational impact [
        <xref ref-type="bibr" rid="ref9 ref10">9, 10</xref>
        ]. In the assistive robotics literature, the principle of
shared autonomy demonstrates how it is possible to partially overcome this
limitation by adding a low-level intelligence to the robot, thanks to which it is
able to contextualize the few high-level commands received from the user according
to the environment and the data from its sensors [
        <xref ref-type="bibr" rid="ref4 ref11 ref12">4, 11, 12</xref>
        ]. The agent maintains
some degree of autonomy, avoiding obstacles and determining the best trajectory
to follow when the user cannot deliver new commands. Once a new command is
sent by the user, the robot simply adjusts its current behaviour.
      </p>
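      <p>As a hedged illustration of this principle (not the specific algorithm of [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]), the robot's low-level intelligence can blend the last high-level user command with a repulsive obstacle term. All names and gains in the following Python sketch are invented for the example:</p>
      <preformat>
import numpy as np

def blend_heading(user_heading, obstacles, k_rep=0.5):
    """Attract toward the user's last commanded heading, repel from obstacles.

    user_heading: last high-level command, in radians.
    obstacles: list of (direction, distance) pairs from the robot's sensors.
    """
    vec = np.array([np.cos(user_heading), np.sin(user_heading)])
    for direction, dist in obstacles:
        # closer obstacles push harder; the gain k_rep is an arbitrary choice
        vec -= (k_rep / max(dist, 0.1)) * np.array([np.cos(direction), np.sin(direction)])
    return np.arctan2(vec[1], vec[0])
      </preformat>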
      <p>In this work, we propose to combine a shared-autonomy approach with an
additional intelligent layer between the user and the robot, inferring the user's
intention by mixing two kinds of brain stimulation paradigms. In detail, we present
a preliminary Hybrid Brain-Robot Interface for robot navigation, in which the
user pilots the robot through 2-class Motor Imagery (MI), but can simultaneously
be stimulated through visual event-related potentials (P300) to
reach predefined targets. Finally, the output of the system represents the
predicted direction along which the robot has to move.</p>
    </sec>
    <sec id="sec-2">
      <title>Materials and methods</title>
      <p>
        In this study we consider two kinds of standard BCI paradigms: motor imagery
(MI) and visual event-related potentials (P300). The first is designed to make the
robot turn left/right through the imagination of motor movements (e.g. hands vs.
feet). The other makes the robot move in the direction of a specific target, chosen
according to where the user focuses his/her attention while being stimulated
through visual flashes. When the user does not send any command, the robot
keeps moving along its current direction. Behind the system, after the processing
of the EEG signals, two classifiers infer the presence of a motor imagery (MI) or
a visual event-related potential (P300) command based on the computed raw
probabilities and the evidence accumulation. For more mathematical details on
the specific BCI classification methods please refer to [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] respectively.
Then, to predict the direction of the robot chosen by the user, we combine the latest
outcome of the BCI classifiers with the history of the previous decisions. The
aim is to smooth the uncertainties of the single BCI command by integrating the
information about the user's intention over time, therefore reducing the side effects of
involuntary commands delivered by the user. An overview of the entire pipeline is
shown in Fig. 1. In this regard, we design a mathematical model to estimate
the new direction based on a weighted sum of Gaussian distributions:
the first, <inline-formula><tex-math>\mathcal{N}(x_t, \sigma_t)</tex-math></inline-formula>, is the distribution over the newly predicted command and the
second, <inline-formula><tex-math>\mathcal{N}(x_{t-1}, \sigma_{t-1})</tex-math></inline-formula>, is the distribution over the previously predicted command.
In detail, we compute at each time t the following distribution:
        <disp-formula id="eq1">
          <label>(1)</label>
          <tex-math>D_{t \mid t_0:t-1} = \alpha_0 \, \mathcal{N}(x_t, \sigma_t) + \alpha_1 \, \mathcal{N}(x_{t-1}, \sigma_{t-1}) + \alpha_2 \, D_{t-2 \mid t_0:t-3}</tex-math>
        </disp-formula>
where <inline-formula><tex-math>\alpha_0, \alpha_1, \alpha_2 \in \mathbb{R}</tex-math></inline-formula> such that <inline-formula><tex-math>\alpha_0 + \alpha_1 + \alpha_2 = 1</tex-math></inline-formula>. The term <inline-formula><tex-math>D_{t-2 \mid t_0:t-3}</tex-math></inline-formula> is
a memory term that keeps track of the past decisions. The shapes of the normal
distributions are related to the outcomes of the two classifiers. Further details are
presented in the following paragraphs, showing how the probability distribution
<inline-formula><tex-math>D_{t \mid t_0:t-1}</tex-math></inline-formula>, and therefore the kind of movements performed by the robot, changes
according to three possible situations: a) no new command delivered by the user,
b) the prediction of a new motor imagery command and c) the prediction of a
new P300 command. Finally, the predicted direction is given as input
to our shared-autonomy algorithm [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], which manages the movements of the robot,
ensuring obstacle avoidance and a reliable navigation.
      </p>
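      <p>As a minimal sketch of how Eq. (1) can be realized in practice, the direction space can be discretized into angular bins and the three terms summed bin-wise. The following Python snippet is illustrative only: the discretization, the weights and all names (fuse_direction, N_BINS, ...) are our assumptions, not the actual implementation.</p>
      <preformat>
import numpy as np

N_BINS = 360  # hypothetical discretization of the direction space (1 degree per bin)
ANGLES = np.deg2rad(np.arange(N_BINS))

def gaussian_over_bins(mean, sigma):
    """Discretized Gaussian N(mean, sigma) over the circular direction space."""
    # smallest signed angular difference between each bin and the mean
    diff = np.angle(np.exp(1j * (ANGLES - mean)))
    pdf = np.exp(-0.5 * (diff / sigma) ** 2)
    return pdf / pdf.sum()

def fuse_direction(x_t, sigma_t, x_prev, sigma_prev, memory, alphas=(0.5, 0.3, 0.2)):
    """Eq. (1): weighted sum of the new command, the previous command and a memory term."""
    a0, a1, a2 = alphas  # must sum to 1
    d = (a0 * gaussian_over_bins(x_t, sigma_t)
         + a1 * gaussian_over_bins(x_prev, sigma_prev)
         + a2 * memory)
    return d / d.sum()

# The robot's new direction is then the mode of the fused distribution:
# direction = ANGLES[np.argmax(d)]
      </preformat>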
      <sec id="sec-2-1">
        <title>No new command from the user</title>
        <p>When the user does not send commands for a speci c amount of time, we
assume the robot is moving in the direction desired by the user. In this case, the
probability distribution of Eq 1 becomes the following ( 0 = 0), thanks to which
the robot keeps its current orientation :
When a new motor imagery command is predicted by the BCI classi er, the
normal distribution N (xt; t) is centered in the current position of the robot
and the corresponding t is set according to the rotation angle the robot has to
turn in the left/right direction (e.g. 45 ). An illustrative example is shown in
Fig. 3.
When a new P300 command is predicted by the BCI classi er, the normal
distribution</p>
        <p>N (xt; t)
is characterized by a mean equal to the direction of the target chosen by the
user. An illustrative example is shown in Fig. 4.
2.4</p>
      </sec>
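      <p>The three situations above differ only in how the new-command term of Eq. (1) is parameterized. The sketch below, which reuses the helpers of the previous listing, encodes one possible reading of these rules; the event names, weights and sigma values are illustrative assumptions:</p>
      <preformat>
import numpy as np

def new_command_term(event, heading, target_dir=None):
    """Return (alpha_0, mean, sigma) of N(x_t, sigma_t) for the three cases.

    event: None, 'mi_left', 'mi_right' or 'p300'; all angles in radians.
    """
    if event is None:
        # a) no new command: Eq. (1) with alpha_0 = 0, so N(x_t, sigma_t) drops out
        return 0.0, heading, None
    if event in ('mi_left', 'mi_right'):
        # b) motor imagery: the distribution is tied to the current robot heading,
        # with sigma_t set by the commanded rotation angle (e.g. 45 degrees);
        # shifting the mean by the signed rotation is our reading of how the
        # left/right choice enters the model
        step = np.deg2rad(45.0)
        mean = heading + (step if event == 'mi_left' else -step)
        return 0.5, mean, np.deg2rad(45.0)
    if event == 'p300':
        # c) P300: the mean is the direction of the target chosen by the user
        return 0.5, target_dir, np.deg2rad(10.0)
    raise ValueError('unknown event: %s' % event)
      </preformat>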
      <sec id="sec-2-2">
        <title>Implementation</title>
        <p>
          We implement our system exploiting the standards and tools provided by
the Robot Operating System (ROS), in view of establishing a standardized research
platform for neurorobotics applications [
          <xref ref-type="bibr" rid="ref9 ref10">9, 10</xref>
          ]. It includes three main nodes:
the hybrid bci node, the robot controller node, and the interface node, through
which the user is stimulated and receives feedback from the system.
        </p>
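        <p>A minimal rospy skeleton of the hybrid bci node is sketched below. Topic names and message types (/hybrid_bci/direction, std_msgs/Float32) are hypothetical placeholders introduced for the example:</p>
        <preformat>
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32, String

class HybridBciNode:
    """Illustrative skeleton: fuses MI/P300 events and publishes a direction."""

    def __init__(self):
        rospy.init_node('hybrid_bci')
        # the robot controller node would subscribe to this topic
        self.pub = rospy.Publisher('/hybrid_bci/direction', Float32, queue_size=1)
        # hypothetical event topics fed by the MI and P300 classifiers
        rospy.Subscriber('/bci/mi_event', String, self.on_event)
        rospy.Subscriber('/bci/p300_event', String, self.on_event)
        self.direction = 0.0

    def on_event(self, msg):
        # here the probabilistic model of Eq. (1) would update self.direction
        rospy.loginfo('BCI event received: %s', msg.data)

    def spin(self, rate_hz=10):
        rate = rospy.Rate(rate_hz)
        while not rospy.is_shutdown():
            self.pub.publish(Float32(self.direction))
            rate.sleep()

if __name__ == '__main__':
    HybridBciNode().spin()
        </preformat>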
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Conclusions</title>
      <p>
        In this work, we introduced a preliminary Hybrid Brain-Robot Interface for robot
navigation that enables the user to interact with the robot through two brain
stimulation paradigms. The coupling between a probabilistic model to infer the
user's intention and the robot's intelligence might guarantee that the robot
performs the appropriate movements in the environment. The main limitation of this
study is the lack of numerical results demonstrating the benefits of fusing
the two proposed brain stimulation paradigms, motor imagery and visual
event-related potentials, to mentally drive telepresence robots. In this regard,
future directions include testing and validating our system on a physical robot. In
addition, we will properly study the resulting brain signals when the user is
simultaneously engaged in dual mental tasks.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>R.E.</given-names>
            <surname>Cowan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.J.</given-names>
            <surname>Fregly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Boninger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Rodgers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Reinkensmeyer</surname>
          </string-name>
          ,
          <article-title>Recent trends in assistive technology for mobility</article-title>
          ,
          <source>Journal of neuroengineering and rehabilitation</source>
          ,
          <volume>9</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Arbib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Metta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>van der Smagt</surname>
          </string-name>
          ,
          <article-title>Neurorobotics: from vision to action</article-title>
          ,
          <source>Springer handbook of robotics</source>
          ,
          <fpage>1453</fpage>
          -
          <lpage>1480</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>R.</given-names>
            <surname>Leeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rohm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Desideri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Carlson</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. d. R.</given-names>
            <surname>Millan</surname>
          </string-name>
          ,
          <article-title>Towards independence: a BCI telepresence robot for people with severe motor disabilities</article-title>
          ,
          <source>Proceedings of the IEEE</source>
          ,
          <volume>103</volume>
          ,
          <fpage>969</fpage>
          -
          <lpage>982</lpage>
          ,
          <year>2015</year>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>G.</given-names>
            <surname>Beraldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Antonello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cimolato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <article-title>Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots</article-title>
          ,
          <source>2018 IEEE International Conference on Robotics and Automation (ICRA)</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          ,
          <year>2018</year>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>U.</given-names>
            <surname>Chaudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Birbaumer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramos-Murguialday</surname>
          </string-name>
          ,
          <article-title>Brain-computer interfaces for communication and rehabilitation</article-title>
          ,
          <source>Nature Reviews Neurology</source>
          ,
          <volume>12</volume>
          ,
          <fpage>513</fpage>
          ,
          <year>2016</year>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>S.</given-names>
            <surname>Crea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Trigili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cordella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Baldoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Badesa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Catalan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zollo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vitiello</surname>
          </string-name>
          ,
          <string-name>
            <surname>Aracil</surname>
          </string-name>
          and others,
          <article-title>Feasibility and safety of shared EEG/EOG and vision-guided autonomous whole-arm exoskeleton control to perform activities of daily living</article-title>
          ,
          <source>Scientific Reports</source>
          ,
          <volume>8</volume>
          ,
          <fpage>10823</fpage>
          ,
          <year>2018</year>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>J. del R.</given-names>
            <surname>Millan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Rupp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Muller-Putz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Murray-Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Giugliemma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tangermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Vidaurre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cincotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kubler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Leeb</surname>
          </string-name>
          and others,
          <article-title>Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges</article-title>
          ,
          <source>Frontiers in Neuroscience</source>
          ,
          <volume>4</volume>
          ,
          <fpage>161</fpage>
          ,
          <year>2010</year>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>S.</given-names>
            <surname>Tortora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Beraldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          ,
          <article-title>Entropy-based Motion Intention Identification for Brain-Computer Interface</article-title>
          ,
          <source>2019 IEEE International Conference on Systems, Man and Cybernetics</source>
          ,
          <fpage>2791</fpage>
          -
          <lpage>2798</lpage>
          ,
          <year>2019</year>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>G.</given-names>
            <surname>Beraldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Castaman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bortoletto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Pagello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. del R.</given-names>
            <surname>Millan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          ,
          <article-title>ROS-Health: An open-source framework for neurorobotics</article-title>
          ,
          <source>2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR)</source>
          ,
          <fpage>174</fpage>
          -
          <lpage>179</lpage>
          ,
          <year>2018</year>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Beraldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tortora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tagliapietra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. del R.</given-names>
            <surname>Millan</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          ,
          <article-title>ROS-Neuro: A common middleware for BMI and robotics. The acquisition and recorder packages</article-title>
          ,
          <source>2019 IEEE International Conference on Systems, Man and Cybernetics</source>
          ,
          <fpage>2767</fpage>
          -
          <lpage>2772</lpage>
          ,
          <year>2019</year>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Leeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tavella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Perdikis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. del R.</given-names>
            <surname>Millan</surname>
          </string-name>
          ,
          <article-title>The role of shared-control in BCI-based telepresence</article-title>
          ,
          <source>2010 IEEE International Conference on Systems, Man and Cybernetics</source>
          ,
          <fpage>1462</fpage>
          -
          <lpage>1466</lpage>
          ,
          <year>2010</year>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>G.</given-names>
            <surname>Beraldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Termine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          ,
          <article-title>Shared-Autonomy Navigation for mobile robots driven by a door detection module</article-title>
          ,
          <source>International Conference of the Italian Association for Artificial Intelligence</source>
          ,
          <fpage>511</fpage>
          -
          <lpage>527</lpage>
          ,
          <year>2019</year>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>G.</given-names>
            <surname>Beraldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tortora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          ,
          <article-title>Towards a Brain-Robot Interface for children</article-title>
          ,
          <source>2019 IEEE International Conference on Systems, Man, and Cybernetics</source>
          ,
          <fpage>2799</fpage>
          -
          <lpage>2805</lpage>
          ,
          <year>2019</year>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>