    Hybrid Brain-Robot Interface for telepresence

Gloria Beraldo, Stefano Tortora, Michele Benini, and Emanuele Menegatti

    Intelligent Autonomous System Lab, Department of Information Engineering,
      University of Padova, Padua, Italy.
        {gloria.beraldo, stefano.tortora, emg}@dei.unipd.it



      Abstract. Brain-Computer Interface (BCI) technology allows brain signals
      to be used as an alternative channel to control external devices. In this
      work, we introduce a Hybrid Brain-Robot Interface to mentally drive
      mobile robots. The proposed system sets the direction of motion of the
      robot by combining two brain stimulation paradigms: motor imagery and
      visual event-related potentials. The first enables the user to send turn-left
      or turn-right commands that rotate the robot by a certain angle, while
      the second enables the user to easily select high-level goals for the robot
      in the environment. Finally, the system is integrated with a shared-
      autonomy approach in order to improve the interaction between the user
      and the intelligent robot, achieving a reliable and robust navigation.

      Keywords: Human-centered systems, Human-robot interaction, Machine
      learning

Copyright © 2020 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).

1    Introduction

In the last few decades, the scientific, medical and industrial research communities
have shown growing interest in the fields of healthcare, assistance and rehabili-
tation. The result has been a proliferation of Assistive Technologies (ATs) and
healthcare solutions designed to help people suffering from motor, sensory and/or
cognitive impairments due to degenerative diseases or traumatic episodes [1].
In this context, researchers highlight the importance of designing innovative and
advanced Human-Robot Interfaces that decode the user's intentions directly from
(neuro)physiological signals and translate them into intelligent actions performed
by external robotic devices such as wheelchairs, telepresence robots, exoskeletons
and robotic arms [2–4]. In this regard, Brain-Computer Interface (BCI) systems
enable users to send commands to robots using their brain activity (e.g. based on
Electroencephalography (EEG) signals) by recognizing specific task-related neural
patterns [5]. For instance, the user is required to imagine movements of the hands
or the feet in order to turn the robot to the left or to the right, respectively [4],
or to open/close the hand of an exoskeleton [6].
    Although in recent years BCI studies have shown promising results in different
applications, such as device control, target selection, and spellers [7], the BCI
system still remains an uncertain communication channel through which
users can deliver only a few commands and with a low information transfer
rate. Moreover, the workload required of both healthy and disabled users is still
high [8], slowing down the expansion of these neurorobotic systems and their
translational impact [9, 10]. In the assistive robotics literature, the principle of
shared autonomy demonstrates how it is possible to partially overcome this
limitation by adding a low-level intelligence on the robot, thanks to which it can
contextualize the few high-level commands received from the user according to
the environment and the data from its sensors [4, 11, 12]. The agent maintains
some degree of autonomy, avoiding obstacles and determining the best trajectory
to follow when the user cannot deliver new commands. Once a new command is
sent by the user, the robot simply adjusts its current behaviour.
    In this work, we propose to combine the shared-autonomy approach with an
additional intelligent layer between the user and the robot, which infers the user's
intention by mixing two kinds of brain stimulation paradigms. In detail, we present
a preliminary Hybrid Brain-Robot Interface for robot navigation, in which the
user pilots the robot through 2-class Motor Imagery (MI) while he/she can
simultaneously be stimulated through visual event-related potentials (P300) to
reach predefined targets. The output of the system is the predicted direction
along which the robot has to move.


2   Materials and methods

In this study, we consider two kinds of standard BCI paradigms: motor imagery
(MI) and visual event-related potentials (P300). The first is designed to make the
robot turn left/right through the imagination of motor movements (e.g. hands vs.
feet). The other makes the robot move in the direction of a specific target chosen
according to where the user focuses his/her attention while being stimulated
through visual flashes. When the user does not send any command, the robot
keeps moving along its current direction. Behind the system, after the processing
of the EEG signals, two classifiers infer the presence of a motor imagery (MI) or
a visual event-related potential (P300) command based on the computed raw
probabilities and the accumulated evidence. For more mathematical details on
the specific BCI classification methods, please refer to [4] and [13], respectively.
Then, to predict the direction of the robot chosen by the user, we combine the
latest outcome from the BCI classifiers with the history of the previous decisions.
The aim is to smooth the uncertainties of single BCI commands by integrating the
information about the user's intention over time, thereby reducing the side effects
of involuntary commands delivered by the user. An overview of the entire pipeline
is shown in Fig. 1. In this regard, we design a mathematical model to estimate
the new direction based on a weighted sum of Gaussian ($\mathcal{N}(\cdot)$) distributions:
the first, $\mathcal{N}(x_t, \sigma_t)$, is the distribution over the newly predicted command and
the second, $\mathcal{N}(x_{t-1}, \sigma_{t-1})$, is the distribution over the previously predicted command.
In detail, at each time $t$ we compute the following distribution:

        $$D_{t|t_0:t-1} = \mu_0 \cdot \mathcal{N}(x_t, \sigma_t) + \mu_1 \cdot \mathcal{N}(x_{t-1}, \sigma_{t-1}) + \mu_2 \cdot D_{t-2|t_0:t-3} \quad (1)$$
where $\mu_0, \mu_1, \mu_2 \in \mathbb{R}$ such that $\mu_0 + \mu_1 + \mu_2 = 1$. The term $D_{t-2|t_0:t-3}$ is a
memory term that retains the past decisions. The shapes of the normal distributions
are determined by the outcomes of the two classifiers. Further details are
presented in the following paragraphs, showing how the probability distribution
$D_{t|t_0:t-1}$, and therefore the kind of movement performed by the robot, changes
according to three possible situations: a) no new command delivered by the user,
b) the prediction of a new motor imagery command and c) the prediction of a
new P300 command. Finally, the predicted direction is given as input to our
shared-autonomy algorithm [4], which manages the movements of the robot
ensuring obstacle avoidance and a reliable navigation.
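To make the update concrete, the following is a minimal Python sketch of one step
of Eq. 1 over a discretised heading axis. The grid resolution, the helper names and
the weight values $\mu_0, \mu_1, \mu_2$ are illustrative assumptions and do not reflect the
exact implementation of our system.

    import numpy as np

    # Discretised heading axis in degrees (illustrative resolution).
    ANGLES = np.linspace(-180.0, 180.0, 361)

    def gaussian(mean, sigma):
        """Normal density N(mean, sigma) on the heading grid, normalised to sum to 1."""
        g = np.exp(-0.5 * ((ANGLES - mean) / sigma) ** 2)
        return g / g.sum()

    def update_direction(cmd_mean, cmd_sigma, prev_mean, prev_sigma, D_old, mu):
        """One step of Eq. 1.

        cmd_mean, cmd_sigma   -- parameters of N(x_t, sigma_t), the newly predicted command
        prev_mean, prev_sigma -- parameters of N(x_{t-1}, sigma_{t-1}), the previous command
        D_old                 -- D_{t-2|t_0:t-3}, the distribution from two steps back (memory term)
        mu                    -- (mu_0, mu_1, mu_2), weights summing to 1
        """
        mu0, mu1, mu2 = mu
        D_t = (mu0 * gaussian(cmd_mean, cmd_sigma)
               + mu1 * gaussian(prev_mean, prev_sigma)
               + mu2 * D_old)
        # The most probable heading is then handed to the shared-autonomy layer.
        predicted_heading = ANGLES[np.argmax(D_t)]
        return D_t, predicted_heading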




Fig. 1. A visual representation of the data processing pipeline. The subject is required
to perform motor imagery tasks (MI) to drive the robot to the left or to the right. At
the same time, he/she is stimulated through visual flashes (P300) to select targets, for
instance people, in the environment. The EEG signals are acquired and analysed both
in the frequency domain to identify MI features and in the time domain in the case of
P300. Underlying the system there are two classifiers whose outputs (probability
distributions) are combined by our proposed model. The model also considers a memory
term based on the previous distributions of the directions followed by the robot. Finally,
the telepresence robot performs the predicted command.

2.1   No new command from the user
When the user does not send commands for a specific amount of time, we assume
the robot is moving in the direction desired by the user. In this case, the
probability distribution of Eq. 1 becomes the following ($\mu_0 = 0$), thanks to which
the robot keeps its current orientation $\theta$:

        $$D_{t|t_0:t-1} = \mu_1 \cdot \mathcal{N}(x_{t-1}, \sigma_{t-1}) + \mu_2 \cdot D_{t-2|t_0:t-3} \quad (2)$$
An illustrative example is shown in Fig. 2.
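Using the update_direction sketch introduced in Section 2, the no-command case
simply sets $\mu_0 = 0$, so only the previous command and the memory term shape the
distribution. The weight values below are illustrative, and x_prev, sigma_prev and
D_old are assumed to hold the values from the previous iterations.

    # No new BCI command (Eq. 2): mu_0 = 0 and the robot keeps its current heading.
    D_t, theta = update_direction(cmd_mean=0.0, cmd_sigma=1.0,   # ignored (weight 0)
                                  prev_mean=x_prev, prev_sigma=sigma_prev,
                                  D_old=D_old, mu=(0.0, 0.7, 0.3))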

2.2   The prediction of a new motor imagery command
When a new motor imagery command is predicted by the BCI classifier, the
normal distribution $\mathcal{N}(x_t, \sigma_t)$ is centered on the current direction of the robot
and the corresponding $\sigma_t$ is set according to the rotation angle the robot has to
turn to the left/right (e.g. $\pm 45^{\circ}$). An illustrative example is shown in
Fig. 3.
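Following the description above, a possible call for the MI case is sketched below;
the 45-degree spread, the weights and the variable names are placeholder assumptions
(theta denotes the robot's current heading).

    # New MI command (Sec. 2.2): N(x_t, sigma_t) centred on the current heading,
    # with sigma_t tied to the commanded rotation (e.g. 45 degrees).
    D_t, theta = update_direction(cmd_mean=theta, cmd_sigma=45.0,
                                  prev_mean=x_prev, prev_sigma=sigma_prev,
                                  D_old=D_old, mu=(0.5, 0.3, 0.2))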

2.3   The prediction of a new P300 command
When a new P300 command is predicted by the BCI classifier, the normal
distribution $\mathcal{N}(x_t, \sigma_t)$ has a mean equal to the direction of the target chosen
by the user. An illustrative example is shown in Fig. 4.
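Analogously, a sketch of the P300 case sets the mean of $\mathcal{N}(x_t, \sigma_t)$ to the bearing
of the selected target; target_bearing, sigma_target and the weights are placeholder
values.

    # New P300 command (Sec. 2.3): the Gaussian mean is the bearing of the
    # selected target (e.g. a person at -90 degrees in the robot frame).
    D_t, theta = update_direction(cmd_mean=target_bearing, cmd_sigma=sigma_target,
                                  prev_mean=x_prev, prev_sigma=sigma_prev,
                                  D_old=D_old, mu=(0.5, 0.3, 0.2))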

2.4   Implementation
We implement our system exploiting the standards and the tools provided by the
Robot Operating System (ROS), in view of establishing a standardized research
platform for neurorobotics applications [9, 10]. The system includes three main
nodes: the hybrid bci node, the robot controller node and the interface node,
through which the user is stimulated and receives feedback from the system.
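As a rough illustration of how the robot controller node could be structured in
ROS, the following rospy sketch subscribes to a predicted-direction topic and
publishes velocity commands. The node name, topic names, message types and
gains are assumptions made for illustration and do not reflect the exact interfaces
of our implementation.

    #!/usr/bin/env python
    # Minimal sketch of a possible robot controller node (names and gains are illustrative).
    import rospy
    from std_msgs.msg import Float32
    from geometry_msgs.msg import Twist

    class RobotController:
        def __init__(self):
            rospy.init_node('robot_controller')
            self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
            # Predicted heading (degrees), assumed to be published by the hybrid BCI node.
            rospy.Subscriber('/hybrid_bci/predicted_direction', Float32, self.on_direction)
            self.target_heading = 0.0

        def on_direction(self, msg):
            self.target_heading = msg.data

        def spin(self):
            rate = rospy.Rate(10)
            while not rospy.is_shutdown():
                twist = Twist()
                twist.linear.x = 0.2                           # keep moving forward
                twist.angular.z = 0.01 * self.target_heading   # simple proportional turn
                self.cmd_pub.publish(twist)
                rate.sleep()

    if __name__ == '__main__':
        RobotController().spin()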




Fig. 2. The trend of the probability distribution (in green) when the user does not
deliver any command. The robot keeps its current direction (in grey).




Fig. 3. The trend of the probability distribution (in green) when the user delivers two
consecutive right motor imagery commands. The robot turns to the right (in grey).




Fig. 4. The trend of the probability distribution (in green) when the user delivers a
P300 command and three consecutive left motor imagery commands. The robot first
moves along the target direction and then turns left.




3    Conclusion


In this work, we introduce a preliminary Hybrid Brain-Robot Interface for robot
navigation that enables the user to interact with the robot based on two brain
stimulation paradigms. The coupling between a probabilistic model to infer the
user's intention and the robot's intelligence might guarantee that the robot per-
forms the appropriate movements in the environment. The main limitation of this
study is the lack of numerical results demonstrating the benefits of fusing the two
proposed brain stimulation paradigms, motor imagery and visual event-related
potentials, to mentally drive telepresence robots. In this regard, future directions
include testing and validating our system on a physical robot. In addition, we will
properly study the brain signals produced when the user is simultaneously
involved in dual mental tasks.

References
 1. R. E. Cowan, B. J. Fregly, M. L. Boninger, L. Chan, M. M. Rodgers, D. J. Reinkens-
    meyer, Recent trends in assistive technology for mobility, Journal of Neuroengi-
    neering and Rehabilitation, 9, 1–20, 2012.
 2. M. A. Arbib, G. Metta, P. van der Smagt, Neurorobotics: from vision to action,
    Springer Handbook of Robotics, 1453–1480, 2008.
 3. R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson and J. d. R. Millán, Towards
    independence: a BCI telepresence robot for people with severe motor disabilities,
    Proceedings of the IEEE, 103, 969–982, 2015
 4. G. Beraldo, M. Antonello, A. Cimolato, E. Menegatti, L. Tonin, Brain-Computer
    Interface meets ROS: A robotic approach to mentally drive telepresence robots,
    2018 IEEE International Conference on Robotics and Automation (ICRA), 1–6,
    2018
 5. U. Chaudhary, N. Birbaumer, A. Ramos-Murguialday, Brain–computer interfaces
    for communication and rehabilitation, Nature Reviews Neurology, 12, 513, 2016
 6. S. Crea, M. Nann, E. Trigili, F. Cordella, A. Baldoni, F. J. Badesa, J. M. Catalán,
    L. Zollo, N. Vitiello, Aracil, Feasibility and safety of shared EEG/EOG and
    vision-guided autonomous whole-arm exoskeleton control to perform activities of
    daily living, Scientific Reports, 8, 10823, 2018
 7. J. del R. Millán, R. Rupp, G. Müller-Putz, R. Murray-Smith, C. Giugliemma,
    M. Tangermann, C. Vidaurre, F. Cincotti, A. Kübler, R. Leeb and others, Com-
    bining brain–computer interfaces and assistive technologies: state-of-the-art and
    challenges, Frontiers in Neuroscience, 4, 161, 2010
 8. S. Tortora, G. Beraldo, L. Tonin, E. Menegatti, Entropy-based Motion Intention
    Identification for Brain-Computer Interface, 2019 IEEE International Conference
    on Systems, Man and Cybernetics, 2791–2798, 2019
 9. G. Beraldo, N. Castaman, R. Bortoletto, E. Pagello, J. del R. Millán, L. Tonin,
    E. Menegatti, ROS-Health: An open-source framework for neurorobotics, 2018
    IEEE International Conference on Simulation, Modeling, and Programming for
    Autonomous Robots (SIMPAR), 174–179, 2018
10. L. Tonin, G. Beraldo, S. Tortora, L. Tagliapietra, J. del R. Millán and E. Menegatti,
    ROS-Neuro: A common middleware for BMI and robotics. The acquisition and
    recorder packages, 2019 IEEE International Conference on Systems, Man and Cy-
    bernetics, 2767–2772, 2019
11. L. Tonin, R. Leeb, M. Tavella, S. Perdikis, J. del R. Millán, The role of shared-
    control in BCI-based telepresence, 2010 IEEE International Conference on Systems,
    Man and Cybernetics, 1462–1466, 2010
12. G. Beraldo, E. Termine, E. Menegatti, Shared-Autonomy Navigation for mobile
    robots driven by a door detection module, International Conference of the Italian
    Association for Artificial Intelligence, 511–527, 2019
13. G. Beraldo, S. Tortora, E. Menegatti, Towards a Brain-Robot Interface for children,
    2019 IEEE International Conference on Systems, Man, and Cybernetics, 2799–
    2805, 2019