<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Optimising the continuous control of brain-actuated robotic devices</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gloria Beraldo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Forin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Tonin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of cognitive sciences and technologies, National Research Council</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Intelligent Autonomous System Lab, Department of Information Engineering, University of Padova</institution>
          ,
          <addr-line>Padua</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Brain-machine interfaces (BMIs) are alternative communication channels that allow healthy and disabled people to control external devices through brain signals. In the last decades, the growing attention towards neurorobotics has led to the proliferation of several BMI-based systems for controlling different devices, including telepresence robots, powered wheelchairs, robotic arms, and upper/lower-limb exoskeletons. Despite the potential of these systems, the necessity has emerged to create new forms of interaction between the human and the robot in order to increase the granularity of the user's commands, which are, in turn, translated into specific robot actions. In this preliminary work, we present how artificial intelligence can be exploited to design and tune a model able to convert the user's intention into continuous robot movements.</p>
      </abstract>
      <kwd-group>
<kwd>Brain-machine interfaces</kwd>
        <kwd>Brain-actuated devices</kwd>
        <kwd>Human-robot interaction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Brain-Machine Interfaces (BMIs) provide an alternative interaction channel that does not depend
on the brain’s normal output pathways of peripheral nerves and muscles [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
]. The purpose of
BMIs is to augment the capabilities of disabled people suffering from severe motor impairments
by allowing them to communicate and/or interact with external devices according to their
brain activity [
        <xref ref-type="bibr" rid="ref3">3</xref>
]. In recent decades, several studies have shown the feasibility of controlling
different types of robots with BMIs, including wheelchairs, telepresence robots, exoskeletons,
and robotic arms [
        <xref ref-type="bibr" rid="ref10 ref4 ref5 ref6 ref7 ref8 ref9">4, 5, 6, 7, 8, 9, 10</xref>
]. In all these applications, BMIs try to detect specific patterns
in the brain signals resulting either from stimulation via external stimuli (i.e., exogenous BMIs) or
from the self-paced modulation of brain rhythms (i.e., endogenous BMIs); according to the
specific application, these patterns are then contextualised and converted into a control signal for the device.
Moreover, as represented in Fig. 1, BMIs are characterised by a closed loop, in which the classifier
models the mental activities of the user, while the feedback allows the user to learn the task
and adapt to the machine.
      </p>
      <p>
        A key component in BMI systems is the control strategy that determines how to convert the
output of the BMI decoder into signals for the external device [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] (Fig. 1).
      </p>
      <p>
There are two different approaches in the literature [
        <xref ref-type="bibr" rid="ref11">11</xref>
]. The first one is the discrete approach, which,
as the name suggests, allows the user to send discrete high-level commands to the device (e.g., a robot's
rotation, the selection of a destination, the picking of an object) that can be associated with the brain
response to specific events or result from the quantisation of the continuous output of the
decoder. For instance, traditionally, the continuous raw probabilities from the decoder are
integrated and compared with a control threshold. In other words, a command is delivered
when the system is confident enough about the user's intended command. This strategy improves
the control signal's stability and reduces its variability. For these reasons, discrete
control is the most widely applied for brain-actuated devices.
      </p>
      <p>
        The other approach is named continuous and is designed to increase the precision and
granularity in controlling external devices. However, such a paradigm is more dificult to
implement due to the non-stationary nature of the EEG and the uncertainty of the classifier
output. Indeed, it is less studied than the discrete case. Only a few approaches are proposed in
the literature based on: (a) mapping of the brain activity via linear/quadratic functions into a
continuous control signal for the robot as in [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13, 14, 15</xref>
]; (b) sophisticated systems designed
to make the BMI classifier more stable by taking into account the nature of the signals, as in
[<xref ref-type="bibr" rid="ref16 ref17 ref18">16, 17, 18</xref>].
      </p>
      <p>In particular, in this work we focus on the continuous approach based on the dynamical system proposed in [<xref ref-type="bibr" rid="ref18">18</xref>], which has already been demonstrated to increase the performance in controlling a telepresence robot via a motor imagery BMI and to strengthen the coupling between the user and the device compared with discrete control. However, such an approach relies on multiple parameters that can be difficult to tune, especially for a non-expert operator. The purpose of this preliminary paper is to investigate how AI can be exploited to detect a relation among these parameters, with the aim of simplifying the control framework and facilitating their tuning.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Technical Background</title>
      <p>For the sake of completeness, this section briefly introduces, from a technical point of view, the two state-of-the-art control strategies for BMIs based on two-class motor imagery examined in this paper.</p>
      <sec id="sec-2-1">
        <title>2.1. Discrete control</title>
        <p>In the discrete approach, the raw probabilities are integrated over time according to an exponential smoothing [<xref ref-type="bibr" rid="ref19">19</xref>, <xref ref-type="bibr" rid="ref18">18</xref>]. Considering $p_t$ the posterior probability in output from the classifier at time $t$, the final control signal $y_t$ is computed as:</p>
        <p>$$ y_t = \alpha \cdot p_t + (1 - \alpha) \cdot y_{t-1} \quad (1) $$</p>
        <p>where $\alpha \in [0.0, 1.0]$ is a smoothing factor that determines the weight of the posterior probability at time $t$ with respect to the previous state. To translate the control signal $y_t$ into discrete high-level commands, it is compared against a control threshold.</p>
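        <p>As a minimal illustration, the following Python sketch shows how Equation 1 turns raw posteriors into a smoothed signal and, eventually, a discrete command; the values of $\alpha$ and of the threshold here are illustrative assumptions, not those used in the experiments:</p>
        <preformat>
def discrete_control(posteriors, alpha=0.1, threshold=0.8):
    """Integrate raw posteriors (Eq. 1) and deliver a high-level command
    when the smoothed signal crosses the control threshold for a class.
    alpha and threshold are placeholder values for this sketch."""
    y = 0.5                                  # neutral initial state
    for p in posteriors:
        y = alpha * p + (1 - alpha) * y      # exponential smoothing (Eq. 1)
        if y &gt;= threshold:
            return "both_hands"              # upper-class command
        if y &lt;= 1 - threshold:
            return "both_feet"               # lower-class command
    return None                              # no command within the trial
        </preformat>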
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Continuous control</title>
        <p>The continuous control based on the dynamical system presented in [<xref ref-type="bibr" rid="ref18">18</xref>] relies on the linear combination of two forces:</p>
        <p>$$ \Delta y_t = \chi \cdot [\, \Phi \cdot F_{free}(y_{t-1}) + (1 - \Phi) \cdot F_{bmi}(p_t) \,] \quad (2) $$</p>
        <p>$F_{free}$ is associated with the previous state of the system, while $F_{bmi}$ is calculated on the current output of the decoder. The sum of the two components aims to reduce the oscillatory behaviour of the output of the classifier, to help the user deliver commands when intentional and, conversely, to filter out false positives. Indeed, $F_{free}$ is in charge of applying a conservative contribution when the state is around 0.5, and otherwise of pushing towards one of the two classes. $F_{free}$ is radially symmetrical with respect to 0.5 so as to handle the two classes in the same way. Formally, according to the design reported in [<xref ref-type="bibr" rid="ref18">18</xref>], $F_{free}$, represented in Fig. 2, is computed as follows:</p>
        <p>$$ F_{free}(y) = \begin{cases} -\sin\big(\frac{\pi}{0.5 - \delta} \cdot y\big) &amp; y \in [0, 0.5 - \delta) \\ -\psi \cdot \sin\big(\frac{\pi}{\delta} \cdot (y - 0.5)\big) &amp; y \in [0.5 - \delta, 0.5 + \delta] \\ \sin\big(\frac{\pi}{0.5 - \delta} \cdot (y - \delta - 0.5)\big) &amp; y \in (0.5 + \delta, 1] \end{cases} \quad (3) $$</p>
        <p>while $F_{bmi}$ is equal to:</p>
        <p>$$ F_{bmi}(p) = 6.4 \cdot (p - 0.5)^3 + 0.4 \cdot (p - 0.5) \quad (4) $$</p>
        <p>Please refer to [<xref ref-type="bibr" rid="ref18">18</xref>] for further details.</p>
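        <p>To make the interplay of the two forces concrete, the following Python sketch implements one integration step of the dynamical system (Equations 2-4), assuming the piecewise-sinusoidal form of $F_{free}$ reconstructed above; the default parameter values are placeholders, not those of [18]:</p>
        <preformat>
import math

def f_free(y, delta, psi):
    """Free force (Eq. 3): pulls back towards 0.5 inside the conservative
    zone [0.5 - delta, 0.5 + delta], pushes towards 0 or 1 outside it."""
    if y &lt; 0.5 - delta:
        return -math.sin(math.pi / (0.5 - delta) * y)
    if y &lt;= 0.5 + delta:
        return -psi * math.sin(math.pi / delta * (y - 0.5))
    return math.sin(math.pi / (0.5 - delta) * (y - delta - 0.5))

def f_bmi(p):
    """BMI force (Eq. 4), driven by the current posterior probability p."""
    return 6.4 * (p - 0.5) ** 3 + 0.4 * (p - 0.5)

def step(y_prev, p, chi=0.05, phi=0.5, delta=0.2, psi=0.3):
    """One update of the control signal (Eq. 2); the clipping to [0, 1]
    is an implementation detail of this sketch."""
    dy = chi * (phi * f_free(y_prev, delta, psi) + (1 - phi) * f_bmi(p))
    return min(1.0, max(0.0, y_prev + dy))
        </preformat>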
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Materials and methods</title>
      <p>In this preliminary work, we focus on studying the force $F_{free}$ in the continuous framework described in Section 2.2, with the purpose of investigating the relation among its parameters. As highlighted in Equation 3, the shape of such a force strongly depends on two main parameters (see the sketch after this list):
• $\delta$ defines the size of the conservative zone. The bigger $\delta$ is, the higher the “resistance” of the system to send a command.
• $\psi$ influences the transition from the conservative to the pushing behaviour and vice versa by handling the “amount of resistance/help” from the system. The higher $\psi$ is, the more difficult the change of state in the system.</p>
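      <p>A quick numerical check with the f_free sketch from Section 2.2 illustrates this “resistance” (the values below are arbitrary):</p>
      <preformat>
# Inside the conservative zone (y = 0.6 with delta = 0.2), a larger psi
# yields a stronger pull back towards 0.5, i.e., more "resistance":
print(f_free(0.6, delta=0.2, psi=0.1))  # -0.1 * sin(pi/0.2 * 0.1) = -0.1
print(f_free(0.6, delta=0.2, psi=0.9))  # -0.9: much stronger conservative force
      </preformat>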
      <p>Therefore, since both parameters adjust the conservative/pushing behaviours of the examined dynamical system, we hypothesise that there is a correlation between $\delta$ and $\psi$. In this preliminary phase, we validate our hypothesis by fixing the other parameters to the values reported in Table 1, which are set coherently with the previous experiments in [18, 20] to avoid introducing confounding factors.</p>
      <p>As regards $\delta$ and $\psi$, we have first applied, on a pre-collected dataset, a data-driven optimisation that searches for the best values of the two parameters by optimising a new cost function, introduced in Section 3.2, and assessing the resulting performance per subject and per combination. Then, we have applied a regression analysis to the best values achieved for each subject to find the relation between the two parameters.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <p>In this work, we have exploited a pre-collected dataset (140.9170 min in total) related to the two-class motor imagery task where the user was asked to imagine the movements of both hands vs. both feet and then received the feedback according to the predicted class (i.e., online). The data were previously collected using the motor imagery protocol available inside the ROS-Neuro framework [20] with a discrete control strategy. Such a dataset contains the data of eleven subjects (S1-S11), including in total 325 online trials for both hands and 325 online trials for both feet. Three subjects (S2, S6, S9) had no previous experience with BMI.</p>
        <p>[Table 1: the values of the examined parameters, where $\chi$, $\Phi$, $\delta$, $\psi$ are from Equation 2 and $th_{low}$, $th_{high}$ are related to Equation 5, used to optimise the parameters $\delta$ and $\psi$; they are set according to the control threshold adopted in the previous studies [20].]</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Cost function</title>
        <p>To optimise the values of $\delta$ and $\psi$, we have proposed a new metric, namely a cost function, that rewards/penalises the control signal $y$ by taking into account the following aspects:
• It is necessary to maximise the times in which the control signal is repeatedly above the control threshold. In this way, we want to avoid/limit oscillations over and under the threshold.
• To achieve a more stable control, we use a band of interest rather than a single control threshold. We want to force the control signal to pass the entire band without falling inside it. Thus, we penalise the control signal when it belongs to the band by attributing a score equal to zero.
• Given the nature of BMI, as demonstrated in previous studies, it is infeasible to deliver an intentional command within 1 sec. However, it is desirable that the user sends the intended commands as fast as possible. Considering such observations, we minimise the time required to deliver the commands by filtering out the impracticable values (&lt; 1 sec).
An illustrative representation of the criteria behind such a metric is shown in Fig. 3.</p>
        <p>The cost function assigns a score to each combination of $\delta$ and $\psi$ according to the following three constraints:</p>
        <p>1. For each time $t \in [0, T_{trial}]$, with $T_{trial}$ the duration of the entire trial, we compute the signed distance between the current control signal and the extremes of the band (see Fig. 3a):</p>
        <p>$$ d(y_t(\delta, \psi), th_{low}, th_{high}) = \begin{cases} 0 &amp; y_t(\delta, \psi) \in [th_{low}, th_{high}] \\ y_t(\delta, \psi) - th_{low} &amp; y_t(\delta, \psi) \in [0, th_{low}) \\ y_t(\delta, \psi) - th_{high} &amp; y_t(\delta, \psi) \in (th_{high}, 1] \end{cases} \quad (5) $$</p>
        <p>The values of $th_{low}$ and $th_{high}$ are reported in Table 1.</p>
        <p>2. The temporal constraint related to the filtering of the unfeasible commands. With this purpose, we introduce the function $g$ that filters out the commands due to the control signal overcoming the thresholds before 1 sec (see Fig. 3b):</p>
        <p>$$ g(y, t, th_{low}, th_{high}) = \begin{cases} 0 &amp; \exists\, \bar{t} \in [0, 1] : \big(y_{\bar{t}}(\delta, \psi) \le th_{low} \lor y_{\bar{t}}(\delta, \psi) \ge th_{high}\big) \\ 1 &amp; \text{otherwise} \end{cases} \quad (6) $$</p>
        <p>3. The optimisation of the time for which the control signal is outside the band of interest. Such a condition aims to make the user deliver commands quickly. To manage this aspect, we design the function $T$, which is applied to the couples of candidates $[\bar{\delta}, \bar{\psi}]$ found according to the previous criteria:</p>
        <p>$$ T(y, t, th_{low}, th_{high}) = \begin{cases} T_{trial} &amp; th_{low} \le y_t(\delta, \psi) \le th_{high} \quad \forall t \in [0, T_{trial}] \\ t^* &amp; \exists\, t^* : \big(y_{t^*}(\delta, \psi) \le th_{low} \lor y_{t^*}(\delta, \psi) \ge th_{high}\big) \land t^* &gt; 1 \end{cases} \quad (7) $$</p>
        <p>Thus, the final cost function is achieved by the combination of Equation 8 and Equation 9. First (i.e., from the first two constraints), we select a set of candidates $[\bar{\delta}, \bar{\psi}]$ according to:</p>
        <p>$$ [\bar{\delta}, \bar{\psi}] = \arg\max_{\delta, \psi} \sum_{i=1}^{N} \Big( \frac{1}{t^*_i} \sum_{t=0}^{t^*_i} d(y^i_t(\delta, \psi), th_{low}, th_{high}) \Big) \cdot g(y^i, t, th_{low}, th_{high}) \quad (8) $$</p>
        <p>with $N$ the number of trials. Then, we choose the best $\delta^*$, $\psi^*$ among the resulting candidates using the following formula:</p>
        <p>$$ \delta^*, \psi^* = \arg\min_{(\delta, \psi) \in [\bar{\delta}, \bar{\psi}]} \sum_{i=1}^{N} \sum_{t=0}^{T_{trial}} T(y^i_t(\delta, \psi), th_{low}, th_{high}) \quad (9) $$</p>
        <p>Furthermore, since the dataset includes online runs previously recorded with the discrete protocol, we use the final output of each trial (hit when the predicted class corresponds to the requested task, miss otherwise) as ground truth for these pseudo-analyses.</p>
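        <p>The following Python sketch summarises how the three constraints can be computed on a trial; the helper names, the band values, and the data layout are assumptions of this sketch, while the 1 s feasibility window follows the text:</p>
        <preformat>
import numpy as np

def distance(y, th_low, th_high):
    """Signed distance from the band of interest (Eq. 5), vectorised."""
    d = np.zeros_like(y)
    d[y &lt; th_low] = y[y &lt; th_low] - th_low
    d[y &gt; th_high] = y[y &gt; th_high] - th_high
    return d

def feasible(y, t, th_low, th_high):
    """Eq. 6: 0 if the signal leaves the band before 1 s, 1 otherwise."""
    early = y[t &lt;= 1.0]
    return 0.0 if np.any((early &lt;= th_low) | (early &gt;= th_high)) else 1.0

def crossing_time(y, t, th_low, th_high):
    """Eq. 7: first time the signal leaves the band after 1 s,
    or the whole trial duration if it never does."""
    out = np.flatnonzero(((y &lt;= th_low) | (y &gt;= th_high)) &amp; (t &gt; 1.0))
    return t[out[0]] if out.size else t[-1]

def reward(trials, delta, psi, th_low=0.2, th_high=0.8):
    """Score used in Eq. 8 for one (delta, psi) combination: mean band
    distance up to the crossing time, gated by the feasibility filter.
    th_low/th_high defaults are placeholders (the actual ones are in Table 1)."""
    total = 0.0
    for t, y in trials:   # y is the control signal simulated with (delta, psi)
        t_star = crossing_time(y, t, th_low, th_high)
        mask = t &lt;= t_star
        total += distance(y[mask], th_low, th_high).mean() \
                 * feasible(y, t, th_low, th_high)
    return total
        </preformat>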
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Preliminary results</title>
      <p>For the sake of clarity, Fig. 4 shows an example of the application of the cost function described in the previous section to the control signals of a both-hands trial (upper class) in the continuous modality. The band of interest used in the proposed metric is indicated in grey. For graphical reasons, we only report the control signals achieved with four combinations of $\delta$ and $\psi$, drawn in green, purple, black, and cyan in the continuous modality. In the same figure, we also show the control signal in the discrete case, achieved via the exponential smoothing, that we use as reference and that is represented with a dashed red line. The control threshold for each class in the discrete case is marked with dashed blue lines. The best combination of $\delta$ and $\psi$ is associated with the green curve. Indeed, the control signal in cyan does not satisfy the temporal constraint because it crosses the band before 1 second. The control signal in black causes a miss, namely it crosses the threshold of the other class (the lower one). The control signal in purple performs worse than the green one.</p>
      <p>The comparison of the accuracy achieved via the discrete (exponential smoothing) and the continuous (optimised dynamical control framework) control approaches is highlighted in Table 2, which also lists the best couple of $\delta$ and $\psi$ for each subject. Coherently with the results in [18], overall, all subjects improve their performance using the optimised dynamical control framework, with the exception of S5, S6, and S11. By qualitatively analysing the control signals over the different trials, we noticed that they drop into the $[th_{low}, th_{high}]$ band, suggesting the presence of involuntary commands. Further analyses will be needed in the future.</p>
      <p>[Fig. 4: control signals for four combinations of $\delta$ and $\psi$ in the continuous case, marked in green, purple, black, and cyan. The corresponding control signal in the discrete case, taken as reference, is also reported with a dashed red line.]</p>
      <p>Table 2: the best $\delta$ and $\psi$ derived from the optimisation per subject, and the comparison of the accuracy in the discrete case (i.e., via the exponential smoothing) vs. the continuous case (i.e., via the optimised dynamical control framework, with the discrete prediction as ground truth).</p>
      <preformat>
Subject   delta   psi    Accuracy (discrete)   Accuracy (continuous)
S1        0.2     0.05   67.5%                 96.25%
S2        0.025   1.00   72.5%                 92.5%
S3        0.025   0.95   73.75%                85%
S4        0.475   0.05   92.5%                 97.5%
S5        0.425   0.05   95%                   90%
S6        0.025   1.00   92.5%                 80%
S7        0.35    0.05   65%                   76.67%
S8        0.175   0.20   66.67%                68.33%
S9        0.25    0.60   80%                   83.33%
S10       0.4     0.05   81.11%                93.33%
S11       0.325   0.05   96.67%                91.67%
      </preformat>
      <p>Then, from the detected best configurations of the two parameters for each subject, we perform a regression analysis in order to find a relation between them and verify our hypothesis. The results are displayed in Fig. 5. We found a second-degree polynomial function equal to $\psi = 6.6652 \cdot \delta^2 - 5.2772 \cdot \delta + 1.0884$. To evaluate the goodness of the achieved model, we use $R^2$, which measures the percentage of the dependent-variable variation that our model can explain. Our model has a high $R^2$, with a value greater than 81.67%, hence confirming that there is a relation between $\delta$ and $\psi$.</p>
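      <p>This regression step can be reproduced with a few lines of NumPy on the per-subject optima of Table 2 (a sketch; the exact fitting procedure used in the study is not detailed in the text):</p>
      <preformat>
import numpy as np

# Best (delta, psi) per subject, S1-S11, from Table 2.
delta = np.array([0.2, 0.025, 0.025, 0.475, 0.425, 0.025,
                  0.35, 0.175, 0.25, 0.4, 0.325])
psi = np.array([0.05, 1.00, 0.95, 0.05, 0.05, 1.00,
                0.05, 0.20, 0.60, 0.05, 0.05])

coeffs = np.polyfit(delta, psi, deg=2)     # second-degree polynomial fit
pred = np.polyval(coeffs, delta)
r2 = 1 - np.sum((psi - pred) ** 2) / np.sum((psi - psi.mean()) ** 2)
print(coeffs)  # expected to approximate [6.6652, -5.2772, 1.0884]
print(r2)      # expected to be around 0.82
      </preformat>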
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this preliminary work, we investigate how to optimise the continuous teleoperation of brain-actuated robotic devices based on the dynamical system presented in [18] using AI. With this purpose, we propose a metric that we exploit as a cost function to optimise the parameters of the dynamical system that converts the user's intention into continuous robot movements. In addition, we found a possible relation between $\delta$ and $\psi$ that facilitates their tuning and simplifies the system. The main limitation of this study is that the analyses were performed offline on the available dataset without involving new users. Future works will include the validation of such a hypothesis with an appropriate protocol for driving a powered wheelchair. Furthermore, we will also investigate the possibility of keeping the same relation in the case of an asymmetrical free force (different $\delta$ and $\psi$ for each class).</p>
    </sec>
    <sec id="sec-6">
      <title>6. Acknowledgements</title>
      <p>This research was partially supported by MIUR (Italian Ministry of Education, University and Research) under the initiative “Departments of Excellence” (Law 232/2016), by the Department of Information Engineering of the University of Padova, with the grant TONI_BIRD2020_01.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D. J.</given-names>
            <surname>McFarland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Wolpaw</surname>
          </string-name>
          ,
          <article-title>Brain-computer interfaces for communication and control</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>54</volume>
          (
          <year>2011</year>
          )
          <fpage>60</fpage>
          -
          <lpage>66</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>U.</given-names>
            <surname>Chaudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Birbaumer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramos-Murguialday</surname>
          </string-name>
          ,
          <article-title>Brain-computer interfaces for communication and rehabilitation</article-title>
          ,
          <source>Nature Reviews Neurology</source>
          <volume>12</volume>
          (
          <year>2016</year>
          )
          <fpage>513</fpage>
          -
          <lpage>525</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J. d. R.</given-names>
            <surname>Millán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Rupp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mueller-Putz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Murray-Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Giugliemma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tangermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Vidaurre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cincotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kubler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Leeb</surname>
          </string-name>
          , et al.,
          <article-title>Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges</article-title>, <source>Frontiers in Neuroscience</source> (
          <year>2010</year>
          )
          <fpage>161</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Carlson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Leeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. d. R.</given-names>
            <surname>Millán</surname>
          </string-name>
          ,
          <article-title>Brain-controlled telepresence robot by motor-disabled people</article-title>
          ,
          <source>in: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society</source>
          , IEEE,
          <year>2011</year>
          , pp.
          <fpage>4227</fpage>
          -
          <lpage>4230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , S. Wu,
          <string-name>
            <given-names>T.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <article-title>Control of a wheelchair in an indoor environment based on a brain-computer interface and automated navigation</article-title>
          ,
          <source>IEEE Transactions on Neural Systems and Rehabilitation Engineering</source>
          <volume>24</volume>
          (
          <year>2016</year>
          )
          <fpage>128</fpage>
          -
          <lpage>139</lpage>
          . doi:10.1109/TNSRE.2015.2439298.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Beraldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. d. R.</given-names>
            <surname>Millán</surname>
          </string-name>
          , E. Menegatti,
          <article-title>Shared intelligence for robot teleoperation via bmi</article-title>
          ,
          <source>IEEE Transactions on Human-Machine Systems</source>
          <volume>52</volume>
          (
          <year>2022</year>
          )
          <fpage>400</fpage>
          -
          <lpage>409</lpage>
          . doi:10.1109/THMS.2021.3137035.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Beraldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Antonello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cimolato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Menegatti</surname>
          </string-name>
          , L. Tonin,
          <article-title>Brain-computer interface meets ros: A robotic approach to mentally drive telepresence robots</article-title>
          ,
          <source>in: 2018 IEEE International Conference on Robotics and Automation (ICRA)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>4459</fpage>
          -
          <lpage>4464</lpage>
          . doi:10.1109/ICRA.2018.8460578.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhattacharyya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shimoda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hayashibe</surname>
          </string-name>
          ,
          <article-title>A synergetic brain-machine interfacing paradigm for multi-dof robot control</article-title>
          ,
          <source>IEEE Transactions on Systems, Man, and Cybernetics: Systems</source>
          <volume>46</volume>
          (
          <year>2016</year>
          )
          <fpage>957</fpage>
          -
          <lpage>968</lpage>
          . doi:10.1109/TSMC.2016.2560532.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Crea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Trigili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cordella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Baldoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Badesa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Catalán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zollo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vitiello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. G.</given-names>
            <surname>Aracil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Soekadar</surname>
          </string-name>
          ,
          <article-title>Feasibility and safety of shared eeg/eog and vision-guided autonomous whole-arm exoskeleton control to perform activities of daily living</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>8</volume>
          (
          <year>2018</year>
          )
          <fpage>10823</fpage>. URL: https://doi.org/10.1038/s41598-018-29091-5. doi:10.1038/s41598-018-29091-5.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Perroud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chavarriaga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. d. R.</given-names>
            <surname>Millán</surname>
          </string-name>
          ,
          <article-title>A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers</article-title>
          ,
          <source>Robotics and Autonomous Systems</source>
          <volume>90</volume>
          (
          <year>2017</year>
          )
          <fpage>15</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Tonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. d. R.</given-names>
            <surname>Millán</surname>
          </string-name>
          ,
          <article-title>Noninvasive brain-machine interfaces for robotic devices</article-title>
          ,
          <source>Annual Review of Control, Robotics, and Autonomous Systems</source>
          <volume>4</volume>
          (
          <year>2021</year>
          )
          <fpage>191</fpage>
          -
          <lpage>214</lpage>
          . doi:10.1146/annurev-control-012720-093904.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bekyo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Olsoe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Baxter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>6</volume>
          (
          <year>2016</year>
          ). doi:10.1038/srep38565.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>LaFleur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cassady</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Doud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shades</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Rogin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface</article-title>
          ,
          <source>Journal of Neural Engineering</source> <volume>10</volume>
          (
          <year>2013</year>
          )
          <fpage>046003</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] A. J. Doud, J. P. Lucas, M. T. Pisansky, B. He, Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface, PLoS One 6 (2011) e26322.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] S. Tortora, A. Gottardi, E. Menegatti, L. Tonin, Continuous teleoperation of a robotic manipulator via brain-machine interface with shared control, in: 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), IEEE Press, 2022, pp. 1-8. doi:10.1109/ETFA52439.2022.9921526.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] A. Satti, D. Coyle, G. Prasad, Continuous EEG classification for a self-paced BCI, in: 2009 4th International IEEE/EMBS Conference on Neural Engineering, IEEE, 2009, pp. 315-318.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] D. Coyle, J. Garcia, A. R. Satti, T. M. McGinnity, EEG-based continuous control of a game using a 3 channel motor imagery BCI: BCI game, in: 2011 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), IEEE, 2011, pp. 1-7.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] L. Tonin, F. C. Bauer, J. del R. Millán, The role of the control framework for continuous teleoperation of a brain-machine interface-driven mobile robot, IEEE Transactions on Robotics 36 (2020) 78-91. doi:10.1109/TRO.2019.2943072.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] E. S. Gardner Jr, Exponential smoothing: The state of the art, part II, International Journal of Forecasting 22 (2006) 637-666.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] G. Beraldo, S. Tortora, E. Menegatti, L. Tonin, ROS-Neuro: implementation of a closed-loop BMI based on motor imagery, in: 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020, pp. 2031-2037. doi:10.1109/SMC42975.2020.9282968.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>