Please ASTRO, can you follow me? Design of a social
assistive robot for monitoring gait parameters
Alessandra Sorrentino1 , Niccolò Vezzi1 , Carlo La Viola1 , Erika Rovini1 ,
Filippo Cavallo1,2 and Laura Fiorini1,2
1
    Department of Industrial Engineering, University of Florence, Via Santa Marta 3, Firenze, 50139, Italy
2
    The BioRobotics Institute, Scuola Superiore Sant’Anna, Viale Rinaldo Piaggio 34, Pontedera (Pisa), 50134, Italy


Abstract
This paper proposes an alternative strategy for the analysis of gait activity using a socially assistive
robot. This solution aims to be less invasive while guaranteeing an accurate evaluation of the
rehabilitation performance. In this work, we implemented a follow-me module to enable the ASTRO
robot to detect, track, and follow the patient during walking, adapting to his/her walking speed. The
robot detects the person through a 2D laser sensor and an RGB-D camera. To follow the user at a
predetermined distance, the implemented follow-me module integrates two controllers handling the
linear and angular velocities, respectively. The controllers' gains were set according to the maximum
speed attainable by the robot. The extracted gait parameters were compared with the parameters
extracted by an inertial sensor placed on the feet (SensFoot) and analyzed to characterize the best
robot configuration for the task of gait assessment. Eleven participants were recruited to perform the
tests with 3 different values of the robot's maximum speed. For each test, 4 parameters were extracted
from the laser and 10 parameters from the wearable sensors. The best configuration was found to be
the one with the highest maximum speed, 0.7 m/s, with gains 𝐾𝑝 = 1.0 and 𝐾𝑑 = 0.4 for the linear
controller and 𝐾𝑝 = 1.0 for the angular controller. Qualitative results collected at the end of the tests
also confirm 0.7 m/s as the optimal perceived maximum velocity.

Keywords
follow-me, gait parameters, socially assistive robot




1. Introduction
Nowadays, socially assistive robotics applications are extensive and cover the spectrum of needs
and wants across the human life cycle. They range from the field of physical and cognitive disorders
in children [1, 2] to the care and assistance of people suffering from cognitive decline and
associated complications [3, 4]. This wide and complex range of applications foresees the
presence of different stakeholders, i.e. professional caregivers, social service providers, medical
services, etc. Due to recent advances in the field of socially assistive robots, the range of
potential applications has greatly expanded, making them one of the most promising emerging
technologies devoted to helping and assisting citizens in daily activities at home but also in
relevant healthcare settings. In particular, over the last few years, several researchers have focused

ALTRUIST workshop, held at International Conference on Social Robotics, December 2022, Florence, IT
alessandra.sorrentino@unifi.it (A. Sorrentino)
 0000-0003-3187-810X (A. Sorrentino); 0000-0003-1745-0684 (C. L. Viola); 0000-0002-7906-9013 (E. Rovini);
0000-0001-7432-5033 (F. Cavallo); 0000-0001-5784-3752 (L. Fiorini)
                                       © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
also on the use of social robots for promoting active aging and for being used as a support in
clinical [5] and residential contexts [6, 7, 8]. Indeed, thanks to their embedded sensors, social
robots can perceive information related to human movements, body postures, and emotions –
among others – that can be linked to the clinical status of the person, thus providing decision
support for clinicians. For instance, clinicians can use social robots to acquire information while
the patients perform commonly used cognitive or physical exercises (e.g. walking). The
current gold standard for clinical motion analysis is represented by optical motion capture
systems performed in the laboratory. Unfortunately, economic, as well as environmental and
time constraints prevent the continued collection of these high-quality data. In addition, motion
data collected in the laboratory may not be able to describe the natural movements of an
individual as in the home environment [6].
Technologies based on non-wearable sensors (e.g. cameras, footpads) or wearable sensors (or
a combination of both types) are also being used to obtain information about the motor condition
of the patient. However, they are sometimes cumbersome or considered too invasive, also from
a privacy perspective. For instance, in [9], machine learning models were developed to predict
clinical gait parameters from the trajectories of 2D body poses extracted from videos using
OpenPose [10], thus predicting gait parameters and clinical decisions relying on machine
learning classifiers (i.e. CNN, Random Forest, and Ridge Regression). In [11] a real-time gait
analysis is developed using a Kinect v2 sensor, measuring variables such as step length and
angles between joints. In [12] the proposed approach relied on a deep learning technique (i.e.
Mask R-CNN) to recognize a human subject in a 2D image, then combined 3D data to measure
walking parameters such as stride length and walking speed. However, all the proposed
solutions are based on a fixed camera installed in the environment: what happens if the user does
not perform the exercise in front of the camera, or the workspace required by the exercise is larger
than the camera's field of view (e.g. if the user is walking)? In such cases the acquired parameters
can have low accuracy, so the clinician cannot rely on them. From the clinical perspective, the monitoring
of gait parameters represents a core point; indeed, alteration in walking can be linked with
some neurological disorders. In this sense, the information acquired by the robot can be used
as biomarkers for monitoring the progression of the disease or for predicting some abnormal
status.
   In this context, this work proposes an alternative solution for gait analysis using a social robot,
equipped with an RGB-D camera and a 2D laser. The idea was to develop a follow-me module
where the robot acquires information on the human’s gait while they are walking together,
so as to ensure that the user always remains within the optimal field of view of the sensors. This
solution allows the person to avoid wearing additional sensors - which can create discomfort
in the patient - while providing the assessment during the rehabilitation activities, which can
be carried out both in hospital facilities and in patient’s homes. Particularly, the aim of this
work was to design, develop and characterize a robot control module to be used for acquiring
reliable gait parameters during the walking activity, without interfering with the user’s “normal”
walking speed, while also measuring his/her level of acceptance of the technology. Within this aim, we
implemented a follow-me software module that allows the robot to detect, track, and follow
the patient during the walking activity. The RGB-D camera, together with the laser, is used to
locate the person in the environment. The presented work aims to answer the following research
questions:

   (a) RQ1: What is the optimal velocity that the robot should maintain to guarantee a reliable
       gait assessment?

   (b) RQ2: How is the optimal configuration perceived by the user in terms of acceptability?


2. Materials and Methods
2.1. ASTRO Robot
The robot used in this study is the ASTRO robot, a robotic platform designed to promote the
user's mobility within the ACCRA project [13]. The robot is mounted over a SCITOS-G5 platform,
developed by Metralabs1. SCITOS-G5 is a differential drive mobile platform with a base of
dimensions 582x7537x617 mm, equipped with two drive wheels and one caster wheel, an EC
motor with high torque, and a bumper sensor with a mechanical emergency stop. The platform is
equipped with a Sick S300 laser sensor placed at the bottom, and an Orbbec Astra RGB-D camera
placed at the height of the "neck" of the robot, as visible in Figure 1. Since ASTRO is a ROS-based
robot, the CogniDrive module, developed by MetraLabs GmbH, represents the middleware used to
receive velocity commands and relay them to the motors of the SCITOS platform.
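
To make this concrete, the following Python sketch shows how velocity commands can be published
to a ROS differential-drive base. The /cmd_vel topic name and the 10 Hz rate are illustrative
assumptions, since on ASTRO the commands are actually relayed to the motors through CogniDrive.

```python
#!/usr/bin/env python
# Minimal sketch (not the ASTRO implementation): publishing velocity commands
# to a ROS differential-drive base. The /cmd_vel topic and 10 Hz rate are
# assumptions; on ASTRO, CogniDrive relays such commands to the SCITOS motors.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('follow_me_velocity_demo')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)  # control loop frequency (assumed)

while not rospy.is_shutdown():
    cmd = Twist()
    cmd.linear.x = 0.3    # forward velocity [m/s], e.g. the LS maximum speed
    cmd.angular.z = 0.0   # rotational velocity [rad/s]
    pub.publish(cmd)
    rate.sleep()
```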

2.2. Follow-me module
The follow-me module is composed of a perception block and a controller block. The
perception part is responsible for identifying and tracking the person along the path, by using
the data streams recorded by the camera and the laser. To locate the person using the camera, the
RGB image is fed into the YOLOv3 network [14], which returns the coordinates of the bounding
box of the detected person, i.e. (x,y) coordinates (pixels). This information is then combined with
the camera calibration information to project the centroid of the person on the depth image, i.e.
z coordinate (meters). The centroid is computed as the center of the bounding box. To properly
locate the legs of the person, the leg_tracker ROS package is used to process the laser data in
real time. The data returned at the end of the process correspond to the (x,y) positions of the
centroids of the left and right legs, respectively [15].
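
As an illustration of the camera-based part of this step, the sketch below projects the bounding-box
centroid onto the depth image using a pinhole camera model. The function signature, and the
assumptions that the depth frame is aligned with the RGB image and expressed in meters, are
illustrative and not taken from the ASTRO code.

```python
def person_position(bbox, depth_image, fx, fy, cx, cy):
    """Project the centroid of a person bounding box into camera coordinates.

    Illustrative sketch: bbox = (x_min, y_min, x_max, y_max) in pixels
    (e.g. from YOLOv3), depth_image is assumed aligned with the RGB frame
    and expressed in meters, and fx, fy, cx, cy come from the camera
    calibration information.
    """
    u = int((bbox[0] + bbox[2]) / 2)   # centroid column (pixels)
    v = int((bbox[1] + bbox[3]) / 2)   # centroid row (pixels)
    z = float(depth_image[v, u])       # distance along the optical axis (m)
    x = (u - cx) * z / fx              # lateral offset from the optical axis (m)
    y = (v - cy) * z / fy              # vertical offset from the optical axis (m)
    return x, y, z
```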
   The controller part is made of two controllers, for handling the linear and angular velocity of
the robot separately. The former is a Proportional-Derivative (PD) controller that takes as input
the distance of the robot from the person to be followed, defined as the difference between
the actual distance and the ideal distance (i.e. input error). The output of the PD controller
represents the linear velocity to be imparted to the robot to keep it at a fixed distance from the
person. In this study, the fixed distance has been set to 1.5 m, which represents the distance
at which the camera can detect the full body of the person, as well as a proxemic value that
lies outside the personal space of the user [16]. To provide an answer to RQ1, we tuned the PD
controller to guarantee three different maximum linear velocities: 0.3 m/s (lowest velocity), 0.5
m/s (intermediate velocity), and 0.7 m/s (highest velocity). The proportional gain was set to 𝐾𝑝 = 1
for every velocity, while the derivative gain 𝐾𝑑 varied: 0.2 for the lowest velocity, 0.3 for the
intermediate velocity, and 0.4 for the highest velocity.
   1
       https://www.metralabs.com/en/mobile-robotscitos-g5
Figure 1: System Architecture.


   The latter is a Proportional (P) controller, in which the error is the distance between the
center of the camera image and the centroid of the person (measured along the x-axis, i.e. the
axis parallel to the image plane), and the output signal represents the angular velocity to be
imparted to the robot so as to keep the centroid of the person always in the center of the
camera's field of view. To guarantee this behavior (i.e. to correct the orientation of the robot with
respect to the person to follow), the angular controller was tuned with 𝐾𝑝 = 1. Both the linear and
angular controllers have been implemented with the simple-PID Python API2.
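
A minimal sketch of how the two controllers can be set up with the simple-pid package, using the
set-point and gains reported above, is given below. The angular speed limit, the sign conventions,
and the way the perception output is fed to the controllers are illustrative assumptions.

```python
from simple_pid import PID

DESIRED_DISTANCE = 1.5    # m: distance at which the camera sees the full body
MAX_LINEAR_SPEED = 0.7    # m/s: HS configuration (0.3 for LS, 0.5 for MS)
MAX_ANGULAR_SPEED = 0.5   # rad/s: illustrative limit, not reported in the paper

# Linear velocity: PD controller on the robot-person distance error.
# Kp = 1.0 in every configuration; Kd = 0.2 / 0.3 / 0.4 for LS / MS / HS.
linear_pid = PID(Kp=1.0, Ki=0.0, Kd=0.4, setpoint=0.0)
linear_pid.output_limits = (0.0, MAX_LINEAR_SPEED)   # never drive backwards

# Angular velocity: P controller on the lateral offset of the person's centroid
# with respect to the image center (x-axis, parallel to the camera plane).
angular_pid = PID(Kp=1.0, Ki=0.0, Kd=0.0, setpoint=0.0)
angular_pid.output_limits = (-MAX_ANGULAR_SPEED, MAX_ANGULAR_SPEED)

def compute_command(distance_to_person, lateral_offset):
    """Map the perception output to (linear, angular) velocity commands."""
    # Feeding (desired - actual) makes the PID output positive when the person
    # is farther than 1.5 m, so the robot accelerates forward to catch up.
    linear = linear_pid(DESIRED_DISTANCE - distance_to_person)
    # The sign convention for the turn depends on the camera frame; a non-zero
    # offset produces a command that re-centres the person in the field of view.
    angular = angular_pid(lateral_offset)
    return linear, angular
```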


3. Experimental protocol
The goal of this experimentation is to define the optimal follow-me robot configuration, intended
as the velocity configuration of the robot, that allows the gait parameters to be measured correctly
(RQ1) while avoiding the "disturbance" factor introduced by the presence of the robot (RQ2),
which may affect the way in which the patient walks (i.e. naturalness). The "10 meters" clinical
protocol was chosen for this experiment since it is widely used in clinical practice. Within the
test, the user is requested to walk ten meters along a straight path. At the beginning of the
experimental protocol, the user was asked to wear SensFoot on both feet [17], as shown in
Figure 2. These sensors were used only for data comparison, without including them in the final
prototype. The experimental protocol was composed of 4 trials, each one dedicated to a particular
combination of robot velocity and controller gains (reported in Table ??), as follows:

   1. NM (Not Move): The robot remains stationary at the starting position.

   2. LS (Low Speed): The robot follows the user with a max linear velocity of 0.3 m/s.

   3. MS (Medium Speed): The robot follows the user with a max linear velocity of 0.5 m/s.

   4. HS (High Speed): The robot follows the user with a max linear velocity of 0.7 m/s.

   2
       https://pypi.org/project/simple-pid/

Figure 2: System Architecture.


   During each trial, the participant was asked to walk in front of the robot. At the end of each
trial, each participant evaluated how the presence of the robot affected his/her walk on a 10-point
Likert scale (where 1 meant that the presence of the robot did not affect the walking and a value
of 10 meant that the robot affected walking a lot). The gait assessment was performed offline by
considering the data recorded by the laser and the SensFoot worn by the participant.


4. Participants
For this experimentation, a total of 11 young participants were recruited from Ph.D. students
and researchers employed at the Department of Industrial Engineering of the University of
Florence. The cohort was composed of 6 men and 5 women (avg age: 31.27; std: ±8.78). All
the participants signed the informed consent before entering the study. The tests were
performed in accordance with the Declaration of Helsinki and the data storage is compliant
with the GDPR regulation.


5. Data analysis
Data from the laser and inertial sensors were processed offline, using Matlab®R2020b (The
MathWorks, Inc., USA). Data from the wearable sensors were pre-processed using a fourth-order
low-pass digital Butterworth filter with a 5 Hz cut-off frequency for eliminating high-frequency
noise. Custom-made algorithms [17] were then applied to extract parameters reported in Table
1. The acquired laser data were transformed from the robot's moving reference system to a
fixed reference system, coincident with the initial position of the robot. This step guarantees
the correct measurement of the gait parameters with a moving system. Then, to properly identify
the centroids of the left and right legs, the laser data were separated according to a threshold
along the y-axis (i.e. the axis perpendicular to the direction of motion). Finally, the laser data
were segmented based on the inertial data, thus identifying the various phases of the walk, as
in [18]. At the end of the feature extraction process, a total of 14 gait-related parameters were
extracted (as shown in Table 1): 4 parameters from the laser and 10 from the SensFoot data.
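
The pre-processing above was carried out offline in Matlab; as a rough Python equivalent of the
wearable-sensor filtering step, the sketch below applies a fourth-order, 5 Hz low-pass Butterworth
filter with SciPy. The 100 Hz sampling rate and the zero-phase filtfilt choice are assumptions, not
details taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0     # Hz, assumed sampling rate of the inertial sensors
CUTOFF = 5.0   # Hz, cut-off frequency reported above
ORDER = 4      # fourth-order low-pass Butterworth filter

def lowpass(signal, fs=FS, cutoff=CUTOFF, order=ORDER):
    """Low-pass filter one inertial channel to remove high-frequency noise."""
    b, a = butter(order, cutoff / (fs / 2.0), btype='low')
    return filtfilt(b, a, signal)  # zero-phase filtering (assumed choice)

# Example on a synthetic noisy channel (illustrative, not real SensFoot data).
t = np.arange(0.0, 10.0, 1.0 / FS)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)
clean = lowpass(raw)
```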
   To answer RQ1 and RQ2, we compared three different follow-me configurations (i.e. varying
velocity), considering a reference configuration. We considered the stationary condition (NM)
Table 1
Gait parameters extracted from the laser and inertial sensors for gait assessment. Each parameter is
computed for the left and right foot, separately.
      Parameter        Acronym                       Description                  Devices
    Step number        GSTRD                    Number of steps                   Laser/SensFoot
     Step Length        StL                  Average of step lengths              Laser
   Gait Swing Time     GSWT            Average duration of Swing phase            SensFoot
   Gait Stance Time    GSTT            Average duration of Stance phase           SensFoot
   Gait Stride Time   GSTRDT          Average duration of the Stride phase        SensFoot
        Foot lift     GAngExc     Average of the angular excursion of the ankle   SensFoot


as a reference for undisturbed gait performance. When stationary, the robot cannot disturb
the patient in any way. However, this condition may affect the reliability of the parameters’
measurements when the user is too far from the robot. In this case, the gait parameters extracted
with the inertial sensors were used as references. To properly identify the differences between
the parameters computed within the moving robot configurations and the stationary robot, the
absolute relative error between the two configurations has been computed as:
$$ e_{PAR\_i} = \left| \frac{\mu_i - \mu_{NM}}{\mu_{NM}} \right| \qquad (1) $$
   where PAR is the parameter of interest, 𝜇𝑖 is its average value computed over the trials of the
follow-me configuration 𝑖 (with 𝑖 ∈ {LS, MS, HS}), and 𝜇𝑁𝑀 is the corresponding average over the
stationary (NM) trials.
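
For clarity, a small Python helper implementing Eq. (1) could look as follows; the function and
variable names are illustrative, and the numbers in the usage example are made up rather than the
paper's data.

```python
import numpy as np

def relative_error(values_config, values_nm):
    """Absolute relative error of Eq. (1) for one gait parameter.

    values_config: parameter values measured in one follow-me configuration
                   (LS, MS or HS), one entry per trial.
    values_nm:     parameter values measured with the stationary robot (NM).
    """
    mu_i = np.mean(values_config)
    mu_nm = np.mean(values_nm)
    return abs((mu_i - mu_nm) / mu_nm)

# Illustrative usage with made-up stance times [s]: HS trials vs the NM reference.
print(relative_error([0.61, 0.63, 0.60], [0.60, 0.62, 0.61]))
```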


6. Results
Due to the presence of misclassified laser data, the data of some participants were excluded.
In the end, the analysis included 8 participants for the trial with the stationary robot
(NM), 10 for the trial with low speed (LS), 8 for the medium speed (MS), and 7 for the high
speed (HS). Comparing the number of steps extracted from the laser with the measurements
estimated by the inertial sensors, it emerged that there was a loss of laser data in the stationary
configuration (NM). Namely, after a certain distance (ca. 4 m), it is not possible to identify
the legs of the users from the laser data. As shown in Figure 3, this phenomenon did not
happen in the remaining trials, where the number of steps measured by the two devices
coincided. This result validates the idea of integrating a follow-me module to perform a
valid gait assessment, directly using the data recorded by the robot's sensors.
   Considering the gait parameters extracted from the inertial data, the results showed that the
higher the velocity of the follow-me robot, the closer the parameters extracted with the moving
configuration are to those of the stationary configuration. As shown in Figure 4(a)-(b), the stance
and stride durations computed during the HS trials are the closest to the stance and stride durations
estimated during NM. Namely, the measurement error of the stance time between NM and HS is
0.01, and it increases when considering MS (𝑒𝑆𝑇_𝑀𝑆 = 0.17) and LS (𝑒𝑆𝑇_𝐿𝑆 = 0.43). Similarly, the
error of the stride time is lowest for the HS trials, and it increases in the other cases, as reported in
Table 2. Regarding the swing phase (see Figure 4(c)), the high-velocity configuration
Figure 3: Graph plot of the GSTRD parameter extracted with the two devices: (a) for right foot, (b) for
left foot.




Figure 4: Graph plots of the gait parameters measured within each robot configuration: (a) stance time;
(b) stride time; (c) swing time; (d) ankle excursion.


reported measurements closer to the stationary configuration (𝑒𝑆𝑊_𝐻𝑆 = 0.03) with respect to
the MS and LS trials (𝑒𝑆𝑊_𝑀𝑆 = 0.22; 𝑒𝑆𝑊_𝐿𝑆 = 0.56). Considering the step length, it emerged
that as the velocity increased, the step length increased as well. As shown in Figure 5, the
average step length was 0.29 m for LS, 0.505 m for MS, and 0.695 m during HS. Especially
for the high speed, a very high standard deviation of the step length emerged. This may be due
to the small dataset and/or because the difference in step length between a taller and a shorter
person is more noticeable at high velocities than at lower ones.
   Analyzing the user feedback at the end of each trial (Table 3), it emerged that as the speed
increases, the perceived naturalness of walking increases as well. Namely, the participants felt
that the presence of the robot was a disturbing factor during the LS and MS trials. On the contrary,
Table 2
Mean relative error (averaged over the left and right foot) of the parameters extracted from the
inertial data, for each moving configuration versus the reference test with the stationary robot.
                      Configuration      GSTRDT    GSTT    GSWT    GAngExc
                       LS (0.3 m/s)       1.37     0.43    0.56     0.28
                       MS (0.5 m/s)       0.53     0.17    0.22     0.13
                       HS (0.7 m/s)       0.11     0.01    0.03     0.03

Figure 5: Step length parameter of the right and left foot with the standard deviation.


Table 3
Average user feedback for the 4 robot configurations.
                        Configuration            Average Score (± standard deviation)
                            NM                                          1 (± 0.76)
                        LS (0.3 m/s)                                    8 (± 0.73)
                        MS (0.5 m/s)                                    6 (± 0.75)
                        HS (0.7 m/s)                                    3 (± 0.80)


the participants did not perceive the robot as a disturbance during the stationary condition and
the follow-me configuration with high speed. In these two configurations, the average user
feedback was 1 and 3, respectively, implying that the presence of the robot did not influence the
walking activity.


7. Conclusion
This work proposed an alternative method for performing gait assessment by using a mobile
robotic platform that follows the patient during the walking activity. This solution represents
a non-invasive choice since the gait parameters can be directly extracted from the sensors
mounted on the robotic platform. Furthermore, it represents a plug-and-play choice since the
robot can easily be moved from one clinical environment to another. The main innovation
of this work lies in the adoption of a follow-me module, which increases the autonomy of
the robot while guaranteeing a correct gait assessment. This module has been implemented
by integrating current state-of-the-art tools for people detection. The multi-modality of
the robot's perception assures the robustness of the implemented system. The results of the
preliminary tests with 11 participants highlighted that the follow-me configuration that best fits the
needs of the gait assessment (RQ1) is the one with high velocity (i.e. HS). In fact, the analysis
showed that the configuration characterized by the highest speed most closely approaches
the gold standard configuration, represented by the stationary robot, in terms of accuracy of the
gait parameters. Considering the users' feedback, the HS configuration is also the one that does
not compromise the naturalness of the walking performance (RQ2). In fact, it is the closest to the
reference stationary configuration in terms of the perceived experience of walking at natural
velocity. This was a fundamental step to verify before using the robot in a clinical setting, since
all the parameters of the robot controller should be properly set up. Using a mobile robot for gait
assessment also permits the computation of additional gait parameters. In this work, we estimated
additional spatial parameters (i.e. step length) which cannot be computed using common wearable
sensors, which extract with high accuracy only temporal parameters (e.g., stride, swing, and
stance time).
   In future work, we would also like to increase the number of parameters that can be
extracted with the sensors mounted on the ASTRO robot, by also exploiting the information
coming from the camera, as in [19]. Furthermore, other tests will be planned to verify the
accuracy of the extracted parameters against an optical system, also increasing the number of
participants. This solution has some limitations, mainly related to the context of use. In fact,
it is robust only in controlled environments, where only one person is present in front of the
robot. It may need further tuning for crowded environments, where more people are in front of
the robot, or for clinical tests involving non-straight paths. To overcome these limitations, it will
be necessary to integrate the capability of recognizing the target person in crowded environments
and a navigation module to avoid possible obstacles on the path.


References
 [1] S. Jain, B. Thiagarajan, Z. Shi, C. Clabaugh, M. J. Matarić, Modeling engagement in long-
     term, in-home socially assistive robot interventions for children with autism spectrum
     disorders, Science Robotics 5 (2020) eaaz3791.
 [2] S. Rossi, S. J. Santini, D. Di Genova, G. Maggi, A. Verrotti, G. Farello, R. Romualdi, A. Alisi,
     A. E. Tozzi, C. Balsano, et al., Using the social robot nao for emotional support to children
     at a pediatric emergency department: Randomized clinical trial, Journal of Medical Internet
     Research 24 (2022) e29656.
 [3] M. Valentí Soler, L. Agüera-Ortiz, J. Olazarán Rodríguez, C. Mendoza Rebolledo,
     A. Pérez Muñoz, I. Rodríguez Pérez, E. Osa Ruiz, A. Barrios Sánchez, V. Herrero Cano,
      L. Carrasco Chillón, et al., Social robots in advanced dementia, Frontiers in Aging Neuroscience
      7 (2015) 133.
 [4] A. Sorrentino, L. Fiorini, C. La Viola, F. Cavallo, Design and development of a social
     assistive robot for music and game activities: a case study in a residential facility for
     disabled people, in: 2022 44th Annual International Conference of the IEEE Engineering
     in Medicine & Biology Society (EMBC), IEEE, 2022, pp. 2860–2863.
 [5] G. D’Onofrio, D. Sancarlo, M. Raciti, M. Burke, A. Teare, T. Kovacic, K. Cortis, K. Murphy,
     E. Barrett, S. Whelan, et al., Mario project: validation and evidence of service robots for
     older people with dementia, Journal of Alzheimer’s Disease 68 (2019) 1587–1601.
 [6] R. Bevilacqua, E. Felici, F. Cavallo, G. Amabili, E. Maranesi, Designing acceptable robots
     for assisting older adults: a pilot study on the willingness to interact, International Journal
     of Environmental Research and Public Health 18 (2021) 10686.
 [7] L. Fiorini, E. Rovini, S. Russo, L. Toccafondi, G. D’Onofrio, F. G. Cornacchia Loizzo,
     M. Bonaccorsi, F. Giuliani, G. Vignani, D. Sancarlo, et al., On the use of assistive technology
     during the covid-19 outbreak: Results and lessons learned from pilot studies, Sensors 22
     (2022) 6631.
 [8] L. Fiorini, A. Sorrentino, M. Pistolesi, C. Becchimanzi, F. Tosi, F. Cavallo, Living with a
     telepresence robot: results from a field-trial, IEEE Robotics and Automation Letters 7
     (2022) 5405–5412.
 [9] Ł. Kidziński, B. Yang, J. L. Hicks, A. Rajagopal, S. L. Delp, M. H. Schwartz, Deep neural
     networks enable quantitative movement analysis using single-camera videos, Nature
     communications 11 (2020) 1–10.
[10] Z. Cao, T. Simon, S.-E. Wei, Y. Sheikh, Realtime multi-person 2d pose estimation using
     part affinity fields, in: Proceedings of the IEEE conference on computer vision and pattern
     recognition, 2017, pp. 7291–7299.
[11] A. de Queiroz Burle, T. B. de Gusmão Lafayette, J. R. Fonseca, V. Teichrieb, A. E. F. Da Gama,
     Real-time approach for gait analysis using the kinect v2 sensor for clinical assessment
     purpose, in: 2020 22nd Symposium on Virtual and Augmented Reality (SVR), IEEE, 2020,
     pp. 144–153.
[12] Y. Li, P. Zhang, Y. Zhang, K. Miyazaki, Gait analysis using stereo camera in daily environ-
     ment, in: 2019 41st Annual International Conference of the IEEE Engineering in Medicine
     and Biology Society (EMBC), IEEE, 2019, pp. 1471–1475.
[13] L. Fiorini, K. Tabeau, G. D’Onofrio, L. Coviello, M. De Mul, D. Sancarlo, I. Fabbricotti,
     F. Cavallo, Co-creation of an assistive robot for independent living: Lessons learned on
     robot design, International Journal on Interactive Design and Manufacturing (IJIDeM) 14
     (2020) 491–502.
[14] J. Redmon, A. Farhadi, Yolov3: An incremental improvement, arXiv preprint
     arXiv:1804.02767 (2018).
[15] A. Leigh, J. Pineau, N. Olmedo, H. Zhang, Person tracking and following with 2d laser
     scanners, in: 2015 IEEE international conference on robotics and automation (ICRA), IEEE,
     2015, pp. 726–733.
[16] E. Hall, The Hidden Dimension, Anchor Books, Doubleday, 1966.
[17] E. Rovini, C. Maremmani, F. Cavallo, A wearable system to objectify assessment of motor
     tasks for supporting parkinson’s disease diagnosis, Sensors 20 (2020) 2630.
[18] L. Fiorini, G. D’Onofrio, E. Rovini, A. Sorrentino, L. Coviello, R. Limosani, D. Sancarlo,
     F. Cavallo, A robot-mediated assessment of tinetti balance scale for sarcopenia evaluation in
     frail elderly, in: 2019 28th IEEE International Conference on Robot and Human Interactive
     Communication (RO-MAN), IEEE, 2019, pp. 1–6.
[19] L. Fiorini, L. Coviello, A. Sorrentino, D. Sancarlo, F. Ciccone, G. D’Onofrio, G. Mancioppi,
     E. Rovini, F. Cavallo, User profiling to enhance clinical assessment and human–robot
     interaction: A feasibility study, International Journal of Social Robotics (2022) 1–16.