<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>AI-based Semiautonomous Control Strategy for upper-limb prostheses</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gianmarco Cirelli</string-name>
          <email>gianmarco.cirelli@unicampus.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Tamantini</string-name>
          <email>christian.tamantini@cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luigi Pietro Cordella</string-name>
          <email>cordel@unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca Cordella</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Italy.</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Workshop</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Upper Limb Prosthesis, Artificial Intelligence, Computer Vision, Semiautonomous Control Strategy</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>21</institution>
          ,
          <addr-line>Rome</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Cognitive Sciences and Technologies, National Research Council of Italy</institution>
          ,
          <addr-line>Via Giandomenico Romagnosi 18a, Rome</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Unit of Advanced Robotics and Human-Centered Technologies, Università Campus Bio-Medico di Roma</institution>
          ,
          <addr-line>Via Alvaro del Portillo</addr-line>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Naples Federico II</institution>
          ,
          <addr-line>Via Claudio 21, 80125 Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Artificial Intelligence-based semiautonomous control strategies (AI-based SCS) could significantly improve the reliability and naturalness of prosthetic hand control. The integration of a computer vision system (CVS) and the user's motion intention allows the prosthetic hand to autonomously recognize the object to be grasped and to select the appropriate hand posture, with the user in charge of initiating the execution of the grasp.</p>
      </abstract>
      <kwd-group>
        <kwd>Upper Limb Prosthesis</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Computer Vision</kwd>
        <kwd>Semiautonomous Control Strategy</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The loss of an upper limb greatly affects an individual's quality of life, and existing prosthetic devices that
rely solely on electromyographic (EMG) signals have limitations in terms of adaptability and control
[
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Traditional threshold-based EMG control is sequential and lacks intuitiveness, allowing users to
manage only a limited number of hand gestures, while muscle pattern recognition techniques are often
inconsistent [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], leading to usability issues and increased abandonment of prostheses. Semiautonomous
control strategies (SCS) based on Artificial Intelligence (AI) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] have been proposed to alleviate cognitive
demands and improve the accuracy and intuitiveness of prosthetic control. More specifically, over the
past decade, the use of visual information from a Computer Vision System (CVS) to assist prosthesis
users in selecting the most appropriate configuration has gained increasing interest within the scientific
community [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Current methods in the literature use a combination of RGB cameras and ultrasonic sensors to classify
hand gestures and determine wrist orientation [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], or they incorporate RGB-D cameras [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. However,
the size and weight of these setups hinder their practical integration into prosthetic devices. To optimize
grasp selection and reduce computational overhead, convolutional neural networks (CNNs) have been
applied to classify RGB images [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], but this often comes at the expense of wrist orientation estimation.
      </p>
      <p>
        This study introduces a novel AI-based SCS that simultaneously takes into account the user's intended motion
and the configuration of the hand and wrist [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Unlike previous approaches, in which the
system autonomously determines the hand gesture, this method allows the user to actively select the
grasp while the vision system adjusts wrist orientation and offers correction suggestions.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Materials and Methods</title>
      <sec id="sec-2-1">
        <title>2.1. Semiautonomous Control Strategy (SCS)</title>
        <p>The image of the scene captured by the CVS is used as input for the Object Detection module. This
module identifies objects in the scene and associates each with a bounding box (x and y coordinates of
the center, width, and height of the bounding box), a class label, and a confidence score, representing
the probability that the object has been correctly classified. This information is structured in a tensor
with one row per detected object and six columns holding the values listed above. Objects
are sorted by descending confidence scores.</p>
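        <p>As a reference, the following minimal Python sketch shows how such a per-object detection tensor can be obtained and sorted by descending confidence. It assumes the PyTorch Hub build of YOLOv5s and an image read with OpenCV; the file name and variable names are illustrative.</p>
        <preformat>
# Minimal sketch: per-object detection tensor sorted by confidence.
# Assumes the PyTorch Hub build of YOLOv5s; "scene.jpg" and variable names are illustrative.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # COCO-pretrained weights

frame_bgr = cv2.imread("scene.jpg")                      # frame captured by the CVS
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)

results = model(frame_rgb)
# One row per detected object:
# [x_center, y_center, width, height, confidence, class_id]
detections = results.xywh[0]

# Sort rows by descending confidence score (column 4)
order = torch.argsort(detections[:, 4], descending=True)
detections = detections[order]
        </preformat>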
        <p>
          Among the several CNN models in the literature [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], YOLOv5 (You Only Look Once) [
          <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
          ] in its
small version was selected for object detection due to its balance between classification accuracy and
processing speed, making it ideal for real-time applications. The model was trained on the Microsoft
COCO dataset [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], which contains a wide range of objects in various contexts. Training was carried out
on the Ultralytics platform [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], with the following parameters: 100 epochs of training with an
automatically optimized batch size of 32 and an image size of 640 × 640, using the Stochastic gradient
descent (SGD) optimizer (learning rate 0.01, momentum 0.9) and automatic mixed precision enabled.
Data augmentation techniques such as Blur, MedianBlur, Grayscale conversion, and CLAHE (Contrast
Limited Adaptive Histogram Equalization) were applied to enhance image contrast and improve model
performance. It should be noted that the model was not fine-tuned on a bespoke data set of images
of the objects included in the experimental validation of the proposed approach. Instead, the version
available online was used for these analyses.
        </p>
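        <p>The augmentations listed above correspond to standard Albumentations transforms. A hedged sketch of a comparable pipeline follows; the application probabilities are not reported in the text and are assumptions.</p>
        <preformat>
# Sketch of an augmentation pipeline comparable to the one described above.
# The probabilities (p=...) are assumptions; they are not reported in the text.
import albumentations as A

augment = A.Compose([
    A.Blur(blur_limit=3, p=0.1),
    A.MedianBlur(blur_limit=3, p=0.1),
    A.ToGray(p=0.1),                 # grayscale conversion
    A.CLAHE(clip_limit=2.0, p=0.1),  # Contrast Limited Adaptive Histogram Equalization
])

# augmented_rgb = augment(image=frame_rgb)["image"]
        </preformat>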
        <p>The vision-based control strategy allows for the assignment of object-specific grasp configurations,
starting from the macro-categories of grasp detected by the EMG classifier. In particular, the hand
gestures that the proposed SCS can handle are as follows: Prismatic, Thumb Adducted, Thumb Abducted,
Index Finger Extension, Fixed Hook, Spherical, 2-Digits, 3-Digits, Lateral, Pointing, and Rest for the
hand, and Pronation and Supination for the wrist. This approach ensures high classification performance by
limiting the number of EMG classes while enhancing the ability of the prosthesis to perform specific grasps
based on the detected objects.</p>
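        <p>Using the object-gesture pairings listed in Section 2.2, this mapping can be encoded, for example, as a simple lookup table keyed by COCO class names (a sketch; the actual implementation is not specified in the text):</p>
        <preformat>
# Object-to-gesture mapping, following the pairings listed in Section 2.2.
# Keys are COCO class names; values are the hand gestures handled by the SCS.
GRASP_MAP = {
    "fork": "Lateral", "spoon": "Lateral",
    "keyboard": "Pointing", "mouse": "Pointing",
    "sports ball": "Spherical",
    "book": "2-Digits", "cup": "2-Digits",
    "scissors": "3-Digits", "wine glass": "3-Digits",
    "cell phone": "Prismatic", "remote": "Prismatic",
    "bottle": "Thumb Adducted",
    "umbrella": "Thumb Abducted",
    "knife": "Index Finger Extension",
    "backpack": "Fixed Hook", "suitcase": "Fixed Hook",
}
        </preformat>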
        <p>
          The information from the Object Detection module is then fed into the Grasp Selection module,
where it is compared with the hand gestures predicted by the EMG classifier. The module then handles
the following cases: (i) No objects detected, in which the control algorithm returns a "Rest" gesture;
(ii) Non-coherence, where the visual data and the EMG output do not match; in this case, priority is given
to the visual data, selecting the gesture for the object with the highest confidence score; (iii) Coherence,
in which only objects associated with the grasp recognized by the EMG classifier are considered, and
priority is given to those in the central part of the image. Moreover, if multiple objects are detected in
the central region of the image, only the one with the highest confidence level is selected, because it is
assumed that the user aims the camera towards the object they want to grab [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
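        <p>A minimal sketch of this decision logic follows (function and variable names are illustrative; detections are assumed to be sorted by descending confidence, with the predicted class name and a flag marking objects in the central region of the image):</p>
        <preformat>
# Sketch of the Grasp Selection logic described above (names are illustrative).
# `detections` is a list of dicts sorted by descending confidence, e.g.
# {"cls_name": "cup", "conf": 0.91, "in_center": True}.
def select_grasp(detections, emg_gesture, grasp_map):
    if not detections:                            # no objects detected
        return "Rest", None

    # Keep only objects whose associated grasp matches the EMG-predicted gesture
    coherent = [d for d in detections if grasp_map.get(d["cls_name"]) == emg_gesture]

    if not coherent:
        # Non-coherence: priority to the visual data -> highest-confidence object
        best = detections[0]
        return grasp_map.get(best["cls_name"], "Rest"), best

    # Coherence: prefer objects in the central region of the image;
    # among those, the one with the highest confidence is taken
    central = [d for d in coherent if d["in_center"]]
    best = central[0] if central else coherent[0]
    return emg_gesture, best
        </preformat>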
        <p>
          The Orientation Estimation module continuously calculates the wrist P/S angle by segmenting the
Region of Interest (ROI) of the selected object and applying Principal Component Analysis (PCA).
Focusing on the ROI improves segmentation accuracy by isolating the target object and reducing
computational load. The ROI is converted to grayscale and segmented using Otsu’s thresholding
method[
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], followed by morphological operations of closure to remove residual noise.
        </p>
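        <p>A minimal OpenCV sketch of this segmentation step is given below; the size of the structuring element is an assumption.</p>
        <preformat>
# Sketch of the ROI segmentation: grayscale, Otsu thresholding, morphological closing.
# `roi_bgr` is the crop of the selected object's bounding box; the 5x5 kernel is an assumption.
import cv2
import numpy as np

def segment_roi(roi_bgr):
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # remove residual noise
    return mask
        </preformat>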
        <p>
          Contours are detected, and only those within 5–95% of the ROI area are considered for PCA so that
it is only applied to the segmented object. PCA is used to determine the wrist P/S angle, which is
calculated as the angle between the first Principal Component (PC) of the object and the  ⃗-axis of the
image [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. For objects without a principal axis, like spheres, a default P/S angle of 90° is used. Special
objects, such as a mouse, require a different approach where the first PC is aligned with the finger
longitudinal axis.
        </p>
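        <p>A possible implementation of this orientation estimate is sketched below; the image axis used as the angular reference is an assumption, as is the use of OpenCV's PCA routine.</p>
        <preformat>
# Sketch of the PCA-based orientation estimate: contours filtered to 5-95% of the
# ROI area, then the angle of the first principal component with respect to the
# horizontal image axis (the reference axis is an assumption).
import cv2
import numpy as np

def estimate_ps_angle(mask):
    roi_area = mask.shape[0] * mask.shape[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours
                if 0.05 * roi_area &lt;= cv2.contourArea(c) &lt;= 0.95 * roi_area]
    if not contours:
        return 90.0  # default P/S angle, e.g. for objects without a principal axis

    pts = np.vstack([c.reshape(-1, 2) for c in contours]).astype(np.float64)
    _, eigenvectors = cv2.PCACompute(pts, mean=None)
    first_pc = eigenvectors[0]  # first Principal Component
    return float(np.degrees(np.arctan2(first_pc[1], first_pc[0])))
        </preformat>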
        <p>The proposed SCS recognizes the reaching phase when a 50% increase in the object ROI is detected
and then triggers the prosthesis to execute the selected hand gesture.</p>
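        <p>This trigger can be expressed compactly as a comparison between the current and a reference ROI area (a sketch; variable names are illustrative):</p>
        <preformat>
# Sketch of the reaching-phase trigger: a 50% growth of the object ROI area
# with respect to its reference value starts the execution of the selected gesture.
def reaching_detected(current_roi_area, reference_roi_area, growth=0.5):
    return current_roi_area >= (1.0 + growth) * reference_roi_area
        </preformat>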
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Experimental Evaluation</title>
        <p>
          The vision-based control algorithm ran on a Raspberry Pi 4 Model B, a single-board computer suitable for
embedded applications. Its desktop was remotely managed via Virtual Network Computing (VNC), and the
board was powered by a portable USB-C power bank (5 V DC, 3 A). The Arducam 16MP High-Resolution camera
was chosen as the RGB camera for its 16 MP resolution and autofocus. To achieve a good compromise
between the computational burden of the algorithm and performance, a 320 × 240 pixel image resolution
was chosen [
          <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
          ].
        </p>
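        <p>For illustration, a generic OpenCV capture-and-resize sketch at the chosen working resolution is shown below; the actual camera interface on the Raspberry Pi (Arducam driver stack) may differ and is an assumption here.</p>
        <preformat>
# Generic capture sketch at the 320 x 240 working resolution.
# The camera backend is an assumption; the Arducam stack on the Pi may require another interface.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

ok, frame_bgr = cap.read()
if ok:
    frame_bgr = cv2.resize(frame_bgr, (320, 240))  # enforce the working resolution
cap.release()
        </preformat>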
        <p>For evaluating orientation estimation, the Vicon VERO system, a marker-based optical motion capture solution,
was employed. It consists of eight compact infrared cameras with 2.2 MP resolution, set to acquire data
at 100 Hz. The cameras are strategically positioned to track reflective markers on objects, allowing for
accurate 3D motion reconstruction against a predefined reference frame.</p>
        <p>The Experimental Protocol is composed of two sequential sessions, which are detailed in the following.</p>
        <p>The first session focused on evaluating the performance of the proposed control algorithm in
accurately classifying various objects presented individually, as well as its computational cost in terms of the time
needed to execute the control pipeline. The camera was positioned above the objects at a predefined
angle to simulate real-world conditions, and 16 objects were tested, corresponding to the chosen hand
gestures: Lateral (fork and spoon), Pointing (keyboard and mouse), Spherical (sports ball), 2-digits
Precision (book and cup), 3-digits Precision (scissors and wine glass), Prismatic (cell phone and remote),
Thumb Adducted (bottle), Thumb Abducted (umbrella), Index Finger Extension (knife), and Fixed Hook
(backpack and suitcase). Each object was recorded in 5 trials across different configurations, capturing
25 frames per trial for a total of 125 samples. Additionally, two markers were positioned on each object
to provide orientation data (except for the ball), along with three markers on the camera module to define
the image plane. In each test, the object was rotated by a predetermined angular displacement to ensure
comparability across different objects. The capacity of the proposed control algorithm to estimate wrist
orientation was assessed by comparing its calculated angular displacement with data from the Vicon
system. Marker positions were processed in MATLAB to compute the camera plane and the object
Principal Component.</p>
        <p>Key performance indicators (KPIs) were extracted from the first experimental session to evaluate
the proposed approach quantitatively. In detail, the computed KPIs were:
• Accuracy in Object and Grasp Classification: the correspondence between the real object (or hand
gesture) and the predicted one.
• Time to Execute the Control pipeline: it quantifies, in seconds, the computational load of
the algorithm.
• Mean Angular Error: it assesses the performance of the proposed SCS in correctly
estimating angular displacements, where the single Angular Error (AE) is defined as
AE = |Δ_Vicon − Δ_SCS|, (1)
where Δ_Vicon and Δ_SCS are the angular displacements computed by Vicon VERO and the SCS,
respectively.
• Estimation Stability: it is evaluated as the standard deviations of the angles computed by the
proposed algorithm and by the Vicon VERO system. Higher total standard deviation values per setup
indicate lower stability.</p>
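        <p>A short numerical sketch of how the classification accuracy and the Angular Error of Eq. (1) can be computed from per-trial data follows (variable names are illustrative):</p>
        <preformat>
# Sketch of the KPI computation (names are illustrative).
import numpy as np

def classification_accuracy(true_labels, predicted_labels):
    true_labels = np.asarray(true_labels)
    predicted_labels = np.asarray(predicted_labels)
    return float(np.mean(true_labels == predicted_labels))

def angular_error(delta_vicon_deg, delta_scs_deg):
    # Eq. (1): AE = |Delta_Vicon - Delta_SCS| for a single angular displacement
    return abs(delta_vicon_deg - delta_scs_deg)

def mean_angular_error(deltas_vicon_deg, deltas_scs_deg):
    errors = [angular_error(v, s) for v, s in zip(deltas_vicon_deg, deltas_scs_deg)]
    return float(np.mean(errors))
        </preformat>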
        <p>In the second session of the Experimental Protocol, the system was tested in a more complex and realistic
scenario, to evaluate the ability of the algorithm to align user intentions with the vision system, both
when there is a discrepancy between the vision system and the EMG output, and when multiple objects are
framed. Hence, the Success Rate was considered to evaluate the system in this phase, defined as the number
of trials in which a correct grasp is associated with the scene according to the simulated user intention.</p>
        <p>The experimental setup in this part of the Experimental Protocol is shown in Figure 2.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and Discussion</title>
      <p>Figure 3 shows the output from each of the modules of the implemented SCS. As illustrated, the system
can correctly recognize the object, segment it, and estimate its orientation in the image plane.</p>
      <p>Figure 4 reports the results obtained in the first session of the experimental protocol for the object
and grasp classification accuracies and the time to execute the control pipeline. Specifically, the system
reached 97.98% accuracy in correctly classifying objects, and 99.81%
for the grasp. The grasp classification accuracy was greater than or equal to the object classification accuracy, meaning that objects linked to a specific hand gesture
category are sometimes misclassified as others that can be manipulated using the same gesture. This
suggests that the proposed approach is not significantly impacted by object misclassification.</p>
      <p>Regarding the time to execute the control pipeline, it was found that the average time required was
0.483 ± 0.169 s. Moreover, Figure 4(C) underlines the differences among the single parts of the proposed SCS:
the Object Detection module is the one that most heavily impacts the algorithm speed, as evidenced by
the three orders of magnitude difference between the times recorded for the object detection module
and the others. Considering the percentage of each single step over the total time to execute the control
pipeline, the Detection, the Selection, the Segmentation, and the Orientation steps take 97.55%, 0.05%, 1.1%,
and 1.30%, respectively.</p>
      <p>Concerning the Mean Angular Error and the Estimation Stability, the proposed SCS grounded on the CVS provides promising results, with
an average angular error of 16.26 ± 8.62° and a stability value of 0.2, respectively. The obtained results seem to be acceptable
for the proposed application, since this angular error can be easily compensated by the other arm
joints.</p>
      <p>In the second session, the ability of the system to respond to two complex scenarios was tested.
Specifically, Figure 5 shows frames acquired by the CVS during these two tests. In the first one, multiple
objects were framed, and as the user intent changed, the proposed SCS was always able to select the
object associated with that macro-category of grasps. In the second test, by contrast, the input coming from
the EMG classifier was constant, while the framed object changed. In this case, the system was capable
of correcting the output by considering the visual information.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>In this study, an AI-based SCS for hand-wrist prostheses grounded on CVS was developed and tested.
It combines a convolutional neural network (CNN)-based object detection module, a grasp selection
module, and an automatic thresholding algorithm for grasp selection and wrist orientation estimation.
YOLOv5s was chosen as the deep learning model for detection, and it was trained on the COCO dataset.
By integrating external visual data from the CVS with user intent through simulated EMG signals, the
system aims to improve prosthesis control. The results demonstrate high accuracy in object and grasp
classification (over 97%), with an average Time to Execute the Control pipeline of 0.483 s. The system
enables the identification of additional grasps beyond those detected by EMG, ensuring the appropriate
grasp for different objects. The Mean Angular Error and Estimation Stability in wrist orientation were
recorded as 16.26 ± 8.62° and 0.2, respectively. While this error may be deemed acceptable, it will be
validated on a real prosthetic system.</p>
      <p>The strategy successfully performed the tests in the final phase, proving that it effectively manages
complex situations. Moreover, the system is designed for portability and interoperability, making it
easily applicable to various prosthetic hands and robotic grippers capable of replicating a variety of
gestures.</p>
      <p>Future efforts should be devoted to integrating the proposed CVS into a prosthetic device and to
testing the user-friendliness, accuracy, and effectiveness of the overall system in reducing the cognitive
workload on a population of users.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work is funded partly by the European Union - Next Generation EU - NRRP M4.C2 - Investment
1.5 Establishing and strengthening of Innovation Ecosystems for sustainability (Project n. ECS00000024
Rome Technopole) and partly by the Italian Ministry of Research, under the complementary actions to
the NRRP “Fit4MedRob - Fit for Medical Robotics” Grant PNC0000007, (CUP: B53C22006990001).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Stefanelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cordella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gentile</surname>
          </string-name>
          , L. Zollo,
          <article-title>Hand prosthesis sensorimotor control inspired by the human somatosensory system</article-title>
          ,
          <source>Robotics</source>
          <volume>12</volume>
          (
          <year>2023</year>
          )
          <fpage>136</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Stefanelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lapresa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cordella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>D'Accolti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cipriani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zollo</surname>
          </string-name>
          ,
          <article-title>A hand-wrist control strategy based on human upper limb kinematics</article-title>
          ,
          <source>in: 2024 10th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>1029</fpage>
          -
          <lpage>1034</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Yadav</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Veer</surname>
          </string-name>
          ,
          <article-title>Recent trends and challenges of surface electromyography in prosthetic applications</article-title>
          ,
          <source>Biomedical Engineering Letters</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pancholi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Wachs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Duerstock</surname>
          </string-name>
          ,
          <article-title>Use of artificial intelligence techniques to assist individuals with physical disabilities</article-title>
          ,
          <source>Annual Review of Biomedical Engineering</source>
          <volume>26</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gionfrida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Scaramuzza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Farina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. D.</given-names>
            <surname>Howe</surname>
          </string-name>
          ,
          <article-title>Wearable robots for the real world need vision</article-title>
          ,
          <source>Science Robotics</source>
          <volume>9</volume>
          (
          <year>2024</year>
          )
          <fpage>eadj8812</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Došen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cipriani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kostić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Controzzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Carrozza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Popović</surname>
          </string-name>
          ,
          <article-title>Cognitive vision system for control of dexterous prosthetic hands: experimental evaluation</article-title>
          ,
          <source>Journal of neuroengineering and rehabilitation 7</source>
          (
          <year>2010</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Castro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dosen</surname>
          </string-name>
          ,
          <article-title>Continuous semi-autonomous prosthesis control using a depth sensor on the hand</article-title>
          ,
          <source>Frontiers in Neurorobotics 16</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. C. F.</given-names>
            <surname>Castro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. C.</given-names>
            <surname>Pinheiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Rigolin</surname>
          </string-name>
          ,
          <article-title>A hybrid 3D printed hand prosthesis prototype based on sEMG and a fully embedded computer vision system</article-title>
          ,
          <source>Frontiers in Neurorobotics 15</source>
          (
          <year>2022</year>
          )
          <fpage>751282</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cirelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tamantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. P.</given-names>
            <surname>Cordella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cordella</surname>
          </string-name>
          ,
          <article-title>A semiautonomous control strategy based on computer vision for a hand-wrist prosthesis</article-title>
          ,
          <source>Robotics</source>
          <volume>12</volume>
          (
          <year>2023</year>
          )
          <fpage>152</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Boshlyakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Ermakov</surname>
          </string-name>
          ,
          <article-title>Development of a vision system for an intelligent robotic hand prosthesis using neural network technology</article-title>
          ,
          <source>in: ITM Web of Conf.</source>
          , volume
          <volume>35</volume>
          ,
          EDP Sciences
          ,
          <year>2020</year>
          , p.
          <fpage>04006</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Redmon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Divvala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Girshick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Farhadi</surname>
          </string-name>
          ,
          <article-title>You only look once: Unified, real-time object detection</article-title>
          ,
          <source>in: Proceedings of the IEEE Conf. on CVPR</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Phadtare</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Choudhari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pedram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vartak</surname>
          </string-name>
          ,
          <article-title>Comparison between yolo and ssd mobile net for object detection in a surveillance drone</article-title>
          ,
          <source>Int. J. Sci. Res. Eng. Manag</source>
          <volume>5</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Belongie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hays</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Perona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ramanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dollár</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Zitnick</surname>
          </string-name>
          ,
          <article-title>Microsoft coco: Common objects in context</article-title>
          , in:
          <string-name>
            <given-names>D.</given-names>
            <surname>Fleet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pajdla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schiele</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tuytelaars</surname>
          </string-name>
          (Eds.),
          <source>Computer Vision - ECCV 2014</source>
          , Springer International Publishing,
          <year>2014</year>
          , pp.
          <fpage>740</fpage>
          -
          <lpage>755</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G.</given-names>
            <surname>Jocher</surname>
          </string-name>
          , ultralytics/yolov5: v3.1 - Bug Fixes and Performance Improvements, https://github.com/ultralytics/yolov5,
          <year>2020</year>
          . URL: https://doi.org/10.5281/zenodo.4154370. doi:10.5281/zenodo.4154370.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Flanagan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Terao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Johansson</surname>
          </string-name>
          ,
          <article-title>Gaze behavior when reaching to remembered targets</article-title>
          ,
          <source>J. of neurophysiol. 100</source>
          (
          <year>2008</year>
          )
          <fpage>1533</fpage>
          -
          <lpage>1543</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>N.</given-names>
            <surname>Otsu</surname>
          </string-name>
          ,
          <article-title>A threshold selection method from gray-level histograms</article-title>
          ,
          <source>IEEE Transactions on Syst., Man, and Cybern</source>
          .
          <volume>9</volume>
          (
          <year>1979</year>
          )
          <fpage>62</fpage>
          -
          <lpage>66</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>C.</given-names>
            <surname>Tamantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lapresa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cordella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scotto di Luzio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lauretti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zollo</surname>
          </string-name>
          ,
          <article-title>A robot-aided rehabilitation platform for occupational therapy with real objects</article-title>
          ,
          <source>in: Converging Clin. and Eng. Res. on Neurorehabilit. IV: 5th ICNR2020</source>
          ,
          <year>2020</year>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>851</fpage>
          -
          <lpage>855</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dosen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cipriani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kostić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Controzzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Carrozza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Popović</surname>
          </string-name>
          ,
          <article-title>Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation</article-title>
          ,
          <source>J. of neuroeng. and rehabilit. 7</source>
          (
          <year>2010</year>
          )
          <fpage>42</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Došen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Popović</surname>
          </string-name>
          ,
          <article-title>Transradial prosthesis: artificial vision for control of prehension</article-title>
          ,
          <source>Artif. organs 35</source>
          (
          <year>2011</year>
          )
          <fpage>37</fpage>
          -
          <lpage>48</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>