<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Dynamic Self-Avatar Motion Retargeting in Virtual Reality</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alessandro Clocchiatti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, University of Torino</institution>
          ,
          <addr-line>Corso Svizzera 185, 10149 Torino</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>55</fpage>
      <lpage>60</lpage>
      <abstract>
        <p>In Virtual Reality (VR), users can feel embodied through a virtual body that mimics their physical one in terms of location, behavior, and movement. Several studies have explored users' perception of self-avatar movement retargeting. However, the boundaries of dynamic changes of user movements and the process of adaptation to new self-avatar behaviors remain unclear and require additional investigation. Exploring these motion retargeting techniques is crucial both to enhance the availability of this technology and to improve its applications in rehabilitation therapies.</p>
      </abstract>
      <kwd-group>
        <kwd>Virtual Reality</kwd>
        <kwd>Virtual Embodiment</kwd>
        <kwd>Visuomotor Illusion</kwd>
        <kwd>Motion Retargeting</kwd>
        <kwd>Post-stroke Rehabilitation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        One of the primary advantages of Virtual Reality (VR) is its ability to enable users to explore
virtual environments and interact with digital objects from a first-person perspective, providing
an immersive experience that evokes both psychological and physical sensations of being present
in the virtual environment. Many systems provide users with a virtual body, called self-avatar,
that is placed, behaves, and moves like the physical body [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], inducing the so-called Sense
of Embodiment (SoE). Some users may have restricted physical space or experience motor
disabilities that reduce their movement capabilities [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In response, various techniques have
been developed to alter users’ movement, aiming to enhance their overall experience within
the virtual environment [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Nevertheless, it is crucial to investigate how users can adapt to
these new self-avatar movements without compromising either the SoE or the accuracy of their
interactions.
      </p>
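<p>To make the idea of movement alteration concrete, the non-linear reach mapping of the go-go technique [4] can be sketched as follows; this is a minimal, hypothetical Python sketch, and the threshold and gain values are illustrative rather than taken from the original paper.</p>

```python
def gogo_virtual_reach(real_dist: float, threshold: float = 0.4, k: float = 6.0) -> float:
    """Go-go-style non-linear arm mapping (illustrative parameters).

    Within `threshold` metres the virtual hand follows the real hand 1:1;
    beyond it, the virtual reach grows quadratically with the extra
    extension, enlarging the interaction area.
    """
    if real_dist < threshold:
        return real_dist  # close to the body: identity mapping
    return real_dist + k * (real_dist - threshold) ** 2

# Close to the body the mapping is the identity; an arm extended to
# 0.6 m is remapped to 0.6 + 6 * (0.2)**2 = 0.84 m of virtual reach.
assert gogo_virtual_reach(0.3) == 0.3
assert abs(gogo_virtual_reach(0.6) - 0.84) < 1e-9
```

The key design point is that the mapping stays faithful for everyday near-body movements and only amplifies reach where users are unlikely to notice the discrepancy.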
      <p>In this context, the investigation of motion retargeting techniques becomes crucial to ensure
the widespread accessibility of VR technology. By refining and implementing motion retargeting
approaches, VR experiences can be tailored to the individual needs and capabilities of users,
making virtual reality more inclusive and immersive for all.</p>
      <p>
        Having an embodied self-avatar makes VR a possible alternative to traditional rehabilitation
therapies (See Palacios-Navarro et al. for a review [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]).
      </p>
      <p>
        Typical virtual reality rehabilitation therapies on upper and lower limbs rely on approaches
such as Constraint-induced movement therapy (CIMT) in which the induced motion is provided
by an external robotic tool [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and mirror-therapy exercises, in which affected limb movements
are simulated by mirroring those of the healthy limb [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Post-stroke patients can be also treated using a setup made of a screen showing a pre-recorded
movie of a moving hand, placed over the real patient’s arm (Kinesthetic Illusion Induced by
Visual Stimulation, or KiNVIS) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. With this approach, the seen body movements are different
from the physical ones, creating an illusory perception of kinesthesia and improving individual
motor function.
      </p>
      <p>The possibility of retargeting self-avatar movements makes this technology a valid
option for rehabilitation therapy based on the induced motion illusion. However, to the
authors' knowledge, this rehabilitation approach is still novel in Virtual Reality. Its
effectiveness may depend on the ability of the technology to accurately reproduce and alter
users' movements through a virtual body, hiding the real physical body movements. In this
scenario, it is important to understand how much difference between the virtual and physical
body maximizes the induced movements without affecting the SoE.</p>
      <p>Additionally, introducing a gradual variation in the gap between the virtual and real body,
rather than maintaining a consistent level of alteration throughout the entire exercise, could
prove effective and potentially enhance rehabilitation outcomes.</p>
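<p>A minimal sketch of this idea, under assumed parameters: the retargeting gain applied to the user's joint motion is ramped linearly from 1.0 (faithful rendering) to a target amplification over a fixed time window, instead of being switched abruptly. Function names and values are illustrative, not part of any cited protocol.</p>

```python
def retarget_gain(t: float, target_gain: float, ramp_duration: float) -> float:
    """Amplification applied to the user's joint motion at time t (seconds).

    Rather than jumping from 1.0 (no alteration) to `target_gain` at once,
    the gain is linearly interpolated over `ramp_duration`, so the gap
    between physical and virtual movement grows gradually.
    """
    if ramp_duration <= 0.0:
        return target_gain  # degenerate case: abrupt alteration
    alpha = min(t / ramp_duration, 1.0)
    return (1.0 - alpha) * 1.0 + alpha * target_gain

def virtual_angle(real_angle: float, t: float, target_gain: float = 1.5,
                  ramp_duration: float = 10.0) -> float:
    """Retargeted flexion angle shown on the self-avatar (illustrative)."""
    return retarget_gain(t, target_gain, ramp_duration) * real_angle

# Halfway through a 10 s ramp toward a 1.5x gain, the gain is 1.25;
# after the ramp, a 40-degree real flexion is rendered as 60 degrees.
assert abs(retarget_gain(5.0, 1.5, 10.0) - 1.25) < 1e-9
assert abs(virtual_angle(40.0, 20.0) - 60.0) < 1e-9
```

Whether such a ramp actually improves rehabilitation outcomes is exactly the open question raised above.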
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>
        In recent years, several works have focused on the alteration of the movements of the self-avatar
and users' perception of the virtual body behavior. Soccini et al. [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ] defined the induced
finger movements effect as the involuntary physical hand motion induced by the sight of the
alien movement (alien motion) of the self-avatar fingers, observed only when the SoE is present.
Gonzalez-Franco et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] defined the self-avatar follower effect as the involuntary modification
of the physical movements to resemble those of the virtual body. In particular, the
study shows how the introduction of an alteration in self-avatar movements makes users follow
the self-avatar, resulting in a drift. The results suggest that participants show a stronger
tendency to follow the avatar when the alteration of movements is introduced gradually, rather
than instantaneously. Furthermore, the gradual introduction of movement modifications leads to a
higher Sense of Embodiment compared to instantaneous alterations.
      </p>
      <p>
        Similarly, Burin et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] developed an immersive virtual reality experiment in which users
were asked to repeat a specific movement (drawing continuous straight vertical lines). The results
showed that when a gap is introduced between the actual movements of the user and the
self-avatar movements, the motor performance is attracted towards the embodied virtual body
movements. The alteration of the self-avatar movements was used to support the learning of
specific hand positions and movements.
      </p>
      <p>While these works primarily focus on understanding how the alteration of self-avatar
movements affects the SoE and real user movements, other studies investigated users' adaptation
to the modified behavior of the virtual body.</p>
      <p>
        Soccini et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] investigated users’ adaptation to frequent and abrupt changes in self-avatar
movements. The results suggest that users are able to adapt to the new self-avatar behavior
without nullifying the Sense of Embodiment, even when the alterations occur frequently over
time.
      </p>
      <p>
        Lilija et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] proposed a technique in VR in which the position and pose of the user’s
hand is corrected to be closer to the target movement. The alteration introduces a mismatch
between the actual hand of the user and its VR representation. Although the results suggest
that the correction of the self-avatar movements improved the short-term retention of the
trained movements and motor learning, the optimal amount of alteration of the self-avatar
movements and the duration of the exercise have not been clarified. This approach could hold
significant potential in the field of rehabilitation, as it offers opportunities for targeted motor
training and skill acquisition. By fine-tuning self-avatar movements, it may be possible to
improve motor function and coordination in individuals undergoing rehabilitation programs.
      </p>
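<p>Such a correction can be approximated as a blend of the rendered hand position toward the target trajectory; the linear blend below is a hypothetical sketch, not the implementation used in the cited study.</p>

```python
def corrected_pose(real_pos, target_pos, correction):
    """Blend the rendered hand position toward the target movement.

    `correction` in [0, 1]: 0.0 renders the real hand faithfully, while
    1.0 snaps the avatar hand onto the target trajectory. Intermediate
    values introduce a bounded mismatch between the real and the virtual
    hand. The linear blend is an illustrative choice only.
    """
    if not 0.0 <= correction <= 1.0:
        raise ValueError("correction must lie in [0, 1]")
    return tuple((1.0 - correction) * r + correction * g
                 for r, g in zip(real_pos, target_pos))

# A 25% correction moves the rendered hand a quarter of the way
# from the real position toward the target position.
assert corrected_pose((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.25) == (0.25, 0.25, 0.25)
```

The open questions noted above amount to choosing `correction` (how much mismatch) and how long it should be applied.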
      <p>In none of the works presented does the alteration of the self-avatar movements cancel the
Sense of Embodiment. Despite that, it is not yet clear how users adapt to new self-avatar
movements in more complex virtual interactions.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Project description</title>
      <p>In a Virtual Reality system, a self-avatar is considered by the users as their own body, and it
is important to further investigate the limits of the SoE and its components, such as Sense of
Ownership and Sense of Agency. Furthermore, understanding human perception and physical
reactions to dynamically retargeted self-avatar movements, which differ from those of the
physical body, is crucial.</p>
      <p>
        So far, we have already investigated users' perception of frequent changes in movement
remapping and their ability to adapt to the altered self-avatar behavior [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. In this study,
participants were asked to flex/extend their left arm towards two targets while the self-avatar
movements dynamically changed (Figure 1). The goal was to understand how users adapt to
these frequent changes. The analysis of the results confirms that Sense of Embodiment persists
despite the introduction of movement alterations. Furthermore, the results suggest that subjects
adapt better when the self-avatar behavior changes frequently.
      </p>
      <p>Our initial findings indicate that users can quickly adapt to new self-avatar retargeted
movements. However, in order to consider this technique a viable option for future movement and
interaction alternatives, it is important to further investigate how motion alteration and users'
adaptation to these new behaviors are linked. To achieve this goal, we must
first answer the following questions:</p>
      <p>Q1: Do users adapt better to gradual self-avatar motion retargeting compared to abrupt
alteration, without affecting embodiment?</p>
      <p>Q2: How does gradual or abrupt motion retargeting influence adaptation and
interaction in the virtual environment? How does it affect movement and interaction
accuracy?</p>
      <p>Q3: Is it possible to enlarge the movement and interaction area without compromising either the
Sense of Embodiment or accuracy? Can users adapt to a new virtual space temporarily
or permanently?</p>
      <p>
        To address these questions, we will build on the experimental design proposed
by Soccini et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], and will include different retargeting patterns and interactions with
objects. Specifically, the movement tasks used in the previous study will be replaced with
interaction-based tasks, such as whac-a-mole or grab interaction.
      </p>
      <p>
        These future directions hold potential for application in VR rehabilitation scenarios, where
self-avatars serve as guides for users. The gap between virtual body movements and physical
movements determines the potential for induced movements [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Furthermore, the application
of KiNVIS in Virtual Reality is still novel, and further exploration of self-avatar motion
retargeting could contribute to the advancement of this rehabilitation approach. Firstly, in current
rehabilitation therapies based on the induced motion illusion, the degree of alteration between
virtual and physical body is stable and does not change throughout the sessions [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. However,
the variation of the self-avatar behavior during the rehabilitation process may be more effective.
Secondly, there are still no clear criteria to determine the optimal level of self-avatar movement
alteration that can maximize the exercise outcome and rehabilitation effects. Additionally, it is
important to understand how the self-avatar behavior should vary in order to increase the
induction of motion without affecting the Sense of Embodiment.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>Thanks to the rapid growth and improvement of digital technologies, virtual reality systems and
applications are used in several different areas, including medical and therapeutic fields. While
in the last few years several studies regarding Virtual Reality interaction and VR rehabilitation
therapies have been carried out, there is still much work to do in order to make these systems
widely available and more effective. The latest developments in VR offer the opportunity to provide
users with virtual bodies, and understanding self-representation and users' kinetic reactions
to altered self-avatar movements becomes essential to make a solid contribution to the
evolution of Virtual Reality technology.</p>
      <p>While my Ph.D. project primarily focuses on post-stroke rehabilitation, the study of
self-avatar motion remapping is important to facilitate interaction in VR for a broader range of people.
In particular, the dynamic retargeting of the self-avatar movements could be analyzed in various
aspects such as interaction with the virtual environment or locomotion. For example, gradually
exaggerating the user's movements could enable interaction and movement within a larger
virtual space than the physical space available, without affecting the SoE and the interaction
efficacy.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kilteni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Groten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Slater</surname>
          </string-name>
          ,
          <article-title>The sense of embodiment in virtual reality</article-title>
          ,
          <source>Presence: Teleoperators and Virtual Environments</source>
          <volume>21</volume>
          (
          <year>2012</year>
          )
          <fpage>373</fpage>
          -
          <lpage>387</lpage>
          . URL: https://doi.org/10.1162/pres_a_00124. doi:10.1162/pres_a_00124.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Slater</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Spanlang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. V.</given-names>
            <surname>Sanchez-Vives</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Blanke</surname>
          </string-name>
          ,
          <article-title>First person experience of body transfer in virtual reality</article-title>
          ,
          <source>PLoS ONE 5</source>
          (
          <year>2010</year>
          )
          <fpage>e10564</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Cohn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ofek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gonzalez-Franco</surname>
          </string-name>
          ,
          <article-title>Snapmove: Movement projection mapping in virtual reality</article-title>
          ,
          <source>in: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>74</fpage>
          -
          <lpage>81</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Poupyrev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Billinghurst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Weghorst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ichikawa</surname>
          </string-name>
          ,
          <article-title>The go-go interaction technique: non-linear mapping for direct manipulation in vr</article-title>
          ,
          <source>in: Proceedings of the 9th annual ACM symposium on User interface software and technology</source>
          ,
          <year>1996</year>
          , pp.
          <fpage>79</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Palacios-Navarro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hogan</surname>
          </string-name>
          ,
          <article-title>Head-mounted display-based therapies for adults poststroke: A systematic review and meta-analysis</article-title>
          ,
          <source>Sensors</source>
          <volume>21</volume>
          (
          <year>2021</year>
          )
          <fpage>1111</fpage>
          . URL: https://doi.org/10.3390/s21041111. doi:10.3390/s21041111.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Borstad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Crawfis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. P.</given-names>
            <surname>Lowes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>McPherson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Siles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Worthen-Chaudhari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. V.</given-names>
            <surname>Gauthier</surname>
          </string-name>
          ,
          <article-title>In-home delivery of constraint-induced movement therapy via virtual reality gaming</article-title>
          ,
          <source>Journal of patient-centered research and reviews 5</source>
          (
          <year>2018</year>
          )
          <fpage>6</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Mekbib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Debeli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          , et al.,
          <article-title>A novel fully immersive virtual reality environment for upper extremity rehabilitation in patients with stroke</article-title>
          ,
          <source>Annals of the New York Academy of Sciences</source>
          <volume>1493</volume>
          (
          <year>2021</year>
          )
          <fpage>75</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Okawada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kaneko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shindo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yoneta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sakai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Okuyama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Akaboshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Kinesthetic illusion induced by visual stimulation influences sensorimotor event-related desynchronization in stroke patients with severe upper-limb paralysis: A pilot study</article-title>
          ,
          <source>Restorative Neurology and Neuroscience</source>
          <volume>38</volume>
          (
          <year>2021</year>
          )
          <fpage>455</fpage>
          -
          <lpage>465</lpage>
          . URL: https://doi.org/10.3233/rnn-201030. doi:10.3233/rnn-201030.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Soccini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grangetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Inamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shimada</surname>
          </string-name>
          ,
          <article-title>Virtual hand illusion: The alien finger motion experiment</article-title>
          ,
          <source>in: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)</source>
          , IEEE,
          <year>2019</year>
          . URL: https://doi.org/10.1109/vr.2019.8798193. doi:10.1109/vr.2019.8798193.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Soccini</surname>
          </string-name>
          ,
          <article-title>The induced finger movements effect</article-title>
          ,
          <source>in: SIGGRAPH Asia 2020 Posters, ACM</source>
          ,
          <year>2020</year>
          . URL: https://doi.org/10.1145/3415264.3425448. doi:10.1145/3415264.3425448.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gonzalez-Franco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Cohn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ofek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Burin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maselli</surname>
          </string-name>
          ,
          <article-title>The self-avatar follower effect in virtual reality</article-title>
          ,
          <source>in: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)</source>
          , IEEE,
          <year>2020</year>
          . URL: https://doi.org/10.1109/vr46266.2020.00019. doi:10.1109/vr46266.2020.00019.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Burin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kilteni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rabuffetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Slater</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pia</surname>
          </string-name>
          ,
          <article-title>Body ownership increases the interference between observed and executed movements</article-title>
          ,
          <source>PLOS ONE 14</source>
          (
          <year>2019</year>
          )
          <fpage>e0209899</fpage>
          . URL: https://doi.org/10.1371/journal.pone.0209899. doi:10.1371/journal.pone.0209899.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Soccini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Clocchiatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Inamura</surname>
          </string-name>
          ,
          <article-title>Effects of frequent changes in extended self-avatar movements on adaptation performance</article-title>
          ,
          <source>Journal of Robotics and Mechatronics</source>
          <volume>34</volume>
          (
          <year>2022</year>
          )
          <fpage>756</fpage>
          -
          <lpage>766</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>K.</given-names>
            <surname>Lilija</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kyllingsbaek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hornbaek</surname>
          </string-name>
          ,
          <article-title>Correction of avatar hand movements supports learning of a motor skill</article-title>
          ,
          <source>in: 2021 IEEE Virtual Reality and 3D User Interfaces (VR)</source>
          , IEEE,
          <year>2021</year>
          . URL: https://doi.org/10.1109/vr50410.2021.00069. doi:10.1109/vr50410.2021.00069.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Soccini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cena</surname>
          </string-name>
          ,
          <article-title>The ethics of rehabilitation in virtual reality: the role of self-avatars and deep learning</article-title>
          ,
          <source>in: 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>324</fpage>
          -
          <lpage>328</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>