<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On the Transferability of Multi-Agent DRL architecture for a Physics-based Model of a Lower-Limb Amputee Across Varied Locomotion Environments</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lorenza Cotugno</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matthia Sabatelli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberta Siciliano</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Raffaella Carloni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen</institution>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Electrical Engineering and Information Technology, University of Naples Federico II</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Department of Physics, University of Naples Federico II</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This work investigates whether a multi-agent architecture trained on level-ground walking can facilitate learning and improve performance in more challenging locomotion environments. Specifically, the musculoskeletal model is trained using a multi-agent variant of Proximal Policy Optimization, combined with imitation learning, where the healthy part and the prosthesis are modeled as separate agents that learn under a shared reward. The level-ground walking policy serves as the starting point for transfer. We fine-tune it across three more challenging environments (uneven terrain, ramps, and stairs), each equipped with task-specific reward functions and imitation data tailored to their respective locomotion objectives. We compare the fine-tuned policy against a baseline that is trained from scratch. Preliminary results suggest that transfer learning improves initial performance and yields higher rewards throughout the evaluation. When quantified using the area ratio metric ℛ, which compares the area under the learning curve of the pre-trained model to that of the baseline, transfer learning demonstrates a benefit of at least ℛ = 0.50 across all tested environments. Ongoing work explores bidirectional transfer by rotating the source environment to study how task complexity and inter-environment similarity affect performance. Future work will focus on reducing reliance on imitation data and designing more generalizable reward functions to support autonomous, robust adaptation across varied real-world conditions.</p>
      </abstract>
      <kwd-group>
        <kwd>Reinforcement Learning</kwd>
        <kwd>Transfer Learning</kwd>
        <kwd>Multi-Agent Systems</kwd>
        <kwd>Biomechanical Simulation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial Intelligence (AI) has become increasingly relevant in the field of lower-limb
rehabilitation, offering new opportunities for enhancing motor recovery and personalized assistance. A
comprehensive literature review [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] highlights the growing integration of AI techniques, ranging
from neural networks to reinforcement learning, in exoskeleton-assisted gait rehabilitation. At
the same time, there is a rising emphasis on the need for intelligent systems to adapt to diverse
terrain conditions. In this regard, [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] demonstrates that deep learning models can effectively
classify ground types and estimate parameters such as ramp inclination or stair height, using
data collected from wearable sensors. Among the various AI approaches, reinforcement learning
(RL) stands out as particularly promising for prosthetic control, enabling systems to learn motor
behaviors through trial-and-error interaction with the environment.
      </p>
      <p>
        Recent studies in assistive mobility have explored RL in various directions, including automatic
tuning of prosthetic control parameters and optimization of state-based control strategies [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ].
RL has also been used in simulated bipedal locomotion tasks, providing a foundation for the
development of algorithms aimed at future prosthetic control and personalization.
Within musculoskeletal simulations for gait modeling, most prior approaches rely on a
single-agent formulation, where a unified policy governs both the healthy part and the prosthetic limbs.
Although this setup can be effective in simulation, it is less suitable for real-world deployment.
In [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], a notable multi-agent alternative was introduced, modeling the human and the prosthesis
as separate agents trained concurrently but independently, thereby fostering collaboration to
achieve adaptive and natural gait patterns. However, this approach explicitly imposed symmetry
between the prosthetic and contralateral limbs.
      </p>
      <p>
        Building on this perspective, previous work [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] proposed a multi-agent reinforcement learning
(MARL) architecture in which the human and the prosthesis are treated as independent agents.
Each agent is trained using Independent Proximal Policy Optimization (IPPO) combined with
imitation learning, and their coordination is guided by a shared reward. The agents operate in
distinct observation spaces and, unlike previous approaches, no symmetry is enforced. Instead,
coordination and gait patterns are learned through interaction and imitation.
By reusing the neural network weights of a policy previously trained to achieve stable
locomotion on level ground, this work investigates whether transfer learning can facilitate training
and enhance performance in more complex locomotion environments, such as stairs, ramps,
and uneven terrain. Each target environment features its own task-specific reward function
and imitation dataset. We compare the pretrained model against a baseline trained from scratch
in each setting, focusing on initial rewards and convergence speed.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Physics-based Model</title>
      <p>This section presents the physics-based musculoskeletal model of the transfemoral amputee,
which is used as the agent of the MARL architecture, and the contact model, which has been created to
perform walking on complex terrains.</p>
      <sec id="sec-2-1">
        <title>Musculoskeletal Model</title>
        <p>
          We adopt the gait1415+2 musculoskeletal model [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], developed in OpenSim 4.2, to simulate
a transfemoral amputee. The model features 14 degrees of freedom (DOFs). The intact limb
is driven by 15 Hill-type musculotendon units, while the prosthetic side is actuated by ideal
torque generators in the knee and ankle joints [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], using first-order activation dynamics. This
setup captures key biomechanical features of the human and prosthesis, and allows realistic
simulation of neuromuscular control strategies.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Contact Model</title>
        <p>
          To enable realistic interaction with structured and irregular terrains, we implemented a custom
contact model inspired by [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Although the musculoskeletal model includes anatomical foot
segments, it lacks the explicit contact geometry necessary to simulate physical interactions
with the environment. Therefore, spherical meshes were added to the feet to define discrete
contact regions, enabling the computation of ground reaction forces. Furthermore, OpenSim’s
default Hunt–Crossley model was replaced with an elastic foundation formulation to ensure
stable and biomechanically plausible contact dynamics [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. The geometry of the foot meshes
is shown in Figure 1. In addition, Figure 3 shows the full model interacting with stair, ramp,
and uneven terrain surfaces.
        </p>
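To make the contact computation concrete, the elastic foundation idea can be sketched as independent per-sphere normal forces: each sphere contributes a force that grows with penetration depth and is damped by the penetration rate. This is a pure-Python illustration; the force law details and constants are assumptions, not the paper's OpenSim configuration.

```python
import numpy as np

def elastic_foundation_normal_force(penetrations, velocities,
                                    stiffness=1e6, dissipation=2.0):
    """Per-sphere normal force under an elastic foundation model.

    Each contact sphere contributes independently: force grows linearly
    with penetration depth and is damped by the penetration rate.
    (Illustrative constants, not the values used in the paper.)
    """
    penetrations = np.asarray(penetrations, dtype=float)
    velocities = np.asarray(velocities, dtype=float)
    in_contact = penetrations > 0.0
    # f = k * x * (1 + c * x_dot), clamped so the ground never pulls the foot down
    f = stiffness * penetrations * (1.0 + dissipation * velocities)
    return np.where(in_contact, np.maximum(f, 0.0), 0.0)

# Three spheres under one foot: heel in contact, midfoot touching, toe in the air
forces = elastic_foundation_normal_force([0.002, 0.0005, -0.001],
                                         [0.01, -0.02, 0.0])
print(forces.sum())  # total vertical ground reaction force from this foot
```

Summing the per-sphere contributions yields the ground reaction force used by the agents' observations.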
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. The Proposed Methodology</title>
      <p>
        This section outlines the proposed methodology, which employs a collaborative multi-agent
reinforcement learning architecture. The gait1415+2 musculoskeletal model [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] is split into two
agents (i.e., the prosthesis and the contralateral healthy human part) which interact within a
shared physics-based simulation. Rather than retraining level-ground walking, we initialize
each target task (i.e., uneven terrain, ramp, and stairs ascent) with the pretrained policy from [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
and compare its performance against policies trained from scratch.
      </p>
      <sec id="sec-3-1">
        <title>Multi-Agent Reinforcement Learning Architecture</title>
        <p>
          This work builds on the MARL architecture introduced in [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], which applies Independent
Proximal Policy Optimization (IPPO + imitation data). The healthy human agent receives
information about the full state of the body, including joint positions and velocities of the
pelvis and lower limbs, ground reaction forces (GRFs), and muscle-related variables such as
activation, fiber length, and force. In contrast, the prosthesis agent operates under more realistic
sensing constraints and observes only signals available from on-board sensors: joint angles and
velocities in the prosthetic knee and ankle, GRFs under the prosthetic foot, and internal actuator
states (torque, power, control signal, velocity, activation). This asymmetry in observations
reflects the limited sensory capabilities typically available in robotic prosthetic systems.
The agents also differ in their action spaces:
• The human agent outputs a 15-dimensional vector corresponding to the activation levels
of the musculotendon units on the intact limb.
• The prosthesis agent controls two actuators (in the prosthetic knee and ankle) via discrete
activation commands.
        </p>
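Structurally, the two-agent setup amounts to two independent learners with different observation and action dimensionalities that share a single scalar reward. The sketch below uses linear policies and made-up observation sizes as stand-ins (the paper trains neural networks with IPPO); only the asymmetry and the independent updates are the point.

```python
import numpy as np

rng = np.random.default_rng(0)

class IndependentAgent:
    """One learner with its own observation/action spaces.
    (Linear policy stand-in for the paper's IPPO-trained networks.)"""
    def __init__(self, obs_dim, act_dim, lr=1e-2):
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim))
        self.lr = lr

    def act(self, obs):
        return np.tanh(self.W @ obs)  # bounded activation commands

    def update(self, obs, advantage):
        # Each agent improves its own policy from the SHARED reward signal;
        # no gradients flow between agents (independent learning).
        self.W += self.lr * advantage * np.outer(self.act(obs), obs)

# Asymmetric observations: full-body state vs. on-board prosthesis sensors
# (dimensions here are illustrative assumptions)
human = IndependentAgent(obs_dim=97, act_dim=15)      # 15 muscle activations
prosthesis = IndependentAgent(obs_dim=14, act_dim=2)  # knee + ankle actuators

obs_h, obs_p = rng.normal(size=97), rng.normal(size=14)
shared_reward = 1.0  # both agents receive the same scalar reward
human.update(obs_h, shared_reward)
prosthesis.update(obs_p, shared_reward)
```

Coordination emerges because both updates are driven by the same reward, even though neither agent sees the other's observations.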
        <p>The total reward at each timestep integrates both the imitation and task-level objectives:
r_t = 0.9 · r_imit,t + 0.1 · r_goal,t
(1)
The imitation component encourages the reproduction of a reference trajectory and consists of
two terms: one penalizing deviations in joint positions, the other in joint velocities. Together,
these terms promote coordinated and biomechanically plausible movements.
The task-related component, in parallel, reinforces the successful execution of the locomotion
objective. Its formulation is environment-specific: for elevation-based tasks, such as stairs and
ramps, the reward emphasizes pelvis velocity to reflect dynamic performance; for level and
irregular ground, it prioritizes pelvis position accuracy.</p>
        <p>By combining these components, the reward function guides the agent toward realistic and
functionally effective motion strategies.</p>
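The weighting in Eq. (1) can be made concrete with a short sketch. The 0.9/0.1 split follows the paper; the exponential shaping of the position and velocity terms and their inner weights are assumptions for illustration.

```python
import numpy as np

def imitation_reward(q, qd, q_ref, qd_ref, w_pos=0.7, w_vel=0.3):
    """Reward reproduction of a reference trajectory: one term penalizes
    joint-position deviation, the other joint-velocity deviation.
    (Exponential shaping and inner weights are illustrative assumptions.)"""
    pos_term = np.exp(-np.sum((np.asarray(q) - np.asarray(q_ref)) ** 2))
    vel_term = np.exp(-np.sum((np.asarray(qd) - np.asarray(qd_ref)) ** 2))
    return w_pos * pos_term + w_vel * vel_term

def total_reward(r_imit, r_goal):
    # Eq. (1): imitation dominates, the task-specific term refines it
    return 0.9 * r_imit + 0.1 * r_goal

# Perfect tracking of the reference gives r_imit = 1.0
r = total_reward(imitation_reward([0.1], [0.0], [0.1], [0.0]), r_goal=0.5)
```

The task term `r_goal` would be swapped per environment: pelvis velocity for stairs and ramps, pelvis position accuracy for level and irregular ground.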
      </sec>
      <sec id="sec-3-2">
        <title>PPO and Imitation Dataset</title>
        <p>
          We used motion capture data to guide imitation learning across tasks. Flat-ground walking was
modeled using a public pediatric gait dataset with kinematic and kinetic data from typically
developing children aged 10–12 [11]. Stairs and ramp ascent trajectories were obtained from
the CMU Motion Capture Database [12], using the same trials as in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Specifically, we used
lower-body motion data from subject 14 (trial 22) for stairs ascent and subject 74 (trial 19)
for ramp ascent. These trials were selected for their completeness, marker quality, and task
consistency. All signals were temporally normalized and scaled to match the OpenSim model.
We compare two training strategies: (i) a baseline approach, in which agents are trained from
scratch with randomly initialized neural network weights on each target environment ℰ_T; and (ii) a transfer learning
approach, in which the policy is initialized using weights trained on a simpler source environment
ℰ_S, corresponding to level-ground walking. In both conditions, training is conducted using
task-specific reward functions and imitation data tailored to each environment. The results
suggest that transferring a well-trained policy from ℰ_S improves initial performance and yields
higher rewards throughout the evaluation.
        </p>
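The two training strategies differ only in one branch at setup time. A minimal sketch, using simple weight matrices as a stand-in for the actual network parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_policy(obs_dim, act_dim, source_weights=None):
    """Return initial policy weights for a target environment E_T.

    Baseline: random initialization (train from scratch).
    Transfer: reuse the weights learned on the source environment E_S
    (level-ground walking), then fine-tune with the target's own
    reward function and imitation data.
    """
    if source_weights is not None:
        return source_weights.copy()  # transfer learning
    return rng.normal(scale=0.1, size=(act_dim, obs_dim))  # from scratch

# Stand-in for a policy pretrained on E_S (dimensions are illustrative)
W_source = rng.normal(scale=0.1, size=(2, 14))
W_scratch = init_policy(14, 2)                            # baseline on E_T
W_transfer = init_policy(14, 2, source_weights=W_source)  # transfer on E_T
```

Everything downstream (reward function, imitation data, optimizer) is identical between the two conditions, so any performance gap is attributable to the initialization.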
      </sec>
      <sec id="sec-3-3">
        <title>Evaluation Metric</title>
        <p>To quantitatively assess the benefit of transfer learning, we compute the area under the learning
curve (AUC) for each training run and define the area ratio metric ℛ, following the approach
in [13], as follows:
ℛ = (AUC_{ℰ_S→ℰ_T} − AUC_{ℰ_T}) / AUC_{ℰ_T}
(2)
Here, the area under the curve is approximated via trapezoidal numerical integration:
AUC = ∫_{e_1}^{e_N} r(e) de ≈ Σ_k (e_{k+1} − e_k) · (r(e_k) + r(e_{k+1})) / 2
(3)
with e_1 and e_N corresponding to the first and last training epochs, respectively. A value of ℛ &gt; 0
indicates a performance gain due to transfer learning.</p>
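Equations (2) and (3) are straightforward to compute from logged reward curves. A minimal sketch with toy data, assuming unit spacing between epochs:

```python
import numpy as np

def auc(rewards):
    """Trapezoidal approximation of the area under a learning curve,
    with unit spacing between epochs (Eq. 3)."""
    r = np.asarray(rewards, dtype=float)
    return float(np.sum((r[:-1] + r[1:]) / 2.0))

def area_ratio(rewards_transfer, rewards_scratch):
    """Eq. (2): relative AUC gain of the pre-trained policy over the
    from-scratch baseline; R > 0 means transfer helped."""
    auc_t, auc_b = auc(rewards_transfer), auc(rewards_scratch)
    return (auc_t - auc_b) / auc_b

# Toy learning curves over five epochs (illustrative, not the paper's data)
R = area_ratio([25, 26, 27, 28, 29], [11, 13, 15, 17, 19])
```

Because ℛ normalizes by the baseline's AUC, it captures both the head start at initialization and any sustained advantage over training.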
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>This section presents preliminary results evaluating the effectiveness of the transfer learning
strategy in three challenging target environments ℰ_T: uneven terrain, ramp, and stairs.
This effect is illustrated in Figures 4a, 4b, and 4c, which show the episode reward curves for
both training strategies across all three ℰ_T settings. In an uneven terrain environment, the
initialized transfer scenario begins with approximately 25 reward points, more than twice the
initial score of the baseline of about 11, and maintains this advantage throughout training. In
the ramp environment, the transfer case starts near 22, compared to just over 6 for the baseline.
After 50 epochs, it achieves nearly 30 reward points, while the baseline reaches just under 16.
In the stairs environment, the baseline starts at approximately 4.5 and the transfer at 11, again
showing an initial benefit. The computed ratio ℛ confirms the advantage of transfer learning
across environments: 0.516 for uneven terrain, 1.421 for ramps, and 0.560 for stairs. All agents
were trained for an equal duration of six hours.</p>
      <p>(a) Uneven (ℛ = 0.516)
(b) Ramp (ℛ = 1.421)
(c) Stairs ( ℛ = 0.560)</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
      <p>This study demonstrates the potential of transfer learning to improve efficiency and early
performance in training prosthetic control policies across diverse locomotion environments.
By leveraging a policy trained on level-ground walking as a source, we observed consistent
gains across ramp, uneven terrain, and stairs tasks. These preliminary results are promising,
suggesting that transfer from a simpler source environment can significantly accelerate learning
and yield more stable training dynamics. Ongoing work explores bidirectional transfer by
rotating the source and target environments (e.g., from stairs to ramp and vice versa), allowing us
to assess how task complexity and inter-environment similarity influence transfer effectiveness.
In particular, we aim to investigate whether transfer is symmetric or whether certain
source-target pairs yield more generalizable behavior. A more extensive evaluation, including longer
training runs and additional terrain combinations, will be presented at the conference.
In the current setup, imitation learning plays a key role. Future research will focus on strategies
to gradually reduce the influence of imitation during fine-tuning, fostering more autonomous and
robust behavior. Promoting such autonomy is essential to generalize across broader locomotion
tasks, including variations in ramp steepness, stair height, or surface irregularity. Taken
together, these findings lay the foundation for environment-aware transfer learning frameworks
for embodied prosthetic agents.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The work of Lorenza Cotugno and Roberta Siciliano was supported by the Italian Ministry of
Research, under the complementary actions to the NRRP “Fit4MedRob - Fit for Medical Robotics”
Grant PNC0000007 (CUP: B53C22006990001). The work of Raffaella Carloni was supported
by the European Commission’s Horizon 2020 Programme as part of the project MyLeg under
grant no. 780871.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Grammarly in order to check grammar
and spelling, paraphrase and reword. After using this tool/service, the authors reviewed and
edited the content as needed and take full responsibility for the publication’s content.</p>
      <p>[11] M. H. Schwartz, A. Rozumalski, J. P. Trost, The effect of walking speed on the gait of
typically developing children, Journal of Biomechanics 41 (2008) 1639–1650.
[12] CMU Graphics Lab, CMU Graphics Lab Motion Capture Database, http://mocap.cs.cmu.edu/, 2005–2006.
[13] M. Sabatelli, Contributions to Deep Transfer Learning: From Supervised to Reinforcement
Learning, Ph.D. thesis, University of Liège, Faculté des Sciences Appliquées, 2022.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>O.</given-names>
            <surname>Coser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tamantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Soda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zollo</surname>
          </string-name>
          ,
          <article-title>Ai-based methodologies for exoskeleton-assisted rehabilitation of the lower limb: a review</article-title>
          ,
          <source>Frontiers in Robotics and AI</source>
          <volume>11</volume>
          (
          <year>2024</year>
          )
          <fpage>1341580</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O.</given-names>
            <surname>Coser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tamantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tortora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Furia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sicilia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zollo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Soda</surname>
          </string-name>
          ,
          <article-title>Deep learning for human locomotion analysis in lower-limb exoskeletons: A comparative study</article-title>
          ,
          <source>arXiv preprint arXiv:2503.16904</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Si</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Wearer-prosthesis interaction for symmetrical gait: A study enabled by reinforcement learning prosthesis control</article-title>
          ,
          <source>IEEE Transactions on Neural Systems and Rehabilitation Engineering</source>
          <volume>28</volume>
          (
          <year>2020</year>
          )
          <fpage>904</fpage>
          -
          <lpage>913</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>X.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Si</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Knowledge-guided reinforcement learning control for robotic lower limb prosthesis</article-title>
          ,
          <source>in: IEEE International Conference on Robotics and Automation (ICRA)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>754</fpage>
          -
          <lpage>760</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Si</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Reinforcement learning enabled automatic impedance control of a robotic knee prosthesis to mimic the intact knee motion in a co-adapting environment</article-title>
          ,
          <source>arXiv preprint arXiv:2101.03487</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wallace</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Si</surname>
          </string-name>
          ,
          <article-title>Human-robotic prosthesis as collaborating agents for symmetrical walking</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems (NeurIPS) 35</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>27306</fpage>
          -
          <lpage>27320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Petrillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Siciliano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sabatelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Carloni</surname>
          </string-name>
          ,
          <article-title>Multi-agent reinforcement learning control for an osseointegrated transfemoral amputee physics-based model</article-title>
          ,
          <year>2025</year>
          . In review.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Carloni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Luinge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Raveendranathan</surname>
          </string-name>
          ,
          <article-title>The gait1415+2 opensim musculoskeletal model of transfemoral amputees with a generic bone-anchored prosthesis</article-title>
          ,
          <source>Medical Engineering &amp; Physics</source>
          <volume>123</volume>
          (
          <year>2024</year>
          )
          <fpage>104091</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Raveendranathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Kooiman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Carloni</surname>
          </string-name>
          ,
          <article-title>Musculoskeletal model of osseointegrated transfemoral amputees in opensim</article-title>
          ,
          <source>PLOS ONE</source>
          <volume>18</volume>
          (
          <year>2023</year>
          )
          <fpage>e0288864</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A. J. C.</given-names>
            <surname>Adriaenssens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Raveendranathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Carloni</surname>
          </string-name>
          ,
          <article-title>Learning to ascend stairs and ramps: Deep reinforcement learning for a physics-based human musculoskeletal model</article-title>
          ,
          <source>Sensors</source>
          <volume>22</volume>
          (
          <year>2022</year>
          )
          <fpage>8479</fpage>
          . doi: 10.3390/s22218479.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>