<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>XAI for Supporting Gait Analysis of Patients with Schizophrenia</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, University of Salerno</institution>
          ,
          <addr-line>Fisciano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Neuroscience, Reproductive Science and Odontostomatology, University Federico II</institution>
          ,
          <addr-line>Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The early identification and continuous observation of individuals diagnosed with schizophrenia can significantly enhance their quality of life. This document serves as an initial overview of a specific task within an ongoing project titled SPECTRA - Supporting Schizophrenia PatiEnts' Care wiTh aRtificiAl intelligence. The project's objective is to aggregate patient data from diverse sources, encompassing speech, emotional responses, and locomotion patterns, with a particular focus on gait analysis in this study. We utilize advanced deep learning algorithms to identify distinctive movement patterns and apply explainable AI techniques to elucidate the decision-making processes of the models. Our proposed methodology unfolds in two main stages: the gathering and pre-processing of data, followed by the classification of gait patterns with corresponding explanations. To classify gait, we employ spatiotemporal transformer models, and to elucidate these classifications, we generate visual explanations using SHAP (SHapley Additive exPlanations) images.</p>
      </abstract>
      <kwd-group>
        <kwd>Schizophrenia</kwd>
        <kwd>Gait Analysis</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>eXplainable Artificial Intelligence</kwd>
        <kwd>Spatio-temporal transformers</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Schizophrenia (SZ) is a complex mental disorder that affects approximately 1 in 300
individuals (0.32%) globally, a statistic reported by the World Health Organization [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] in 2024. This
prevalence rate escalates to 1 in 222 (0.45%) within the adult demographic.
      </p>
      <p>
        Patients with SZ often experience a substantial decline in their Quality of Life (QoL), as the
disorder is associated with significant social and occupational challenges [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Early diagnosis
can lead to highly effective outcomes through pharmacological interventions, enhancing the
QoL for many patients. Diagnosing SZ requires clinicians to engage in extensive interviews
and observational sessions with patients, which is notably time-consuming. The repetitive
nature of these assessments can induce a learning effect on patient responses. Furthermore,
continuous patient monitoring is essential to mitigate the risk of sudden relapses, although
regular medical evaluations are impractical due to their associated time and costs. The scarcity
of mental healthcare providers is a pervasive issue worldwide, more so in developing nations,
underscoring the urgent need for innovative tools to assist clinicians in the screening and
monitoring of SZ patients. It has been documented that up to 80% of individuals with SZ exhibit
Genuine Motor Abnormalities (GMA), which are also prevalent among ultra-high risk (UHR)
groups and unaffected first-degree relatives possessing a genetic predisposition for SZ [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Consequently, gait analysis emerges as a valuable source of data for aiding the diagnosis and
continuous monitoring of SZ. Although numerous gait analysis methodologies incorporating
body sensors have been proposed, they can be obtrusive. Recent studies [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] have successfully
implemented non-intrusive techniques using one or multiple digital cameras.
      </p>
      <p>This paper introduces ongoing research within the SPECTRA project, which focuses on
aiding the diagnosis of Treatment-Resistant Schizophrenia (TRS) patients, who do not respond
to anti-psychotic medications. The project aims to integrate various data sources to provide a
comprehensive patient profile. Specifically, this paper discusses our proposed methodology for
diagnosing and monitoring SZ through gait analysis. We further elucidate the deep
learning-based classification process with visual explanations to foster clinician trust in our techniques.</p>
      <p>The structure of the paper is as follows: Section 2 provides an overview of the relevant
literature and background information. Section 3 delineates the objectives of the SPECTRA
Project. Section 4 details the proposed approach, and Section 5 concludes the paper with a
discussion on the implications, final observations, and prospective future research directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>In this section, we delve into the fundamental aspects of gait analysis and examine its relevance
in the detection and continuous monitoring of Schizophrenia (SZ).</p>
      <p>
        Gait analysis emerges as a critical instrument for decoding human locomotion patterns and
their implications on a spectrum of health disorders. The interplay between gait characteristics
and mental health conditions, notably including depression [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], Alzheimer’s disease [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
and SZ [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], has garnered increasing attention in recent research. This method offers a
non-intrusive and cost-effective avenue for mental health assessment, enabling the identification of
nuanced deviations in walking patterns potentially indicative of mental health symptoms. It
may be also used for detecting human cooperation behavior through video surveillance, which
further illustrates the diverse applications of computer vision in healthcare and psychological
studies [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Traditionally, gait analysis systems rely on data gathered by specialized equipment,
such as body-worn sensors [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] or 3D sensing technologies [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. These approaches, however,
often require controlled environments, which can be costly and intrusive. An alternative and
more accessible methodology was introduced by Miao et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], utilizing a digital camera
to capture footage of participants’ gait. To facilitate the computational analysis of human
locomotion in these recordings, OpenPose [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], an open-source toolkit developed by Carnegie
Mellon University, was employed to detect and monitor bodily movements. In a significant study
by Martin et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], gait patterns were analyzed through comprehensive movement assessments
of 20 patients with SZ and 20 control participants. Utilizing motion capture technology, the
research team evaluated mental health conditions using established scales such as the Positive
And Negative Syndrome Scale (PANSS), Brief Psychiatric Rating Scale (BPRS), and Neurological
Soft Signs (NSS), successfully identifying 16 quantifiable movement markers associated with SZ.
      </p>
      <p>
        To augment understanding and interpretability of model decisions, explainable AI (XAI)
techniques have been applied. These methods, particularly post-hoc approaches like Local Interpretable
Model-agnostic Explanations (LIME)[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and SHapley Additive exPlanations (SHAP)[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ],
enhance model transparency by elucidating the impact of specific input features on the predictions.
A notable application of these methodologies is illustrated in the work by Mishra et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ],
where a multi-layer perceptron was adopted to classify people wearing two types of Knee Ankle
Foot Orthosis, and LIME highlighted the features most relevant to the model.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. The SPECTRA project</title>
      <p>
        Within the spectrum of schizophrenia (SZ), there are patients whose significant symptoms
persist despite receiving adequate antipsychotic treatment. These patients are classified as
having Treatment Resistant Schizophrenia (TRS) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. TRS is a severe condition that affects
nearly 30% of individuals diagnosed with schizophrenia. Unfortunately, TRS is often diagnosed
late in the course of the disorder, which hinders the timely switch to more effective treatments
(such as clozapine) and non-pharmacological therapeutic approaches. This delay results in
considerable individual suffering and substantial economic costs for communities. Early and
accurate diagnosis of TRS is crucial. It allows clinicians to recommend more suitable
pharmacological and non-pharmacological therapies, potentially improving the Quality of Life (QoL) for
TRS patients and conserving valuable economic resources. Researchers have explored the use
of sensors for mental disease detection and patient monitoring [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>
        The SPECTRA Project aims to create a Decision Support System (DSS) using Artificial
Intelligence (AI) techniques and cutting-edge IoT technologies. By combining standard clinical
screening procedures with ICT-based assessment techniques driven by Machine Learning
algorithms, SPECTRA seeks to enable early TRS diagnosis. The project involves a field study
with real patients from the Unit for Treatment-Resistant Psychosis at the University “Federico
II” of Naples. Eligible patients are categorized as either TRS or non-TRS, and the SPECTRA
team collects historical clinical data (including Magnetic Resonance Images, questionnaire
scores, demographic information, and geographical data) alongside real-time data from IoT
sensors (such as ECG, temperature, EEG, and audio/video signals) during patient screenings.
Conventional data and information obtained through ICT technologies are utilized to train
Machine Learning and Deep Learning models for the early detection of Treatment Resistant
Schizophrenia (TRS) patients. However, a significant challenge in adopting AI solutions for
healthcare support lies in clinicians’ lack of trust in the black-box nature of these systems. To
address this issue, another project aim is to develop interaction models that assist clinicians
during TRS diagnosis by leveraging eXplainable Artificial Intelligence (XAI) methods [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. These
XAI techniques should highlight how AI black-box models generate predictions, potentially
enhancing clinicians’ confidence in these novel technologies. Additionally, the project will
create a dataset distinguishing TRS from non-TRS cases, which will be valuable for scientific
research and experimentation in Machine Learning-based TRS diagnosis for individuals with
schizophrenia.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. The proposed approach</title>
      <p>In this study, our objective is to explore the gait characteristics of individuals diagnosed with
schizophrenia by conducting a visual analysis of their walking patterns. The research is
structured into two main stages: (i) data acquisition and preprocessing, and (ii) classification along
with explanation. The sequential flow of this process is illustrated in Figure 1.</p>
      <p>The initial phase involves recording two distinct groups: individuals diagnosed with
schizophrenia (SZ) and a matched control group, utilizing a comprehensive setup of multiple calibrated
mobile cameras. Participants in both groups provide informed consent, with the assurance
that their data will be anonymized and included in a publicly accessible dataset. We apply a
machine learning algorithm designed for body detection to derive skeletal joint data from the
captured video footage, resulting in 2D coordinates of the joints for each video frame. The
dataset creation employs a setup of three cameras to ensure a comprehensive data collection,
encompassing a diverse range of data samples. However, for practical application in real-world
scenarios, we propose the use of a single-camera setup for gait analysis to facilitate ease of
implementation in various settings.</p>
      <p>
        Inspired by spatial-temporal transformer approaches in the literature for motion
analysis and recognition, such as VideoPose[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] and MotionBert[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], we leveraged a dual transformer
network to analyze gait patterns, generate 3D skeletal representations, and classify participants.
To explain the classification results, we employed the SHAP algorithm to investigate the
decision-making process of our deep classification model.
The network processes the 2D skeleton sequence as input, yielding both a classification of
the subjects and 3D reconstructions of their corresponding skeletal movements. The 3D
reconstruction facilitates the extraction of critical movement features, such as step count and
walking speed, which are relevant for the clinician. The analysis provides these movement
features in conjunction with SHAP visualizations of the key frames and joints with respect
to the classification, offering domain experts enhanced insights to clarify the classification
outcomes.
      </p>
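As an illustrative sketch only (not the project's actual implementation), the factorized spatial-then-temporal attention underlying such dual-transformer models can be outlined in a few lines of NumPy. The sequence shape, the identity Q/K/V projections, and the hypothetical two-class head are all simplifying assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (n_tokens, d). Identity Q/K/V projections for brevity.
    d = x.shape[-1]
    a = softmax(x @ x.T / np.sqrt(d), axis=-1)  # (n, n) attention weights
    return a @ x

def spatio_temporal_block(seq):
    # seq: (T frames, J joints, C channels)
    T, J, C = seq.shape
    # spatial attention: joints attend to each other within a frame
    out = np.stack([self_attention(seq[t]) for t in range(T)])
    # temporal attention: each joint attends over all frames
    out = np.stack([self_attention(out[:, j]) for j in range(J)], axis=1)
    return out

rng = np.random.default_rng(0)
seq = rng.normal(size=(100, 17, 2))                  # 100 frames, 17 joints, (x, y)
feat = spatio_temporal_block(seq).mean(axis=(0, 1))  # pooled sequence embedding
W = rng.normal(size=(2, 2))                          # hypothetical 2-class head
probs = softmax(W @ feat)                            # SZ vs. control scores
```

Real models such as MotionBERT stack many such blocks with learned projections and multiple heads; the sketch only shows how spatial and temporal attention alternate over the joint sequence.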
      <sec id="sec-4-1">
        <title>4.1. Data collection and preprocessing</title>
        <p>Data are collected using a setup comprising three calibrated mobile cameras, as shown in Figure
2. We select a quiet and bright environment. Before starting the recording, participants walk
freely in the environment. Their task consists of walking forward and backward for two minutes
on a carpet 5 meters long and 1 meter wide. The walk is recorded by three cameras positioned
as follows: one frontal camera, one central camera, and a third at 45°.</p>
        <p>
          The recorded videos are then processed as follows:
• body detection, performed using a machine learning algorithm, specifically AlphaPose
[
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. The format of the human body joints used coincides with that of the Human3.6M
dataset [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] and is shown in Figure 3a.
• skeletal joint coordinate extraction, performed for each frame of the videos, providing
a sequence of 2D relative joint coordinates (Figure 3b).
        </p>
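The second step above, turning raw per-frame detections into relative joint coordinates, can be sketched as follows. This is a minimal illustration, not the project's pipeline: the pelvis root index and the per-frame scale normalization are assumptions about how "2D relative joints" are computed:

```python
import numpy as np

ROOT = 0  # pelvis index in the 17-joint Human3.6M layout (assumed)

def preprocess(joints_2d):
    """joints_2d: (T, 17, 2) raw pixel coordinates from the pose detector.
    Returns root-relative, per-frame scale-normalized coordinates."""
    rel = joints_2d - joints_2d[:, ROOT:ROOT + 1, :]   # pelvis-centred
    # normalize by the largest joint distance from the root in each frame,
    # so subjects at different distances from the camera become comparable
    scale = np.linalg.norm(rel, axis=-1).max(axis=1, keepdims=True)
    return rel / np.maximum(scale, 1e-8)[..., None]

rng = np.random.default_rng(0)
raw = rng.uniform(0, 640, size=(50, 17, 2))  # 50 frames of raw detections
norm = preprocess(raw)                        # root joint is 0 in every frame
```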
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Gait Analysis</title>
        <p>
          The network provides a classification of the patients. To enhance the performance of our
algorithm, we plan to fine-tune the Transformer network on our collected dataset; this may be
done by adding a linear layer or a multilayer perceptron (MLP). The transformer also
reconstructs the 3D representation of the patients' gaits from the recorded videos. This
reconstruction yields a sequence of skeletal representations capturing the motion dynamics
of each individual, which clinicians can use for further analysis. It should be noted that
the networks have been pre-trained on very large datasets and thus exhibit a very robust
embedded representation of human body movement dynamics. Due to the restricted availability
of video recordings and annotations, a common challenge in scenarios with scarce public
datasets, our approach leverages one-shot learning techniques and supervised contrastive
learning. These methods have previously demonstrated promise in tasks constrained by such
difficulties, as evidenced by Sabater et al.
[
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] and Khosla et al. [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. These studies validate the potential of employing these advanced
learning strategies in environments with limited data.
        </p>
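The fine-tuning head mentioned above, a linear layer or small MLP on top of the frozen transformer embedding, can be sketched as below. The embedding size, hidden width, and two-class output are illustrative assumptions, not the project's chosen hyperparameters:

```python
import numpy as np

def mlp_head(embedding, W1, b1, W2, b2):
    # one hidden layer with ReLU, then two logits (SZ vs. control)
    h = np.maximum(embedding @ W1 + b1, 0.0)
    return h @ W2 + b2

rng = np.random.default_rng(1)
d, hidden, n_classes = 256, 64, 2            # assumed sizes
W1 = rng.normal(0, 0.02, (d, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.02, (hidden, n_classes)); b2 = np.zeros(n_classes)

emb = rng.normal(size=d)                      # stand-in for a frozen embedding
logits = mlp_head(emb, W1, b1, W2, b2)        # only the head parameters are trained
```

During fine-tuning only `W1, b1, W2, b2` would receive gradients, which is what makes this strategy viable with the small dataset the project expects.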
      </sec>
      <sec id="sec-4-3">
        <title>4.3. The XAI interface</title>
        <p>The SHapley Additive exPlanations (SHAP) algorithm plays a pivotal role in demystifying the
decision-making process of our deep learning classification model. This approach is visually
represented by the matrix in Figure 4, where the SHAP explanation on a sample video is shown.
In particular, the columns represent frame numbers and the rows represent the joints under
consideration, such as the right wrist. The matrix cells are colored according to the SHAP values
which represent the influence of various body parts over time in a video classification model. In
this analysis the colored cells have the following meaning:
• Neutral areas, depicted in white on the heatmap, indicate points on the body that are not
significantly relevant to the model’s classification decision throughout the video frames.
• Red regions, representing body movements that positively contributed to the model's
decision-making process, strongly supporting the classification according to the model.
This suggests that they are characteristic of the predicted class.
• Blue regions, indicating body movements that potentially misled the model’s
classification, negatively impacting the decision, implying that the presence of these specific
motions might be atypical for the predicted class or more common in other classes.</p>
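The joints-by-frames attribution matrix described above can be approximated with a simple occlusion analysis, a cheap stand-in for SHAP (which additionally averages over feature coalitions). The toy scoring function and the right-wrist joint index are purely illustrative assumptions:

```python
import numpy as np

def occlusion_map(model, seq, baseline=0.0):
    """seq: (T, J, 2). Returns a (J, T) importance matrix: the score drop
    when each joint is occluded in each frame. Positive cells support the
    predicted class (red), negative cells oppose it (blue)."""
    T, J, _ = seq.shape
    ref = model(seq)
    heat = np.zeros((J, T))
    for t in range(T):
        for j in range(J):
            pert = seq.copy()
            pert[t, j] = baseline            # occlude one joint in one frame
            heat[j, t] = ref - model(pert)   # importance of that joint/frame
    return heat

# toy "model": score = mean x-coordinate of the right wrist (index 16, assumed)
model = lambda s: s[:, 16, 0].mean()
rng = np.random.default_rng(2)
heat = occlusion_map(model, rng.normal(size=(10, 17, 2)))  # (joints, frames)
```

Plotting `heat` as a heatmap with joints on the rows and frames on the columns reproduces the layout of the matrix in Figure 4.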
        <p>This SHAP-based visual representation allows for an intuitive understanding of which body
parts and their movements over time are considered decisive or inconsequential by the model,
thereby offering insights into the model's behavior and its reliance on specific features for
making classifications. These emphasized frames serve as a focal point for clinicians, enabling
them to delve deeper into the analysis and gain a clearer comprehension of the classifier’s
results. Such an approach facilitates a more nuanced understanding of the diagnostic outcomes.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Further patient features for the domain expert</title>
        <p>
          As an additional output, the network produces the 3D reconstructions of the skeletal sequences
by using the entire sequence of 2D skeletal keypoints. This representation is
fed into the gait feature extraction step. These features may be adopted by clinicians for
further analysis. In particular, based on the results collected by Martin et al.[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], we selected
the following features related to movement analysis, essential for understanding movement
patterns, assessing gait, and identifying abnormalities or changes.
        </p>
        <p>• Head posture. This refers to the angle between the head and the clavicle. It helps assess
alignment and balance during movement.
• Number of steps. Simply put, it’s the count of steps taken while walking. It’s a
fundamental measure of mobility.
• Walking speed. This parameter indicates how fast an individual walks. It’s usually
measured in meters per second (m/s) or kilometers per hour (km/h).
• Arm sway. Arm sway assesses the variability of the angle of the armpit or the variations in
the elbow angle during movement. It provides insights into arm movement coordination.
• Presence/absence of facial movements. This parameter observes whether there are
any noticeable facial expressions or movements during walking. It can be relevant for
assessing overall coordination and comfort.
• Step length. Step length measures the distance covered by a single step. It’s typically
calculated from heel strike to the next heel strike of the same foot.
• Variability of step length. This parameter evaluates how consistent or variable step
lengths are during walking. It can indicate gait stability and symmetry.</p>
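As a sketch of how some of the features above could be computed from the reconstructed skeletons, the snippet below detects heel strikes as local minima of heel height, a common heuristic, and derives step count, walking speed, and step-length variability. The frame rate and the synthetic trajectories are assumptions for illustration:

```python
import numpy as np

FPS = 30  # assumed camera frame rate

def gait_features(heel_y, hip_x, fps=FPS):
    """heel_y: vertical heel trajectory (metres) per frame; hip_x: forward
    hip position (metres) per frame. Heel strikes are approximated as
    local minima of the heel height."""
    strikes = [t for t in range(1, len(heel_y) - 1)
               if heel_y[t] < heel_y[t - 1] and heel_y[t] <= heel_y[t + 1]]
    n_steps = len(strikes)
    duration = len(hip_x) / fps                          # seconds
    speed = abs(hip_x[-1] - hip_x[0]) / duration         # m/s
    step_lengths = [abs(hip_x[b] - hip_x[a])             # strike-to-strike
                    for a, b in zip(strikes, strikes[1:])]
    variability = np.std(step_lengths) if step_lengths else 0.0
    return n_steps, speed, variability

# synthetic 4-second walk: heel bounces each step, hip advances at ~1 m/s
t = np.arange(120) / FPS
n, v, var = gait_features(0.05 * np.abs(np.sin(2 * np.pi * t)), 1.0 * t)
```

Angular features such as head posture and arm sway would follow the same pattern, computed per frame from joint triplets and then summarized over the sequence.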
        <p>The analysis of these measures may provide further support to the clinician in the detection of
anomalous patterns.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Preliminary assessment</title>
        <p>
          The SPECTRA project lasts two years, starting from November 2023, and patients' data will
be collected during the first year. Thus, to set up the appropriate pipeline while waiting
for the patients' data, we started with the PsyMo dataset [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ], consisting of walking
sequences recorded from 312 people, accompanied by 6 psychological questionnaires
they filled in.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this paper, we described the methodology we plan to adopt in the ongoing SPECTRA project
for performing gait analysis to detect people suffering from schizophrenia. We set up the
models and described the techniques we adopted.</p>
      <p>It is important to highlight that we propose to use a single-camera setup for gait analysis.
Results may be limited when compared to setups utilizing multiple cameras. However, this
approach may be preferable because it is easier to record the patient's gait in a setting such as
the patient's home or a rehabilitation center. This may be useful for supporting patient monitoring
and the prediction of relapse. In the future, we plan to compare the detection performance of
the two approaches (with multiple cameras and with a single camera). In addition, the reliance
on video footage for data acquisition may introduce challenges related to data quality,
consistency, and privacy concerns, potentially affecting the robustness and generalizability of
the findings. Concerning privacy issues, the research will follow the GDPR on patient data.
In particular, the patient's data will be anonymized. Figure 3 shows an example of a blurred
patient's face.</p>
      <p>We also plan to experiment with the proposed deep learning pipeline on existing datasets
and on the data of patients suffering from SZ, on control samples, and on TRS and non-TRS patients.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This project has been financially supported by the European Union NEXTGenerationEU project
and by the Italian Ministry of University and Research (MUR), a Research Project of
Significant National Interest (PRIN) 2022 PNRR, project n. D53D23017290001 entitled "Supporting
schizophrenia PatiEnts Care wiTh aRtificiAl intelligence (SPECTRA)".</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>World Health Organization.</surname>
          </string-name>
          , Schizophrenia,
          <year>2022</year>
          . URL: https://www.who.int/news-room/fact-sheets/detail/schizophrenia.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bobes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Garcia-Portilla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Bascaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Saiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bouzoño</surname>
          </string-name>
          ,
          <article-title>Quality of life in schizophrenic patients</article-title>
          ,
          <source>Dialogues in Clinical Neuroscience</source>
          <volume>9</volume>
          (
          <year>2007</year>
          )
          <fpage>215</fpage>
          -
          <lpage>226</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kubera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. F.</given-names>
            <surname>Troje</surname>
          </string-name>
          , T. Fuchs,
          <article-title>Movement markers of schizophrenia: a detailed analysis of patients' gait patterns</article-title>
          ,
          <source>European Archives of Psychiatry and Clinical Neuroscience</source>
          <volume>272</volume>
          (
          <year>2022</year>
          )
          <fpage>1347</fpage>
          -
          <lpage>1364</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Miao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Automatic mental health identification method based on natural gait pattern</article-title>
          ,
          <source>Psych J 10</source>
          (
          <year>2021</year>
          )
          <fpage>453</fpage>
          -
          <lpage>464</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zago</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Luzzago</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Marangoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>De Cecco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tarabini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Galli</surname>
          </string-name>
          ,
          <article-title>3D Tracking of Human Motion Using Visual Skeletonization and Stereoscopic Vision</article-title>
          ,
          <source>Front Bioeng Biotechnol</source>
          <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>181</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Belvederi Murri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Triolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Coni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tacconi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Nerozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Escelsior</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Respino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Neviani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bertolotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bertakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chiari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zanetidou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Amore</surname>
          </string-name>
          ,
          <article-title>Instrumental assessment of balance and gait in depression: A systematic review</article-title>
          ,
          <source>Psychiatry Res</source>
          <volume>284</volume>
          (
          <year>2020</year>
          )
          <fpage>112687</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <article-title>A gait assessment framework for depression detection using kinect sensors</article-title>
          ,
          <source>IEEE Sensors Journal</source>
          <volume>21</volume>
          (
          <year>2021</year>
          )
          <fpage>3260</fpage>
          -
          <lpage>3270</lpage>
          . doi:10.1109/JSEN.2020.3022374.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V. R.</given-names>
            <surname>Varma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ghosal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Hillel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Volfson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Urbanek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Hausdorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Zipunnikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Watts</surname>
          </string-name>
          ,
          <article-title>Continuous gait monitoring discriminates community-dwelling mild Alzheimer's disease from cognitively normal controls</article-title>
          ,
          <source>Alzheimers Dement (N Y)</source>
          <volume>7</volume>
          (
          <year>2021</year>
          )
          <fpage>e12131</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tonna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Lucarini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lucchese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Presta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Paraboschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Marsella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. D.</given-names>
            <surname>Daniel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vitale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Marchesi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gobbi</surname>
          </string-name>
          ,
          <article-title>Posture, gait and self-disorders: An empirical study in individuals with schizophrenia</article-title>
          ,
          <source>Early Interv Psychiatry</source>
          <volume>17</volume>
          (
          <year>2023</year>
          )
          <fpage>447</fpage>
          -
          <lpage>461</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Freire-Obregón</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Castrillón-Santana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bisogni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nappi</surname>
          </string-name>
          ,
          <article-title>An attention recurrent model for human cooperation detection</article-title>
          ,
          <source>Computer Vision and Image Understanding</source>
          <volume>197-198</volume>
          (
          <year>2020</year>
          )
          <fpage>102991</fpage>
          . doi:10.1016/j.cviu.2020.102991.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ahmadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Destelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Unzueta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Monaghan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Teresa</given-names>
            <surname>Linaza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Moran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. E.</given-names>
            <surname>O'Connor</surname>
          </string-name>
          ,
          <article-title>3d human gait reconstruction and monitoring using body-worn inertial sensors and kinematic modeling</article-title>
          ,
          <source>IEEE Sensors Journal</source>
          <volume>16</volume>
          (
          <year>2016</year>
          )
          <fpage>8823</fpage>
          -
          <lpage>8831</lpage>
          . doi:10.1109/JSEN.2016.2593011.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V.</given-names>
            <surname>Bijalwan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Semwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Mandal</surname>
          </string-name>
          ,
          <article-title>Fusion of multi-sensor-based biomechanical gait analysis using vision and wearable sensor</article-title>
          ,
          <source>IEEE Sensors Journal</source>
          <volume>21</volume>
          (
          <year>2021</year>
          )
          <fpage>14213</fpage>
          -
          <lpage>14220</lpage>
          . doi:10.1109/JSEN.2021.3066473.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Simon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-E.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sheikh</surname>
          </string-name>
          ,
          <article-title>Realtime multi-person 2d pose estimation using part affinity fields</article-title>
          ,
          <source>in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guestrin</surname>
          </string-name>
          ,
          <article-title>"Why should I trust you?": Explaining the predictions of any classifier</article-title>
          ,
          <source>in: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A unified approach to interpreting model predictions</article-title>
          , in: I. Guyon,
          <string-name>
            <given-names>U. V.</given-names>
            <surname>Luxburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fergus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vishwanathan</surname>
          </string-name>
          , R. Garnett (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>30</volume>
          , Curran Associates, Inc.,
          <year>2017</year>
          . URL: https://proceedings.neurips.cc/paper_files/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shetkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Bapat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ojha</surname>
          </string-name>
          , T. T. Verlekar,
          <article-title>Xai-based gait analysis of patients walking with knee-ankle-foot orthosis using video cameras</article-title>
          ,
          <source>arXiv preprint arXiv:2402.16175</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>F.</given-names>
            <surname>Iasevoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Avagliano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Altavilla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ciccarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>D'Ambrosio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. N.</given-names>
            <surname>Francesco</surname>
          </string-name>
          , E. Razzino,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fornaro</surname>
          </string-name>
          , A. de Bartolomeis,
          <article-title>Evaluation of a few discrete clinical markers may predict categorization of actively symptomatic non-acute schizophrenia patients as treatment resistant or responders: A study by roc curve analysis and multivariate analyses</article-title>
          ,
          <source>Psychiatry research</source>
          <volume>269</volume>
          (
          <year>2018</year>
          )
          <fpage>481</fpage>
          -
          <lpage>493</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>I.</given-names>
            <surname>Amaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Francese</surname>
          </string-name>
          , G. Tortora,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>D'Errico</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stafa</surname>
          </string-name>
          ,
          <article-title>Supporting schizophrenia patients' care with robotics and artificial intelligence</article-title>
          , in:
          <string-name>
            <given-names>Q.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Duffy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Antona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stephanidis</surname>
          </string-name>
          (Eds.),
          <source>HCI International 2023 - Late Breaking Papers - 25th International Conference on Human-Computer Interaction, HCII</source>
          <year>2023</year>
          , Copenhagen, Denmark, July 23-28,
          <year>2023</year>
          , Proceedings, Part II
          , volume
          <volume>14055</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2023</year>
          , pp.
          <fpage>482</fpage>
          -
          <lpage>495</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Cantone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. P.</given-names>
            <surname>Perillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sebillo</surname>
          </string-name>
          , G. Vitiello,
          <article-title>Enhancing elderly health monitoring: Achieving autonomous and secure living through the integration of artificial intelligence, autonomous robots, and sensors</article-title>
          ,
          <source>Electronics</source>
          <volume>12</volume>
          (
          <year>2023</year>
          ). URL: https://www.mdpi.com/2079-9292/12/18/3918. doi:10.3390/electronics12183918.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Francese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Risi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Tortora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Salle</surname>
          </string-name>
          ,
          <article-title>Thea: empowering the therapeutic alliance of children with ASD by multimedia interaction</article-title>
          ,
          <source>Multim. Tools Appl.</source>
          <volume>80</volume>
          (
          <year>2021</year>
          )
          <fpage>34875</fpage>
          -
          <lpage>34907</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Antoniadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guendouz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mazo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Becker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mooney</surname>
          </string-name>
          ,
          <article-title>Current challenges and future opportunities for xai in machine learning-based clinical decision support systems: a systematic review</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>11</volume>
          (
          <year>2021</year>
          )
          <fpage>5088</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>D.</given-names>
            <surname>Pavllo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Feichtenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Grangier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Auli</surname>
          </string-name>
          ,
          <article-title>3d human pose estimation in video with temporal convolutions and semi-supervised training</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>7753</fpage>
          -
          <lpage>7762</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          , L. Liu, W. Wu,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Motionbert: A unified perspective on learning human motion representations</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF International Conference on Computer Vision</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>15085</fpage>
          -
          <lpage>15099</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>H.-S.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <article-title>Alphapose: Whole-body regional multi-person pose estimation and tracking in real-time</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>C.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Papava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Olaru</surname>
          </string-name>
          , C. Sminchisescu,
          <article-title>Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments</article-title>
          ,
          <source>IEEE transactions on pattern analysis and machine intelligence</source>
          <volume>36</volume>
          (
          <year>2013</year>
          )
          <fpage>1325</fpage>
          -
          <lpage>1339</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sabater</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Santos-Victor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bernardino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Montesano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Murillo</surname>
          </string-name>
          ,
          <article-title>One-shot action recognition in challenging therapy scenarios</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>2777</fpage>
          -
          <lpage>2785</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>P.</given-names>
            <surname>Khosla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Teterwak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sarna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Isola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maschinot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Krishnan</surname>
          </string-name>
          ,
          <article-title>Supervised contrastive learning</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>18661</fpage>
          -
          <lpage>18673</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A.</given-names>
            <surname>Cosma</surname>
          </string-name>
          , E. Radoi,
          <article-title>Psymo: A dataset for estimating self-reported psychological traits from gait</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>4603</fpage>
          -
          <lpage>4613</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>