<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
<article-title>Machine Learning for Cognitive and Mental Health Workshop (ML4CMH), AAAI 2024</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marija Stanojevic</string-name>
          <email>mstanojevic118@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cambridge Cognition</institution>
          ,
          <addr-line>Toronto, ON</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <abstract>
<p>With a COVID-19-magnified mental health crisis and a growing older population (10.7% of the population aged over 65 is diagnosed with Alzheimer's disease and 18% with mild cognitive impairment (MCI)), there is an immediate need for systems that can better understand and characterize cognitive and mental health (CMH) by tracking various biomarkers from functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), speech, electronic health records (EHR), movement, cognitive surveys, wearable devices, and structured, genomic, and epigenomic data. One of the core technical opportunities for accelerating the computational analysis of CMH lies in multimodal (MM) ML: learning representations that model the heterogeneity and interconnections between diverse input signals. MM learning is particularly important in CMH primarily due to the presence of noisy labels and the subjectivity inherent in surveys; the utilization of multiple signals and modalities offers a potential solution to these challenges. In addition, it is imperative to emphasize the necessity for increased data sharing and enhanced collaboration within the CMH research community. As we endeavor to tackle the multifaceted challenges posed by cognitive and mental health disorders, a collective effort is essential to facilitate access to high-quality datasets and promote collaborative initiatives. By promoting transparency and facilitating the exchange of insights and methodologies, we can accelerate progress and drive innovation in CMH research. This workshop serves as a platform for fostering such collaboration, inviting participants to contribute their expertise and insights toward the shared goal of advancing our understanding and treatment of cognitive and mental health disorders. Together, through open dialogue and shared resources, we can chart a path toward a brighter future for individuals affected by CMH conditions.</p>
      </abstract>
      <kwd-group>
        <kwd>Mental health crisis</kwd>
        <kwd>Cognitive health</kwd>
        <kwd>Biomarkers</kwd>
        <kwd>Multimodal Learning</kwd>
        <kwd>Deep learning</kwd>
        <kwd>Multilingual clinical data</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        Recently, major progress has been made in pre-trained
deep and MM learning from text, speech, images, video,
signals, and structured data [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1, 2, 3, 4</xref>
        ], and there has also been initial success toward using
deep learning and MM streams to improve prediction of
patient status or response to treatment in CMH applications [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8 ref9 ref10 ref11 ref12 ref13 ref14 ref15 ref16">5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16</xref>
        ]. However, there remain computational and theoretical
challenges that need to be solved in machine learning for CMH, spanning:
1. collecting and sharing quality data for moderate and severe patients,
2. learning from many diverse and understudied signals,
3. theoretically understanding the natural way of modality connections and interactions in MM learning,
4. real-world deployment concerns such as safety, robustness, interpretability, and collaboration with various stakeholders, and
5. extending models to low-resource and multilingual environments.
This workshop has three primary goals:
1. bring together experts from multiple disciplines working on ML and CMH to learn from each other,
2. encourage the development of shared goals and approaches across these communities, and
3. stimulate the creation of better MM technologies for real-world CMH impact.
      </p>
      <sec id="sec-2-1">
        <title>1. bring together experts from multiple disciplines</title>
        <p>working on ML and CMH to learn from each</p>
      </sec>
      <sec id="sec-2-2">
        <title>2. encourage the development of shared goals and</title>
        <p>approaches across these communities, and</p>
      </sec>
      <sec id="sec-2-3">
        <title>3. stimulate creation of better MM technologies for</title>
        <p>real-world CMH impact.</p>
        <p>To achieve these goals, this workshop includes a
diverse lineup of invited speakers across fields associated
with ML and CMH, hosting experts from computer
vision (CV), natural language processing (NLP), MM
learning, signal processing, human-computer interaction,
neuroscience, psychiatry, and psychology. To encourage
discussion and further collaboration toward the
advancement of ML for CMH, the workshop combines
invited talks, contributed papers and posters, and panel
4. real-world deployment concerns such as safety, discussion. In addition, organizers hosted a mentorship
robustness, interpretability, and collaboration
with various stakeholders, and
program with help of mentors from the program
committee, similar to mentorship program of ACL-SRW1,
5. extending models to low resource and multilin- in order to increase reach and to help researchers from
gual environments.</p>
        <p>Machine Learning for Cognitive and Mental Health Workshop
(ML4CMH), AAAI 2024, Vancouver, BC, Canada
https://winterlightlabs.github.io/ml4cmh2024/
∗Corresponding author.
nEvelop-O
across the world who are new to this field to improve the
quality of their papers before the submission time.</p>
      </sec>
      <sec id="sec-2-4">
        <title>This workshop contributes to the diversity of the field</title>
        <p>and increases collaboration between machine learning,
psychiatry, psychology, and neuroscience researchers. It</p>
        <sec id="sec-2-4-1">
          <title>Prof. Peter Foltz</title>
        </sec>
        <sec id="sec-2-4-2">
          <title>Dr. Sunny Tang</title>
          <p>Time
9:00 - 9:05 am
encourages collaboration to solve critical CMH tasks and
create new datasets and resources to foster CMH research.
In addition, it encourages multilingual and multimodal
research. The organizers put an efort to invite keynote
speakers, panelists, and program committee members
from diverse backgrounds, involving both academia and
industry. Specifically, organizers made concerted eforts
to involve underrepresented groups, so speakers include
LGBTQ people, and 50% of female. Moreover, program
committee comprises researchers come from 12 countries
across 5 continents.</p>
        </sec>
      </sec>
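      <p>To make concrete the kind of MM learning the workshop targets, the sketch below shows one common pattern: encode each modality separately, then fuse the embeddings for a downstream CMH prediction. This is a minimal illustrative sketch only, not a method from any contributed paper; it assumes PyTorch and pre-extracted per-modality features, and all names and dimensions (e.g., LateFusionCMHClassifier, speech_dim) are placeholders.</p>
      <preformat>
import torch
import torch.nn as nn

class LateFusionCMHClassifier(nn.Module):
    """Toy late-fusion model: per-modality encoders, concatenated embeddings."""
    def __init__(self, speech_dim=128, text_dim=768, hidden=64, n_classes=2):
        super().__init__()
        # One small encoder per modality (e.g., speech features, text features).
        self.speech_enc = nn.Sequential(nn.Linear(speech_dim, hidden), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Classification head over the fused (concatenated) representation.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, speech_feats, text_feats):
        fused = torch.cat(
            [self.speech_enc(speech_feats), self.text_enc(text_feats)], dim=-1
        )
        return self.head(fused)

# Toy usage: random tensors stand in for real extracted biomarkers.
model = LateFusionCMHClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
      </preformat>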
    </sec>
    <sec id="sec-3">
      <title>2. Workshop Structure</title>
      <p>The workshop will take place at the Vancouver Convention
Centre - West Building, Room 205, on February 26th,
2024. It features six keynote speakers, oral sessions,
poster sessions, a panel discussion, and a networking
lunch. From 20 submitted papers, six were selected for
oral and poster presentation and an additional nine papers
were selected for poster presentation only. The acceptance
rate was therefore 75%. See the detailed schedule in Table 2;
further details about the workshop can be accessed at
https://winterlightlabs.github.io/ml4cmh2024/.</p>
      <p>Papers selected for oral and poster presentation:
1. [Long] Knowledge-enhanced Memory Model for Emotional Support Conversation
2. [Long] Learning to Generate Context-Sensitive Backchannel Smiles for Embodied AI Agents with Applications in Mental Health Dialogues
3. [Short] A Pretrained Language Model for Mental Health Risk Detection
4. [Short] PMC: Paired Multi-Contrast MRI Dataset at 1.5T and 3T for Supervised Image2Image Translation
5. [Short] Dance of the Neurons: Unraveling Sex from Brain Signals
6. [Abstract] Mental Health Stigma across Diverse Generative Large Language Models</p>
      <p>Papers selected for poster presentation only:
1. [Long] ConversationMoC: Encoding Conversational Dynamics using Multiplex Network for Identifying Moment of Change in Mood and Mental Health Classification
2. [Short] A Privacy-Preserving Unsupervised Speaker Disentanglement Method for Depression Detection from Speech
3. [Long] Ordinal Scale Evaluation of Smiling Intensity using Comparison-Based Network
4. [Long] Natural Language Explanations for Suicide Risk Classification Using Large Language Models
5. [Long] Deploying AI Methods for Mental Health in Singapore: From Mental Wellness to Serious Mental Health Conditions
6. [Short] Investigating Bias in Affective State Detection Using Eye Biometrics
7. [Long] Towards Remote Differential Diagnosis of Mental and Neurological Disorders using Automatically Extracted Speech and Facial Features
8. [Short] Prediction of Relapse in Adolescent Depression using Fusion of Video and Speech Data
9. [Long] Toward A Reinforcement-Learning-Based System for Adjusting Medication to Minimize Speech Disfluency</p>
    </sec>
    <sec id="sec-4">
      <title>3. Keynote Speakers</title>
    </sec>
    <sec id="sec-5">
      <title>4. Panel Speakers</title>
      <p>1. Peter Foltz8, University of Colorado, Boulder, Professor, Cognitive Science &amp; Computational Psychiatry
2. Paola Pedrelli9, Harvard Medical School, Assistant Professor, ML for Psychology
3. Frank Rudzicz10, Dalhousie University, Vector Institute, CIFAR, Associate Professor, ML for Healthcare
4. Jekaterina Novikova11, Winterlight Labs, ML Director, NLP &amp; Speech, ML for CMH
5. Vikram Ramanarayanan12, Modality.AI, CSO, Speech &amp; Image Processing for CMH
6. Xiaoxiao Li13, University of British Columbia, Trustworthy AI
8https://scholar.google.com/citations?user=UwQSEOkAAAAJ
9https://scholar.google.com/citations?user=E_Ug5tsAAAAJ
10https://scholar.google.ca/citations?user=elXOB1sAAAAJ
11https://scholar.google.com/citations?user=C75JskwAAAAJ
12https://scholar.google.com/citations?user=mUm8U2IAAAAJ
13https://scholar.google.com/citations?user=sdENOQ4AAAAJ</p>
    </sec>
    <sec id="sec-5-1">
      <title>Organizers</title>
      <p>Marija Stanojevic14, Ph.D. is an Applied Machine Learning
Scientist at Winterlight Labs. She focuses on representation
learning and multimodal, multilingual, and transfer learning
for cognitive and mental health. She was a virtual chair of
ICLR 2021 and ICML 2021 and the main organizer of the
9th Mid-Atlantic Student Colloquium on Speech, Language
and Learning (MASC-SLL 2022). General Chair.</p>
      <p>Elizabeth Shriberg15, Ph.D. specializes in the computational
modeling of speech and language. She is currently CSO at
Ellipsis Health, a start-up developing speech-based mental
health screening technologies for clinical applications. She
previously held Senior Principal Scientist roles at Amazon,
SRI International, and Microsoft. She is a Fellow of ISCA16,
SRI17, and AAIA18, and has over 300 publications and patents
in speech technology and related fields. Speaker &amp; Panel Chair.</p>
      <p>Paul Pu Liang19 is a PhD student at CMU. He researches
foundations of multimodal machine learning with applications
in socially intelligent AI, understanding human and machine
intelligence, natural language processing, healthcare, and
education. He organized workshops on multimodal learning
at ACL 2018, ACL 2020, NeurIPS 2020, NAACL 2021, and
NAACL 2022, and was a workflow chair for ICML 2019.
Program Co-chair.</p>
      <p>Jelena Curcic20, Ph.D. is a Senior Data Scientist at Novartis
Institutes for Biomedical Research with expertise in the
development, deployment, and advanced analytics of digital
endpoints and biomarkers in the neuroscience disease area.
Her topics of interest are cognition and neuropsychiatric
symptoms in neurodegenerative and mood disorders.
Publication Chair.</p>
      <p>Zining Zhu21 is an Assistant Professor at Stevens Institute
of Technology. He is interested in building interpretable and
trustworthy systems with deep neural networks. His research
applies developments in deep neural network (DNN)-based
systems to the detection of cognitive impairments using data
from multiple modalities. Mentorship Chair.</p>
      <p>Malikeh Ehghaghi22 is a machine learning research scientist
at Arcee.ai. She graduated with a Master of Science in Applied
Computing from the University of Toronto. She has over 4
years of research experience in applied data science and
machine learning and is particularly interested in natural
language processing, speech processing, multimodal machine
learning for health, and interpretability. Program Co-chair.</p>
      <p>Ali Akram23 is a Machine Learning Engineer at Cambridge
Cognition and graduated from the Systems Design Engineering
program at the University of Waterloo. He is interested in the
efficient orchestration of machine learning models and in
applications of multimodal machine learning that leverage
speech as the modality of choice. Technical Chair.
14https://scholar.google.com/citations?user=pAyfhIkAAAAJ
15https://scholar.google.com/citations?user=nRZJYPIAAAAJ
16https://www.isca-speech.org/iscaweb/
17https://www.sri.com/about-us/
18https://www.aaia-ai.org/
19https://scholar.google.com/citations?user=pKf5LtQAAAAJ
20https://scholar.google.com/citations?user=Se8a2b8AAAAJ
21https://scholar.google.ca/citations?user=Xr_hCJMAAAAJ
22https://scholar.google.com/citations?user=les29Z8AAAAJ
23https://www.akramsystems.com/</p>
    </sec>
    <sec id="sec-5-2">
      <title>5. Program Committee</title>
      <p>1) Brandon M Booth, University of Colorado;
2) Kathleen C. Fraser, National Research Council Canada;
3) Wilson Y. Lee, HubSpot;
4) Ashutosh Modi, Indian Institute of Technology Kanpur;
5) Albert Ali Salah, Utrecht University;
6) Roland Goecke, University of Canberra;
7) Andreas Triantafyllopoulos, University of Augsburg;
8) Daniele Riboni, University of Cagliari;
9) Korbinian Riedhammer, Technische Hochschule Nürnberg;
10) Paula A. Perez-Toro, Friedrich-Alexander Universität;
11) Torsten Wörtwein, Carnegie Mellon University;
12) Loukas Ilias, National Technical University of Athens;
13) Arun Das, University of Pittsburgh Medical Center;
14) Jingqi Chen, Fudan University;
15) Eloy Geenjaar, Georgia Institute of Technology;</p>
    </sec>
    <sec id="sec-6">
      <title>6. Acknowledgement</title>
      <p>We would like to thank the following people for their
help and support during workshop preparation: 1) Aparna
Balagopalan, PhD Student at MIT; 2) Thomas Hartvigsen,
PhD, Assistant Professor at University of Virginia; and
3) William Jarrold, Trade Desk.</p>
      <p>We would like to express our sincere gratitude to
Winterlight Labs24, Canada, and Cambridge Cognition25, UK,
for their generous support and contribution to the success
of this event. We are deeply appreciative of their support
and partnership, which has been instrumental in making
this event possible.
24https://winterlightlabs.com/
25https://cambridgecognition.com/</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>arXiv:2204.00088</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Chatzianastasis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ilias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Askounis</surname>
          </string-name>
          , M. Vazir-
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>in: ICASSP</source>
          <year>2023</year>
          -2023 IEEE International Confer-
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <source>(ICASSP)</source>
          , IEEE,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Diep</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stanojevic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Novikova</surname>
          </string-name>
          , Multi-modal
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>detection</surname>
          </string-name>
          ,
          <source>arXiv preprint arXiv:2212.14490</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ehghaghi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rudzicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Novikova</surname>
          </string-name>
          , Data-driven
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>arXiv preprint arXiv:2210.03303</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Golovanevsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Eickhof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Singh</surname>
          </string-name>
          , Multimodal
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>Informatics Association</source>
          <volume>29</volume>
          (
          <year>2022</year>
          )
          <fpage>2014</fpage>
          -
          <lpage>2022</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ilias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Askounis</surname>
          </string-name>
          , Multimodal deep learning
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>transcripts</surname>
          </string-name>
          , Frontiers in Aging Neuroscience 14
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hong</surname>
          </string-name>
          , Automatic depres[1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baevski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-N.</given-names>
            <surname>Hsu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Babu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Auli, sion detection via learning and fusing features from</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <article-title>Data2vec: A general framework for self-supervised visual cues</article-title>
          ,
          <source>IEEE Transactions on Computational</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <article-title>learning in speech, vision and language</article-title>
          , in: Inter- Social
          <string-name>
            <surname>Systems</surname>
          </string-name>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <source>national Conference on Machine Learning</source>
          , PMLR, [14]
          <string-name>
            <surname>P.-C. Wei</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Peng</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Roitberg</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>J</given-names>
          </string-name>
          . Zhang,
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <year>2022</year>
          , pp.
          <fpage>1298</fpage>
          -
          <lpage>1312</lpage>
          . R. Stiefelhagen, Multi-modal depression estima[2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Girdhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>El-Nouby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <surname>K.</surname>
          </string-name>
          <article-title>V. tion based on sub-attentional fusion</article-title>
          , in: Computer
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Alwala</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Joulin</surname>
            ,
            <given-names>I. Misra</given-names>
          </string-name>
          , Imagebind: One em- Vision
          <string-name>
            <surname>-ECCV 2022 Workshops: Tel Aviv</surname>
          </string-name>
          , Israel, Oc-
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <article-title>bedding space to bind them all</article-title>
          ,
          <source>in: Proceedings of tober 23-27</source>
          ,
          <year>2022</year>
          , Proceedings,
          <string-name>
            <surname>Part</surname>
            <given-names>VI</given-names>
          </string-name>
          , Springer,
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <source>the IEEE/CVF Conference on Computer Vision and 2023</source>
          , pp.
          <fpage>623</fpage>
          -
          <lpage>639</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <given-names>Pattern</given-names>
            <surname>Recognition</surname>
          </string-name>
          ,
          <year>2023</year>
          , pp.
          <fpage>15180</fpage>
          -
          <lpage>15190</lpage>
          . [15]
          <string-name>
            <surname>A.-M. Bucur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Cosma</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>L. P.</given-names>
          </string-name>
          <string-name>
            <surname>Dinu</surname>
          </string-name>
          , It's [3] OpenAI, Gpt-4
          <source>technical report</source>
          ,
          <article-title>arXiv preprint just a matter of time: Detecting depression with</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <source>arXiv:2303.08774</source>
          (
          <year>2023</year>
          ).
          <article-title>time-enriched multimodal transformers</article-title>
          ,
          <source>Advances</source>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Akkus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Djakovic</surname>
          </string-name>
          ,
          <string-name>
            <surname>S.</surname>
          </string-name>
          <article-title>Jauch-Walser, in Information Retrieval</article-title>
          .
          <source>ECIR 2023. Lecture Notes</source>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <given-names>P.</given-names>
            <surname>Koch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Loss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Marquardt</surname>
          </string-name>
          , M. Moldovan, in Computer Science (
          <year>2023</year>
          )
          <fpage>200</fpage>
          -
          <lpage>215</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Sauter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schneider</surname>
          </string-name>
          , et al.,
          <source>Multimodal</source>
          <volume>deep</volume>
          [16]
          <string-name>
            <surname>D. M. Jacobs</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Sano</surname>
            , G. Dooneief,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Marder</surname>
            ,
            <given-names>K. L.</given-names>
          </string-name>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>learning</surname>
          </string-name>
          ,
          <source>arXiv preprint arXiv:2301.04856</source>
          (
          <year>2023</year>
          ). Bell,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Stern</surname>
          </string-name>
          , Neuropsychological detection and [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          , J. Han,
          <article-title>D-vlog: Multi- characterization of preclinical alzheimer's disease,</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <article-title>modal vlog dataset for depression detection</article-title>
          ,
          <source>in: Neurology</source>
          <volume>45</volume>
          (
          <year>1995</year>
          )
          <fpage>957</fpage>
          -
          <lpage>962</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>Intelligence</surname>
          </string-name>
          , volume
          <volume>36</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>12226</fpage>
          -
          <lpage>12234</lpage>
          . [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. I.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <source>communications 13</source>
          (
          <year>2022</year>
          )
          <fpage>3404</fpage>
          . [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Fara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Goria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Molimpakis</surname>
          </string-name>
          , N. Cummins,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>