<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Conceptual Framework for Emotion-Aware Monitoring of Sundown Syndrome</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Qianru Xu</string-name>
          <email>psy.qianru.xu@outlook.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Penglan Liu</string-name>
          <email>penglan.p.liu@jyu.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chaoxiong Ye</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Psychology, University of Jyväskylä</institution>
          ,
          <addr-line>40014</addr-line>
          ,
          <country country="FI">Finland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Education, Anyang Normal University</institution>
          ,
          <addr-line>Anyang 455000</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Sundown syndrome (SS) is a late-day neuropsychiatric condition in older adults with dementia, marked by worsening cognitive, emotional, and behavioral disturbances. Current detection relies on caregiver reports and retrospective questionnaires, which are subjective and often miss brief or subtle episodes. Video sensing naturally provides multimodal data, including video, audio, and derived contextual and physiological signals, making it well-suited to capture the complexity of SS and support personalized profiles. This paper presents a conceptual framework for continuous, emotion-aware monitoring of SS that centers on automatic facial expression recognition combined with environmental and physiological sensing. Deep-learning models detect micro- and macro-expressions under natural conditions, while passive sensors track light, noise, motion, and time-of-day patterns to provide contextual cues for timely alerts and intervention. We discuss key challenges for real-world deployment, including privacy protection, algorithmic fairness for older faces, system reliability, and user acceptance in care settings. By integrating ethical safeguards and adaptive feedback, the proposed approach shifts SS monitoring from subjective, delayed assessment toward proactive, individualized dementia care.</p>
      </abstract>
      <kwd-group>
        <kwd>sundown syndrome</kwd>
        <kwd>facial expression recognition</kwd>
        <kwd>multimodal sensing</kwd>
        <kwd>dementia care</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Sundown syndrome (SS) is a complex neuropsychiatric condition observed mainly in older adults with
dementia. It typically emerges in the late afternoon or evening, when patients display a cluster of
symptoms that may include a wide range of cognitive, emotional, and behavioural changes [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ].
The onset and severity of these symptoms can vary from day to day and differ among individuals,
and their unpredictable timing places heavy emotional and physical demands on family members
and professional caregivers. Despite its clinical importance, SS remains poorly understood. Reported
prevalence ranges from as low as 2.4% to over 60%, largely because of inconsistent diagnostic criteria
and differences in care settings [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. Proposed contributing factors include sleep disturbance, circadian
rhythm disruption, altered melatonin secretion, and environmental triggers such as dim light or changes
in evening routines, suggesting a complex interplay of neurobiological, clinical, and environmental
or social factors [
        <xref ref-type="bibr" rid="ref4">4, 5</xref>
        ]. These uncertainties complicate early detection and timely intervention and
highlight the need for more objective and continuous monitoring approaches.
      </p>
      <p>Currently, SS detection relies primarily on caregiver observations and questionnaire-based
assessments such as the Neuropsychiatric Inventory [6]. These methods provide valuable clinical information
but depend heavily on caregiver reporting and are inherently subjective and retrospective. As a result,
brief or subtle episodes may be missed, reducing opportunities for timely intervention. With the rapid
development of artificial intelligence and sensor technology, multimodal approaches are emerging as
promising solutions for risk identification, diagnosis, monitoring, and treatment. Integrating wearable
devices, environmental sensors, and advanced analytic methods can provide a more comprehensive and
objective picture of symptom development, especially given the complexity of SS [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>Among emerging approaches for objective and continuous monitoring, video-based sensing is
particularly promising. It can capture visual, auditory, and other contextual information to provide a
comprehensive view of patient status and, with recent advances, can even extract certain physiological
signals directly from video [7, 8]. Specifically, automatic facial expression recognition (FER, for reviews
see [9, 10]) uses computer vision and machine learning techniques to infer emotional states from facial
movements, offering a direct way to detect the subtle affective changes that often precede SS episodes.
To explore this potential, our recent pilot study tested the use of video signals for detecting SS states and
confirmed that information extracted from both facial and bodily features, especially emotional cues
from facial expressions, is a feasible indicator [11]. Building on these findings, this article highlights
the limitations of current monitoring methods, presents a conceptual framework that integrates FER
with environmental and physiological data for early detection and intervention in dementia care, and
examines the key ethical and practical considerations for implementation.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Current challenges in sundown syndrome monitoring</title>
      <sec id="sec-2-1">
        <title>2.1. Definition and diagnostic ambiguity</title>
        <p>
          Unlike many psychiatric or neurological disorders, SS lacks a standardized definition and formal
diagnostic criteria and is not included in major classification systems such as the DSM-5 [12]. Even so,
the term is widely used in clinical practice to describe a typical pattern of late-day behavioural changes
[
          <xref ref-type="bibr" rid="ref1">1, 13</xref>
          ]. Symptoms such as mood changes, agitation, pacing, disorientation, and psychosis are difficult
to distinguish from the usual progression of dementia or from other conditions such as delirium [14].
Moreover, the severity and combination of these symptoms can vary not only between patients but
also from one day to the next in the same individual [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], which makes the condition especially hard to
define and detect.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Dependence on subjective caregiver observation</title>
        <p>Although recent approaches have begun to incorporate objective measures such as motion sensors to
record activity changes during sundowning [15, 16], most assessments still rely heavily on
caregiver-completed checklists or behavioural logs. These tools remain retrospective and subjective, depending on
the caregiver’s attentiveness, memory, and interpretation of behaviour, which makes them vulnerable to
bias [13]. Brief or low-intensity episodes may go unnoticed or unreported, especially in under-resourced
care settings, which can heighten caregiver burden and lead to more serious consequences.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Lack of real-time, objective indices</title>
        <p>
          Even in well-equipped facilities, SS detection often occurs only after symptoms become severe. One
major reason is the absence of a gold-standard index for real-time, continuous measurement. Objective
physiological indicators such as heart-rate variability, skin conductance, or brain activity could serve as
potential biomarkers for SS, yet they have been largely overlooked in SS research. Motion detectors
and ambient sensors can monitor behaviour, but their signals lack specificity and may not reliably
distinguish SS from other sources of restlessness unless additional contextual information or intensive
manual filtering is applied (for a detailed discussion, see [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]).
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Temporal complexity and environmental sensitivity</title>
        <p>
          SS typically follows a diurnal rhythm, emerging in the late afternoon or evening, but the timing is
not always consistent. Symptoms may be triggered by subtle environmental changes such as reduced
light, lower caregiver availability, increased background noise, or deviations from routine [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. These
temporal and environmental fluctuations add further variability to symptom expression and make
reliable detection particularly challenging.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conceptual framework for emotion-aware monitoring of sundown syndrome</title>
      <sec id="sec-3-1">
        <title>3.1. Framework for Emotion-Aware Monitoring</title>
        <p>
          Figure 1 illustrates the conceptual framework of our emotion-aware monitoring system for sundown
syndrome (SS). At its core is a robust facial expression recognition (FER) module that leverages recent
advances in deep learning [10]. Modern networks trained on both macro- and micro-expressions can
detect subtle muscle movements and rapid emotional shifts in naturalistic settings while maintaining
stable performance under varying lighting, head positions, and background conditions [9, 17, 18]. The
FER module is complemented by passive environmental sensors that monitor ambient light, noise levels,
time of day, and other key external factors to capture the relevant environmental information [
          <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
          ]. To
provide additional context and reliable ground truth, wearable devices can be incorporated to track
physical activity and physiological signals. These multimodal streams are integrated within a central
sensor-fusion and context-modeling layer, which continuously evaluates risk patterns. The goal of the
system is to support, not replace, human judgment. When early signs of SS are detected, the system can
issue real-time caregiver alerts, adjust the home environment through smart home control, for example
by modifying lighting or sound, and securely transmit data to clinicians via cloud services for remote
supervision and integration with telehealth, enabling contactless monitoring in aging and dementia
care [19].
        </p>
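        <p>As a rough illustration of how the sensor-fusion and context-modeling layer might combine modality scores into a single risk estimate, the following Python sketch fuses a hypothetical FER negative-affect probability with ambient light, noise, and time-of-day context. All names, weights, and thresholds here (SensorSnapshot, ss_risk, the 0.5 alert cut-off) are illustrative assumptions, not validated parameters of the framework.</p>
        <preformat>
```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    """One time-stamped reading from the fused sensor streams (all values illustrative)."""
    neg_affect: float   # FER negative-affect probability, 0..1
    lux: float          # ambient light level in lux
    noise_db: float     # ambient noise level in dB
    hour: int           # local hour of day, 0..23

def ss_risk(s: SensorSnapshot) -> float:
    """Combine modality scores into a 0..1 risk estimate; weights are assumptions."""
    # Late afternoon and evening form the classic sundowning window.
    time_factor = 1.0 if s.hour in range(16, 22) else 0.3
    # Dim light is one of the reported environmental triggers.
    light_factor = 0.2 if s.lux >= 100 else 1.0
    noise_factor = min(s.noise_db / 80.0, 1.0)
    raw = 0.6 * s.neg_affect + 0.2 * light_factor + 0.2 * noise_factor
    return round(raw * time_factor, 3)

def should_alert(risk: float, threshold: float = 0.5) -> bool:
    """Alerts are meant to support, not replace, caregiver judgment."""
    return risk >= threshold
```
        </preformat>
        <p>In a deployed system such weights would be learned per resident rather than fixed; the sketch only shows the shape of the fusion step.</p>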
        <p>It should be noted that the system must be designed for older users and real-world deployment [19, 20].
In this sense, sensors should work unobtrusively with minimal physical or cognitive burden. Algorithms
should take into account age-related features such as reduced muscle tone, slower movements, and
cognitive fatigue by applying noise-tolerant and time-smoothed models. Visual feedback, such as
dashboard summaries or simple app-based alerts, should be easy to understand and fit naturally into
caregivers’ routines. Equally important, the system must include ethical safeguards from the start,
which are discussed in more detail in Section 4.</p>
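        <p>One way to realize the noise-tolerant, time-smoothed behaviour described above is an exponentially smoothed score with hysteresis, so a single mis-detected frame neither raises nor clears a distress flag. This is a minimal sketch under assumed parameters; the class name, smoothing factor, and thresholds are ours, not part of the framework.</p>
        <preformat>
```python
class SmoothedAffect:
    """Exponentially smoothed frame-level FER score with hysteresis.
    Smoothing factor and thresholds are illustrative assumptions."""

    def __init__(self, alpha: float = 0.2, on: float = 0.6, off: float = 0.4):
        self.alpha = alpha           # smoothing factor for the moving average
        self.on, self.off = on, off  # hysteresis band around the decision point
        self.ema = 0.0
        self.flag = False

    def update(self, frame_score: float) -> bool:
        """Feed one frame's negative-affect score (0..1); return the smoothed flag."""
        self.ema = self.alpha * frame_score + (1 - self.alpha) * self.ema
        if not self.flag and self.ema >= self.on:
            self.flag = True
        elif self.flag and self.off >= self.ema:
            self.flag = False
        return self.flag
```
        </preformat>
        <p>The hysteresis band (on at 0.6, off at 0.4) prevents flickering alerts when the smoothed score hovers near a single threshold.</p>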
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Workflow integration in real-world care settings</title>
        <p>A key strength of the proposed system is its ability to adapt to the diverse presentations of SS, which vary
across individuals in onset time, environmental triggers, and symptoms. Instead of relying on a fixed
decision tree, the system should learn personalized profiles over time, providing a tailored alternative
to traditional trial-and-error methods [21]. To illustrate the system’s capacity for personalization, we
describe three representative real-world scenarios.</p>
        <p>In one case, a resident with mild Alzheimer’s disease shows signs of increased social withdrawal and
depressive affect during the late afternoon. While such shifts may not develop into overt behavioral
disturbances, an emotion-aware system can detect early affective signals such as facial tension, reduced
expressiveness, prolonged gaze avoidance, or subtle signs of distress like brow furrowing and reduced
blink rate. Previous studies link these features to depressive and anxious states in older adults,
particularly those with cognitive impairment [22, 23]. Detecting these cues early makes it possible to initiate
low-intensity interventions such as music therapy, environmental light adjustment, or structured social
interaction before symptoms intensify.</p>
        <p>
          In another scenario, a resident may display agitated behavior such as physical or verbal aggression,
excessive pacing, or restlessness in a communal dining area during sundown hours. These high-stimulus
settings challenge FER because facial cues can be obscured by motion artifacts or occlusion. Here,
multimodal sensor fusion becomes essential, combining facial affect signals with environmental audio
(e.g., shouting or loud speech), locomotor patterns, and physiological indicators such as heart-rate
variability (HRV) to improve accuracy [
          <xref ref-type="bibr" rid="ref3">24, 3</xref>
          ]. Physical movement has been shown to correlate with
agitation across multiple spatiotemporal scales [25], while HRV has been proposed as a biomarker
of agitation risk in Alzheimer’s disease [26]. Longitudinal studies further suggest that day-to-day
changes in environmental factors or sleep may help predict agitation episodes in advance [
          <xref ref-type="bibr" rid="ref3">27, 3</xref>
          ]. In
such circumstances, detection accuracy is especially critical because numerous external distractions
can interfere with assessment and the complex environment may lead to more serious outcomes, such
as conflicts between residents. Timely and reliable identification of even minor emotional changes
is therefore essential to allow caregivers to intervene early, for example by separating individuals or
moving the agitated resident to a calmer and safer setting.
        </p>
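        <p>The HRV indicator mentioned in this scenario can be made concrete: a standard time-domain HRV measure is the root mean square of successive differences (RMSSD) between RR intervals. The sketch below computes it from a list of RR intervals in milliseconds; it illustrates the kind of physiological feature the fusion layer could consume and is not a clinical implementation.</p>
        <preformat>
```python
import math

def rmssd(rr_ms: list) -> float:
    """Root mean square of successive differences between RR intervals (ms).
    In the scenario above this value would feed the fusion layer as one
    physiological signal alongside facial and environmental cues."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    if not diffs:
        raise ValueError("need at least two RR intervals")
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```
        </preformat>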
        <p>In a third scenario, residents in advanced dementia may lose most or all verbal abilities, making it
difficult for caregivers to recognize emotional distress before it becomes overt. In such cases, facial
expression serves as a critical surrogate marker for internal states, especially when self-report is no
longer possible [28]. The proposed system is designed to detect fleeting macro- and micro-expressions
that may signal silent distress. Evidence shows that even individuals with severe cognitive impairment
exhibit facial responses to pain or fear that human observers often overlook but that automated FER
systems can capture reliably [28, 29]. These observations support the use of FER, particularly when
combined with physiological sensing, as a powerful tool to access the emotional experiences of
nonverbal dementia patients and to anticipate neuropsychiatric symptoms before they worsen.</p>
        <p>By continuously logging both affective signals and caregiver responses, the system is designed to
create an adaptive feedback loop that identifies meaningful patterns of deterioration and separates them
from temporary emotional fluctuations. It should prioritize individualized behavioral profiles, which
is especially important in SS where symptoms vary widely across people and environments. Rather
than relying on static thresholds or predefined triggers, the system should evolve with each resident,
refining its predictive model through ongoing real-world interaction. This direction also aligns with the
broader movement in aging care toward proactive, precision health strategies that consider behavioral
diversity and respect the agency of individuals living with cognitive decline [20, 21].</p>
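        <p>As a minimal sketch of the individualized-profile idea, the following Python class keeps a rolling per-resident baseline of an affect score and flags only deviations relative to that resident's own history, rather than a static threshold. The window length, warm-up size, and z cut-off are illustrative assumptions.</p>
        <preformat>
```python
from collections import deque
import statistics

class PersonalBaseline:
    """Rolling per-resident baseline: deviations are judged against the
    individual's own history instead of a fixed population threshold."""

    def __init__(self, window: int = 50, z_cut: float = 2.0, warmup: int = 10):
        self.history = deque(maxlen=window)  # bounded memory of recent scores
        self.z_cut = z_cut
        self.warmup = warmup                 # observations needed before judging

    def observe(self, score: float) -> bool:
        """Record one score; return True if it deviates markedly from baseline."""
        flagged = False
        if len(self.history) >= self.warmup:
            mu = statistics.fmean(self.history)
            sd = statistics.pstdev(self.history) or 1e-6  # avoid divide-by-zero
            flagged = (score - mu) / sd > self.z_cut
        self.history.append(score)
        return flagged
```
        </preformat>
        <p>Because the deque is bounded, the baseline slowly tracks gradual change in a resident's typical affect while still flagging abrupt deviations, which matches the adaptive feedback loop described above.</p>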
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Ethical, practical, and deployment considerations</title>
      <p>Despite these opportunities, substantial ethical and practical challenges remain. Emotion-aware monitoring
systems must meet general standards of affective-computing and AI ethics [30, 31, 32, 33] while addressing
the specific needs of older adults, who often have lower digital literacy and age-related cognitive changes
and therefore require greater assistance with technology use and data security [19]. Here we highlight three
issues that deserve special consideration in the design of an SS emotion-aware system.</p>
      <sec id="sec-4-1">
        <title>4.1. Privacy and data protection</title>
        <p>Continuous video or sensor monitoring raises significant privacy risks, particularly in private living
spaces and among people with dementia who may not fully understand the technology. Systems should
adopt Privacy by Design principles [34] and comply with regulations such as the General Data Protection
Regulation in Europe and the Health Insurance Portability and Accountability Act in the United States.
Technical safeguards include edge computing for local data processing [35], federated learning to avoid
central data aggregation [36], and face de-identification methods that preserve emotional content while
removing identity [37], all of which help protect user privacy and data security. Special attention is
needed for individuals who cannot provide fully informed consent, especially regarding the balance
between safety, privacy, and personal autonomy [38].</p>
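        <p>The edge-computing and de-identification safeguards above can be made concrete with a simple rule: raw video never leaves the device, and transmitted records carry only derived affect scores under a salted pseudonym. The Python stdlib sketch below illustrates the idea; the record format, field names, and salt handling are simplified assumptions, not a complete security design.</p>
        <preformat>
```python
import hashlib
import json

def pseudonym(resident_id: str, salt: bytes) -> str:
    """Salted one-way hash so transmitted records carry no direct identifier."""
    return hashlib.sha256(salt + resident_id.encode("utf-8")).hexdigest()[:16]

def edge_record(resident_id: str, salt: bytes, affect_scores: dict) -> str:
    """Build the only payload that leaves the device: derived scores, no frames.
    Raw video is processed and discarded locally, in the spirit of edge computing."""
    payload = {
        "pid": pseudonym(resident_id, salt),
        "scores": {name: round(value, 2) for name, value in affect_scores.items()},
    }
    return json.dumps(payload, sort_keys=True)
```
        </preformat>
        <p>A production system would additionally need key management for the salt and protections against re-identification from the score stream itself; this sketch covers only the data-minimization step.</p>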
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Reliability and algorithmic fairness</title>
        <p>An SS monitoring system must provide reliable and unbiased detection across diverse older populations.
Studies have shown that FER algorithms trained mainly on younger adults often underperform and
produce inaccurate predictions when applied to older faces, particularly in individuals with neurological
conditions [39, 40, 41]. Age-related factors such as subtler facial expressions and facial changes caused
by wrinkles and reduced muscle elasticity differ from those of younger faces and can lower the accuracy
of automated FER systems [39]. Therefore, fairness-aware learning methods and the inclusion of
age- and ethnically diverse clinical datasets are essential. Long-term reliability also requires robust hardware,
regular calibration, and transparent reporting of system limitations.</p>
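        <p>A first step toward the fairness auditing implied above is simply to report accuracy per age group and the worst-case gap, rather than a single aggregate figure. This sketch assumes a labeled evaluation set with an age-group attribute; the grouping and function names are illustrative.</p>
        <preformat>
```python
from collections import defaultdict

def group_accuracy(records: list) -> dict:
    """records: (age_group, prediction_correct) pairs from a labeled evaluation set."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

def accuracy_gap(records: list) -> float:
    """Gap between the best- and worst-served groups; a large gap is the kind
    of age bias reported for FER models evaluated on older faces."""
    acc = group_accuracy(records)
    return max(acc.values()) - min(acc.values())
```
        </preformat>
        <p>Reporting the per-group table and the gap alongside aggregate accuracy makes the age bias visible and gives a target for fairness-aware retraining.</p>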
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Acceptance and integration in care settings</title>
        <p>Even a privacy-conscious and technically reliable system will fail if it is not accepted by older adults and
their caregivers. Therefore, the design should carefully consider the needs of older users by keeping
technical demands low, ensuring ease of use, fitting smoothly into daily routines, and promoting
enjoyment and a sense of safety. It should also support new forms of interaction and provide adequate
assistance for both patients and caregivers during real-world use [42]. In addition, older adults often
need clear communication, tailored guidance, and sufficient time to build trust. Care staff may worry
about being evaluated, and families may resist what feels like intrusive surveillance. Addressing these
concerns is essential, and accessible educational materials should be developed to enhance digital
literacy among older adults and their caregivers [43]. Achieving broad acceptance, however, will require
coordinated efforts from healthcare providers, community organizations, and the wider society.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>SS remains a major source of distress for older adults, especially those with dementia and for their
caregivers, yet current detection methods are reactive, subjective, and often too late for timely intervention.
Recent advances in FER, multimodal sensing, and machine learning offer a path to shift from
retrospective observation to proactive, continuous, and personalized monitoring. In this paper, we outlined a
framework that integrates FER with contextual data streams such as light, noise, and physiological
signals to detect subtle emotional changes that may precede SS episodes, and we discussed the ethical
and technical challenges that must be addressed for real-world deployment. This integrated approach
is essential not only for SS but also for managing broader behavioral disturbances in older adults, and
earlier recognition and response can help preserve the dignity, safety, and emotional well-being of both
patients and caregivers. Successful implementation, however, will require sustained cooperation among
researchers, clinicians, caregivers, ethicists, and policy makers so that these advances move beyond
proof-of-concept and become part of everyday dementia care.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work was supported by the start-up research fund of Anyang Normal University (2025BSKYQD054),
the Research Council of Finland (formerly the Academy of Finland) through the Academy Research
Fellow project (355369), the Finnish Cultural Foundation (00231373), and the Finnish Cultural Foundation
for Säätiöiden post doc -pooli grant (00240135).</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used GPT-5 and Copilot to check grammar and spelling.
In addition, GPT-5 was used to generate an illustrative patient image for Figure 1. All content was
subsequently reviewed and edited by the authors, who take full responsibility for the final text and
figures.</p>
    </sec>
    <sec id="sec-8">
      <title>References [5]–[43]</title>
      <p>[5] G. Gnanasekaran, “Sundowning” as a biological phenomenon: current understandings and future directions: an update, Aging Clinical and Experimental Research 28 (2016) 383–392.</p>
      <p>[6] J. L. Cummings, M. Mega, K. Gray, S. Rosenberg-Thompson, D. A. Carusi, J. Gornbein, The Neuropsychiatric Inventory: comprehensive assessment of psychopathology in dementia, Neurology 44 (1994) 2308.</p>
      <p>[7] R. J. Lee, S. Sivakumar, K. H. Lim, Review on remote heart rate measurements using photoplethysmography, Multimedia Tools and Applications 83 (2024) 44699–44728.</p>
      <p>[8] U. A. Khan, Q. Xu, Y. Liu, A. Lagstedt, A. Alamäki, J. Kauttonen, Exploring contactless techniques in multimodal emotion recognition: insights into diverse applications, challenges, solutions, and prospects, Multimedia Systems 30 (2024) 115.</p>
      <p>[9] S. Li, W. Deng, Deep facial expression recognition: A survey, IEEE Transactions on Affective Computing 13 (2020) 1195–1215.</p>
      <p>[10] X. Guo, Y. Zhang, S. Lu, Z. Lu, Facial expression recognition: a review, Multimedia Tools and Applications 83 (2024) 23689–23735.</p>
      <p>[11] Q. Xu, M. Wei, H.-Q. Khor, F. V. Lin, G. Zhao, SunVid: A curated online video database for sundown syndrome research, in: Scandinavian Conference on Image Analysis, Springer, 2025, pp. 18–31.</p>
      <p>[12] American Psychiatric Association, Diagnostic and statistical manual of mental disorders, 5th ed., 2013.</p>
      <p>[13] M. Toccaceli Blasi, M. Valletta, A. Trebbastoni, F. D’Antonio, G. Talarico, A. Campanelli, M. Sepe Monti, E. Salati, M. Gasparini, S. Buscarnera, et al., Sundowning in patients with dementia: Identification, prevalence, and clinical correlates, Journal of Alzheimer’s Disease 94 (2023) 601–610.</p>
      <p>[14] G. Cipriani, C. Lucetti, C. Carlesi, S. Danti, A. Nuti, Sundown syndrome and dementia, European Geriatric Medicine 6 (2015) 375–380.</p>
      <p>[15] L. M. Ghali, R. W. Hopkins, P. Rindlisbacher, Temporal shifts in peak daily activity in Alzheimer’s disease, International Journal of Geriatric Psychiatry 10 (1995) 517–521.</p>
      <p>[16] R. Trumpf, P. Haussermann, W. Zijlstra, T. Fleiner, Circadian aspects of mobility-related behavior in patients with dementia: An exploratory analysis in acute geriatric psychiatry, International Journal of Geriatric Psychiatry 38 (2023) e5957.</p>
      <p>[17] Y. Li, J. Wei, Y. Liu, J. Kauttonen, G. Zhao, Deep learning for micro-expression recognition: A survey, IEEE Transactions on Affective Computing 13 (2022) 2028–2046.</p>
      <p>[18] M. Jampour, M. Javidi, Multiview facial expression recognition, a survey, IEEE Transactions on Affective Computing 13 (2022) 2086–2105.</p>
      <p>[19] C. Chen, S. Ding, J. Wang, Digital health for aging populations, Nature Medicine 29 (2023) 1623–1630.</p>
      <p>[20] S. Zhou, Y. Liu, A. Turnbull, C. Tapparello, E. Adeli, F. V. Lin, Personalized cognitive enhancement for older adults: An aging-friendly closed-loop human-machine interface framework, Ageing Research Reviews (2025) 102877.</p>
      <p>[21] D. A. Ziegler, J. A. Anguera, C. L. Gallen, W.-Y. Hsu, P. E. Wais, A. Gazzaley, Leveraging technology to personalize cognitive enhancement methods in aging, Nature Aging 2 (2022) 475–483.</p>
      <p>[22] R. Zeghari, A. König, R. Guerchouche, G. Sharma, J. Joshi, R. Fabre, P. Robert, V. Manera, et al., Correlations between facial expressivity and apathy in elderly people with neurocognitive disorders: exploratory study, JMIR Formative Research 5 (2021) e24727.</p>
      <p>[23] Y. Zhou, W. Han, X. Yao, J. Xue, Z. Li, Y. Li, Developing a machine learning model for detecting depression, anxiety, and apathy in older adults with mild cognitive impairment using speech and facial expressions: A cross-sectional observational study, International Journal of Nursing Studies 146 (2023) 104562.</p>
      <p>[24] H. Davidoff, A. Van Kraaij, E. Lutin, L. Van den Bulcke, M. Vandenbulcke, N. Van Helleputte, M. De Vos, C. Van Hoof, M. Van Den Bossche, Environmental triggers of specific subtypes of agitation in people with dementia: Observational study, JMIR Formative Research 9 (2025) e60274.</p>
      <p>[25] J. K. Deters, S. Janus, J. A. L. Silva, H. J. Wörtche, S. U. Zuidema, Sensor-based agitation prediction in institutionalized people with dementia: a systematic review, Pervasive and Mobile Computing 98 (2024) 101876.</p>
      <p>[26] K. Y. Liu, E. A. Whitsel, G. Heiss, P. Palta, S. Reeves, F. V. Lin, M. Mather, J. P. Roiser, R. Howard, Heart rate variability and risk of agitation in Alzheimer’s disease: The Atherosclerosis Risk in Communities study, Brain Communications 5 (2023) fcad269.</p>
      <p>[27] W.-T. M. Au-Yeung, L. Miller, Z. Beattie, H. H. Dodge, C. Reynolds, I. Vahia, J. Kaye, Sensing a problem: Proof of concept for characterizing and predicting agitation, Alzheimer’s &amp; Dementia: Translational Research &amp; Clinical Interventions 6 (2020) e12079.</p>
      <p>[28] P. A. Beach, J. T. Huck, M. M. Miranda, K. T. Foley, A. C. Bozoki, Effects of Alzheimer disease on the facial expression of pain, The Clinical Journal of Pain 32 (2016) 478–487.</p>
      <p>[29] L.-Y. Chen, T.-H. Tsai, A. Ho, C.-H. Li, L.-J. Ke, L.-N. Peng, M.-H. Lin, F.-Y. Hsiao, L.-K. Chen, Predicting neuropsychiatric symptoms of persons with dementia in a day care center using a facial expression recognition system, Aging (Albany NY) 14 (2022) 1280.</p>
      <p>[30] R. Cowie, Ethical issues in affective computing, The Oxford Handbook of Affective Computing (2015) 334.</p>
      <p>[31] L. Stark, J. Hoey, The ethics of emotion in artificial intelligence systems, in: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 782–793.</p>
      <p>[32] A. Katirai, Ethical considerations in emotion recognition technologies: a review of the literature, AI and Ethics 4 (2024) 927–948.</p>
      <p>[33] D. Barker, M. K. R. Tippireddy, A. Farhan, B. Ahmed, Ethical considerations in emotion recognition research, Psychology International 7 (2025) 43.</p>
      <p>[34] A. Cavoukian, Privacy by design: The 7 foundational principles, Information and Privacy Commissioner of Ontario, Canada, 2009.</p>
      <p>[35] W. Wan, R. Kubendran, C. Schaefer, S. B. Eryilmaz, W. Zhang, D. Wu, S. Deiss, P. Raina, H. Qian, B. Gao, et al., A compute-in-memory chip based on resistive random-access memory, Nature 608 (2022) 504–512.</p>
      <p>[36] N. Rieke, J. Hancox, W. Li, F. Milletari, H. R. Roth, S. Albarqouni, S. Bakas, M. N. Galtier, B. A. Landman, K. Maier-Hein, et al., The future of digital health with federated learning, NPJ Digital Medicine 3 (2020) 119.</p>
      <p>[37] W. Khan, L. Topham, U. Khayam, S. Ortega-Martorell, P. Heather, D. Ansell, D. Al-Jumeily, A. Hussain, Person de-identification: A comprehensive review of methods, datasets, applications, and ethical aspects along-with new dimensions, IEEE Transactions on Biometrics, Behavior, and Identity Science (2024).</p>
      <p>[38] F. Meiland, A. Innes, G. Mountain, L. Robinson, H. van der Roest, J. A. García-Casal, D. Gove, J. R. Thyrian, S. Evans, R.-M. Dröes, et al., Technologies to support community-dwelling persons with dementia: a position paper on issues regarding development, usability, effectiveness and cost-effectiveness, deployment, and ethics, JMIR Rehabilitation and Assistive Technologies 4 (2017) e6376.</p>
      <p>[39] G. Guo, R. Guo, X. Li, Facial expression recognition influenced by human aging, IEEE Transactions on Affective Computing 4 (2013) 291–298.</p>
      <p>[40] E. Kim, D. Bryant, D. Srikanth, A. Howard, Age bias in emotion detection: An analysis of facial emotion recognition performance on young, middle-aged, and older adults, in: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021, pp. 638–644.</p>
      <p>[41] B. Taati, S. Zhao, A. B. Ashraf, A. Asgarian, M. E. Browne, K. M. Prkachin, A. Mihailidis, T. Hadjistavropoulos, Algorithmic bias in clinical populations—evaluating and improving facial analysis technology in older adults with dementia, IEEE Access 7 (2019) 25527–25534.</p>
      <p>[42] B. Thordardottir, A. Malmgren Fänge, C. Lethin, D. Rodriguez Gatta, C. Chiatti, Acceptance and use of innovative assistive technologies among people with cognitive impairment and their caregivers: a systematic review, BioMed Research International 2019 (2019) 9196729.</p>
      <p>[43] A. B. Friedman, C. Pathmanabhan, A. Glicksman, G. Demiris, A. R. Cappola, M. S. McCoy, Addressing online health privacy risks for older adults: a perspective on ethical considerations and recommendations, Gerontology and Geriatric Medicine 8 (2022) 23337214221095705.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Khachiyants</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Trinkle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Son</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Y.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Sundown syndrome in persons with dementia: an update</article-title>
          ,
          <source>Psychiatry Investigation</source>
          <volume>8</volume>
          (
          <year>2011</year>
          )
          <fpage>275</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Boronat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Ferreira-Maia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Sundown syndrome in older persons: a scoping review</article-title>
          ,
          <source>Journal of the American Medical Directors Association</source>
          <volume>20</volume>
          (
          <year>2019</year>
          )
          <fpage>664</fpage>
          -
          <lpage>671</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. V.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Bridging gaps in sundown syndrome research: a scoping review and roadmap for future multimodal approaches</article-title>
          ,
          <source>Archives of Clinical Neuropsychology</source>
          (
          <year>2025</year>
          )
          <fpage>acaf062</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Canevelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Valletta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Trebbastoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sarli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>D'Antonio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tariciotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>De Lena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bruno</surname>
          </string-name>
          ,
          <article-title>Sundowning in dementia: clinical relevance, pathophysiological determinants, and therapeutic approaches</article-title>
          ,
          <source>Frontiers in Medicine</source>
          <volume>3</volume>
          (
          <year>2016</year>
          )
          <fpage>73</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>