<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.3389/fpsyg.2021.604977</article-id>
      <title-group>
        <article-title>Adaptive XAI in High Stakes Environments: Modeling Swift Trust with Multimodal Feedback in Human AI Teams</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nishani Fernando</string-name>
          <email>nlfernando11@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bahareh Nakisa</string-name>
          <email>bahar.nakisa@deakin.edu.au</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Adnan Ahmad</string-name>
          <email>adnan.a@deakin.edu.au</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohammad Naim Rastgoo</string-name>
          <email>naim.rastgoo@monash.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Deakin University</institution>
          ,
          <addr-line>Geelong, Victoria</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Monash University</institution>
          ,
          <addr-line>Melbourne, Victoria</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>57</volume>
      <issue>2015</issue>
      <fpage>64</fpage>
      <lpage>70</lpage>
      <abstract>
        <p>Effective human-AI teaming heavily depends on swift trust, particularly in high-stakes scenarios such as emergency response, where timely and accurate decision-making is critical. In these time-sensitive and cognitively demanding settings, adaptive explainability is essential for fostering trust between human operators and AI systems. However, existing explainable AI (XAI) approaches typically offer uniform explanations and rely heavily on explicit feedback mechanisms, which are often impractical in such high-pressure scenarios. To address this gap, we propose a conceptual framework for adaptive XAI that operates non-intrusively by responding to users' real-time cognitive and emotional states through implicit feedback, thereby enhancing swift trust in high-stakes environments. The proposed adaptive explainability trust framework (AXTF) leverages physiological and behavioral signals, such as EEG, ECG, and eye tracking, to infer user states and support explanation adaptation. At its core is a multi-objective, personalized trust estimation model that maps workload, stress, and emotion to dynamic trust estimates. These estimates guide the modulation of explanation features, enabling responsive and personalized support that promotes swift trust in human-AI collaboration. This conceptual framework establishes a foundation for developing adaptive, non-intrusive XAI systems tailored to the rigorous demands of high-pressure, time-sensitive environments.</p>
      </abstract>
      <kwd-group>
        <kwd>Adaptive Explainability</kwd>
        <kwd>Human-Machine Teams</kwd>
        <kwd>Swift Trust</kwd>
        <kwd>Implicit Feedback</kwd>
        <kwd>Affective Interaction</kwd>
        <kwd>Dynamic Environments</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In high-stakes domains such as emergency response [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and military operations [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], human-AI teams
are often formed on the fly and operate under extreme time pressure, high cognitive workload, and
rapidly evolving situational demands. These environments are characterized by rapid decision-making,
elevated emotional intensity, and limited opportunities for explicit communication or coordination.
Failures in such contexts can lead to significant safety, ethical, or operational consequences [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. As a
result, effective human-AI teaming in such scenarios hinges on the development of swift trust and the
ability to support human operators through adaptive, context-sensitive system behavior. Swift trust,
originally introduced in the context of temporary human teams [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], describes a form of trust that arises
rapidly out of necessity, without the benefit of prolonged interaction or prior history. In high-stakes
environments, humans are often compelled to place immediate trust in AI systems simply because there
is no time to build it gradually. However, this initial trust is fragile and often vulnerable to performance
errors, lack of transparency and high cognitive load [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Sustaining trust in such conditions demands
that AI systems be capable of communicating effectively and adapting responsively to the human’s
evolving cognitive and emotional state.
      </p>
      <p>
        Explainability has emerged as a central mechanism for cultivating trust in AI, enabling humans to
understand and anticipate AI behavior [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, existing explainable AI (XAI) approaches [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
are often static and uniform, providing generic explanations that overlook situational awareness and
fail to adapt to the dynamic nature of the environment, which directly influences the user’s real-time
cognitive and emotional state. Furthermore, these approaches typically rely on explicit human feedback,
such as verbal queries or stated preferences, which are often impractical in high-pressure, cognitively
demanding environments where users are overloaded and time is constrained. Therefore, more advanced
explainable systems are essential for high-stakes environments. Such systems must be capable of rapid
adaptation not only to the human operator’s state but also to contextual variables like task urgency,
system reliability, and environmental uncertainty.
      </p>
      <p>
        To overcome these gaps, incorporating implicit human feedback is essential for advancing
explainable AI (XAI) systems, especially in high-stakes environments. Non-invasive technologies, such as
wearable sensors [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], provide a promising means of capturing physiological and behavioral signals
that reflect a user’s internal state. These signals may include Electroencephalography (EEG) [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ],
Electrocardiography (ECG) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and eye tracking [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], serving as real-time proxies for trust,
cognitive workload, and emotional state. However, effective explanation adaptation must also account
for AI system performance and situational context [
        <xref ref-type="bibr" rid="ref13">13, 14</xref>
        ], which collectively shape human-AI trust
dynamics [15]. A comprehensive, adaptive XAI framework must therefore integrate these diverse signals
to provide personalized, context-aware support.
      </p>
      <p>This work presents the adaptive explainability trust framework (AXTF), a conceptual framework designed
to advance human-AI teaming in high-stakes, time-sensitive domains by enabling adaptive, non-intrusive
explainability driven by a multi-objective trust estimation model. It outlines a foundational approach
that combines implicit human feedback, AI performance metrics, and situational awareness to infer the
evolving trust state of the user. At the core of this framework is a personalized trust inference model that
integrates the user’s cognitive and emotional state along with situational awareness to infer dynamic
trust levels. These estimates guide the adaptive modulation of explanation features such as timing,
granularity, content, and presentation mode, enabling dynamic, context-aware explanation strategies
that foster swift trust. Importantly, this approach reflects a neurosymbolic paradigm by combining
low-level physiological sensing with interpretable symbolic reasoning via fuzzy rules. This conceptual
approach lays the foundation for future research and the practical development of trust-sensitive,
non-intrusive XAI systems tailored to the demands of time-critical, high-pressure domains.</p>
      <p>Unlike task-specific or opaque AI models [16], our framework is generalizable across high-stakes
domains, interpretable by design, and adaptable in real time. It supports collaboration by recognizing
the cognitive and affective constraints of human operators and responding accordingly, closing the loop
between trust inference, explanation adaptation, and mission performance. In the following sections, we
review prior work, detail our conceptual model, and outline its application to real-world high-pressure
settings such as emergency response.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Work</title>
      <p>This section introduces key foundations for our proposed framework by synthesizing prior work across
four core areas. First, we examine the role of implicit feedback in high-stakes human-AI collaboration,
emphasizing the need for implicit, real-time cues such as physiological and behavioral signals to
support decision-making under stress and cognitive load. Next, we explore the construct of swift trust,
outlining its determinants (reliability, predictability, competence, transparency, and adaptability) and
their sensitivity to fluctuating user states. We then review the limitations of adaptive explainability,
highlighting gaps in existing XAI systems that fail to adjust explanations to changing user conditions.
Finally, we discuss user-centric and affect-aware XAI, emphasizing emerging evidence for modeling
trust through real-time physiological inference, and outlining the need for integrated models that
dynamically adapt explanation features to maintain trust and cognitive efficiency in time-sensitive
contexts. This background frames the motivation and design of our proposed conceptual model.</p>
      <sec id="sec-2-1">
        <title>2.1. Human AI Collaboration and the Role of Implicit Feedback</title>
        <p>
          In high-stakes domains, human-AI collaboration is often task-critical, requiring both agents to operate
in close coordination under intense cognitive and temporal pressure. The effectiveness of this teaming
rests heavily on the human operator’s ability to trust the AI system: to understand its role, predict its
behavior, and rely on its outputs in moments of uncertainty. Studies across domains such as emergency
response and autonomous operations show that trust in AI systems enhances decision-making efficiency,
reduces cognitive burden, and improves overall team performance [
          <xref ref-type="bibr" rid="ref5">5, 17, 18</xref>
          ]. However, traditional
models of trust formation often assume explicit communication between humans and AI agents, such
as requests for clarification, preference adjustment, or corrective feedback. In practice, such explicit
feedback is limited or infeasible in high-stakes situations [19]. Human operators are typically focused
on the task at hand, operating under cognitive overload, and have minimal capacity to verbally assess
or tune their interaction with the AI system. This constraint necessitates an alternative trust support
mechanism, one that can function implicitly, adaptively, and in real time.
        </p>
        <p>
          Implicit feedback, measured through physiological and behavioral signals, offers a promising
foundation for adaptive support in human-machine teams (HMTs). Unlike explicit feedback, implicit indicators
are passively observable and continuous, providing a non-intrusive method for assessing the user’s
cognitive and emotional state. A growing body of research supports the viability of using such signals
for trust estimation. For instance, EEG has been shown to correlate with cognitive load and attention
[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Similarly, ECG and galvanic skin response (GSR) are reliable indicators of physiological arousal and
stress, which have been linked to trust erosion under pressure [
          <xref ref-type="bibr" rid="ref8">20, 8, 21, 22, 23, 24</xref>
          ]. Moreover, gaze
patterns, facial expressions, and vocal features reflect emotional valence and engagement, serving as
real-time proxies for user affect and trust [25]. These findings suggest that trust-relevant mental states
can be inferred in real time from sensor data, enabling AI systems to detect when a user is confused,
overloaded, disengaged, or stressed without requiring explicit articulation.
        </p>
        <p>By grounding our model in these implicit signals, we enable AI systems to dynamically assess the
operator’s cognitive and emotional state and adapt their behavior accordingly. In the context of XAI,
this means tailoring explanations to be more concise, timely, or expressive depending on the inferred
user state. For example, a spike in physiological arousal following an AI action may indicate confusion
or concern, prompting the system to proactively issue a clarifying explanation. Similarly, indicators of
high trust and low load might invite more detailed, exploratory explanations to support learning or
calibration. In this way, implicit feedback enables real-time, user-sensitive adaptation, forming a critical
bridge between human trust dynamics and machine explainability. It allows the AI system to act as a
responsive teammate, not just explaining what it does but choosing when and how to explain based on
the operator’s needs. This perspective forms the foundation for the next section, where we examine the
elements of swift trust and how they intersect with user state and explainability in high-stakes teams.</p>
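        <p>As a minimal illustration of the adaptation loop just described, the sketch below maps two implicit-feedback patterns to explanation styles. All signal names, thresholds, and style labels are hypothetical assumptions for illustration, not values drawn from the studies cited above.</p>

```python
# Illustrative sketch only: thresholds, signal names, and explanation styles
# are hypothetical assumptions, not parameters from the cited literature.

def detect_arousal_spike(gsr_baseline, gsr_current, ratio=1.5):
    """Flag a spike when skin conductance rises well above the user's baseline."""
    return gsr_current > gsr_baseline * ratio

def choose_explanation(arousal_spike, trust_level, workload):
    """Select an explanation style from the inferred user state."""
    if arousal_spike:
        # Possible confusion after an AI action: issue a short clarification.
        return "clarifying"
    if trust_level == "high" and workload == "low":
        # Spare capacity: offer richer detail to support learning and calibration.
        return "detailed"
    # Default under pressure: keep explanations brief.
    return "concise"

print(choose_explanation(detect_arousal_spike(2.0, 3.5), "high", "low"))
```

        <p>In a deployed system the spike detector would operate on streaming, per-user calibrated signals rather than two scalar readings; the point here is only the mapping from implicit state to explanation style.</p>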
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Swift Trust and Its Determinants</title>
        <p>
          In high-stakes human-AI collaboration, trust must be formed quickly, often in the absence of prolonged
interaction or past performance history. This phenomenon, referred to as swift trust [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], is essential
for enabling rapid coordination in dynamic environments such as disaster response or critical medical
care. Unlike traditional trust, which emerges gradually through relationship building, swift trust
is assumed provisionally based on contextual cues like system role, professionalism, and perceived
competence. However, swift trust is inherently fragile. It can erode rapidly when system behavior is
unclear, inconsistent, or perceived as unreliable. Maintaining and calibrating this trust is a non-trivial
challenge, especially under conditions of high workload and emotional strain, where human perception
of system behavior becomes volatile. A breakdown in trust can lead to over-reliance (complacency) or
under-reliance (disuse), both of which are detrimental to team performance.
        </p>
        <p>
          A large body of empirical work (e.g., [17, 26, 18]) confirms that trust in AI systems is closely linked to
perceived performance, system reliability and predictability, as well as the user’s workload, stress, and
emotional state. Hancock et al. [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] found that performance was the strongest predictor of trust, with
workload and environmental risk also contributing significantly. More recent work shows that stress and
cognitive overload reduce perceived trust, while positive emotional states such as engagement promote
confidence and trust [
          <xref ref-type="bibr" rid="ref3">3, 27, 18</xref>
          ]. These variables are not only correlational but causal, influencing how
operators interpret and respond to AI recommendations in real time. For example, when workload is
high and system behavior is unclear, trust may drop even if the AI is performing correctly. Conversely,
under calm conditions with transparent AI behavior, trust can remain stable even after minor failures.
Trust, therefore, operates as a feedback variable, modulated by both system performance and the
human’s cognitive and emotional state. Hoff and Bashir’s [28] three-level trust model highlights that
initial trust or disposition to trust is shaped by individual traits, prior experiences, and cultural factors,
serving as a baseline for interaction with automation systems. While these dispositional influences
are important, our work focuses on the dynamic adaptation of trust during interaction, particularly
how AI systems can respond to evolving cognitive and emotional states to support trust formation and
calibration in high-stakes environments.
        </p>
        <sec id="sec-2-2-1">
          <title>Key Elements of Swift Trust</title>
          <p>To model and support swift trust effectively, it is useful to decompose it into key elements, as outlined
in Table 1 and commonly cited in the literature [28, 29, 19]. Among these, adaptability plays a critical
role in high-stakes settings where user state and task demand shift rapidly. It reflects the AI system’s
responsiveness to physiological, cognitive, and contextual signals, allowing it to adjust its explanations
(e.g., simplifying content during stress) and behaviors (e.g., increasing feedback frequency during
uncertainty) to maintain trust. As Cho et al. [29] and Seong and Bisantz [30] note, adaptive systems
promote more accurate and timely trust calibration, allowing the user to rapidly align trust with the
actual performance of AI.</p>
          <p>These elements are not static; rather, they are dynamically influenced by both user state and system
behavior. An effective trust-supporting system must monitor changes in stress, workload, and emotional
valence and adjust its communication strategy accordingly. Further, these elements can be modulated
by explainability features, but only if the system is responsive to the underlying user state. For instance,
low predictability can be improved by proactive explanations. Low transparency can be mitigated with
“why” explanations about intent. Low reliability perception under stress may be best addressed with
brief confidence statements (e.g., “High certainty: obstacle detected”) [19]. In the next section, we
explore how explainability mechanisms can be leveraged to modulate these elements and support the
dynamic formation and maintenance of swift trust.</p>
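          <p>The element-to-intervention mappings above can be read as a simple lookup, sketched below. The wording of each intervention paraphrases the examples in this section; the function and key names are illustrative assumptions.</p>

```python
# Lookup from a weakened swift-trust element to a candidate explanation
# intervention, paraphrasing the mappings discussed in the text. Names are
# illustrative assumptions, not part of the framework specification.
INTERVENTIONS = {
    "predictability": "proactive explanation of upcoming system behavior",
    "transparency": "'why' explanation describing the system's intent",
    "reliability": "brief confidence statement, e.g. 'High certainty: obstacle detected'",
}

def select_intervention(weak_element):
    """Return an explanation strategy for the trust element currently degraded."""
    return INTERVENTIONS.get(weak_element, "no adaptation")

print(select_intervention("transparency"))
```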
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Adaptive Explainability for Trust Formation</title>
        <p>Explainability has long been recognized as a mechanism for fostering trust in AI systems. However,
most existing approaches are static and context-agnostic, offering fixed explanations that do not adjust
to the user’s state or task environment. Recent advances have begun to explore adaptive explanation
strategies, for instance using reinforcement learning or partially observable Markov decision processes
(POMDPs) to tailor explanations based on user type or task progression [31]. However, these approaches
often rely on predefined user profiles or require explicit feedback, making them difficult to apply in
high-stakes, real-time environments. Model reconciliation approaches [31] align AI explanations with
human mental models, but typically assume static trust misalignment and do not account for fluctuating
physiological or affective states. Floyd et al. [32] introduced trust-guided transparency, where the AI
modulates its behavior based on estimated user trust, but their system relied on explicit interaction logs
and performance scores, rather than physiological signals.</p>
        <p>
          Recent research has demonstrated that trust-related cognitive and emotional states can be inferred
using physiological and behavioral signals such as heart rate variability (HRV), electrodermal activity
(EDA), facial expressions, and gaze [
          <xref ref-type="bibr" rid="ref3">25, 3</xref>
          ]. Fuzzy and neuro-fuzzy models have been employed to
classify trust states in real time, providing interpretable trust metrics for adaptive systems. However,
these models have rarely been connected to explanation generation, leaving a gap in integrating trust
estimation with communication behavior.
        </p>
        <p>
          Moreover, trust calibration, aligning user trust with system capability, is especially critical in high-risk
or time-sensitive domains. Studies in aviation, medicine, and robotics have shown that miscalibrated
trust leads to automation bias or disuse [
          <xref ref-type="bibr" rid="ref5">5, 33</xref>
          ]. Meanwhile, cognitive load plays a central role in
explainability: too much detail can overwhelm the user, while too little can induce confusion or mistrust.
Paleja et al. [18] found that tailoring the granularity of the explanation benefits novice users under load
but may frustrate experts, reinforcing the need for adaptive strategies that consider real-time workload
and user expertise.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. User-Centric and Affective XAI</title>
        <p>
          Early work on user-aware XAI [16] focused on clustering users by behavior patterns to personalize
explanations. However, most approaches lack real-time responsiveness, and few incorporate affective
signals to dynamically adjust content. Ali et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] emphasize that explanations tailored to user
context and emotional state enhance perceived competence and empathy, but current systems lack
the infrastructure to connect affective inference with explanation logic. As summarized in Table 2,
existing work highlights the individual importance of trust, explainability, and physiological modeling.
However, few frameworks bring these together into a cohesive, real-time trust-adaptive explainability
model that operates effectively in high-stakes human-AI teams.
        </p>
        <sec id="sec-2-4-1">
          <title>2.4.1. Trust Modeling through Explainability Cues</title>
          <p>
            In Human-Machine Teams (HMTs), trust and explainability are dynamically interlinked, unfolding as
a sequence of cause-and-effect interactions. AI behaviors, whether task-related actions, feedback, or
navigation decisions, directly influence the human operator’s physiological state, emotional response,
and cognitive processing. These changes, in turn, shape the operator’s perception of trust, impacting
collaboration quality and task performance [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. These dynamics suggest how real-time adaptation of AI
explanations, grounded in these causal pathways, can mitigate cognitive and emotional strain while reinforcing
trust in high-stakes and time-sensitive environments.
          </p>
          <p>
            AI Behavior and Trust Perception. The observable behavior of AI, such as decision-making, task
execution, or error handling, shapes the operator’s perception of its competence, reliability, and intent.
When the AI behaves transparently and contextually, users are more likely to perceive it as trustworthy.
In contrast, opaque or inconsistent behaviors introduce uncertainty and distrust. Shin [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ] reports that
causability and explainability account for over 58% of variance in trust perceptions, underscoring the
importance of clarity and communicative alignment in fostering trust.
          </p>
          <p>Emotional and Physiological Responses. Trust perceptions are further mediated by affective
responses, which manifest physiologically through changes in heart rate (HR), heart rate variability
(HRV), electrodermal activity, or neural activity (EEG) [35]. For example, unexpected or ambiguous AI
actions may trigger stress or frustration, while cooperative and predictable behavior fosters engagement
and calm. These physiological markers serve as real time, implicit indicators of trust state [19], enabling
continuous user monitoring without explicit intervention.</p>
          <p>Cognitive Load and Information Processing. AI outputs that are complex, ambiguous, or mistimed
can impose a high cognitive burden, impairing the user’s ability to process information and make
timely decisions. This effect is especially pronounced in emergency response scenarios, where attention
is divided, time is limited, and errors are costly. Prior studies [18, 19] show that cognitive overload
negatively affects both trust and performance in collaborative human-AI systems. Adaptive explainability
can address this by modulating explanation complexity, timing, and content, helping reduce overload,
maintain attention, and recalibrate trust in real time.</p>
          <p>However, most XAI systems do not modulate their explanations based on implicit human signals such
as stress, workload, or emotion, nor do they account for rapidly changing contextual cues. This limitation
is particularly consequential in high-stakes environments, where cognitive overload and uncertainty
diminish the operator’s ability to process static or overly generic explanations. There remains a
significant gap in designing closed-loop adaptive XAI systems that leverage real-time physiological
and contextual data to both interpret user states and dynamically tailor explanation strategies. Such
systems would not only reflect the user’s cognitive and emotional state but also influence it over time,
supporting trust formation, cognitive efficiency, and collaborative resilience under pressure.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Adaptive Explainability Trust Framework</title>
      <p>The findings presented in Section 2 lead to a critical insight: explainability is not merely a communication
tool, but a dynamic mechanism for shaping and stabilizing swift trust. To support this, we propose the
Adaptive Explainability Trust Framework (AXTF), a conceptual framework designed to enhance human
decision-making within human-AI teams by improving swift trust and team performance through
dynamic adaptation of AI explanations based on real-time assessments of user states and environmental
factors. Specifically, our framework enables AI explanations to be continuously tailored according to
the user’s cognitive load, stress, and emotional state, as well as the task context, including urgency,
goals, and environmental complexity, particularly in high-stakes domains such as emergency response.</p>
      <p>To support swift trust and effective human-AI teaming in high-stakes environments, our proposed
conceptual model links real-time physiological and behavioral indicators of human state to adaptive
explainability mechanisms. The model integrates three interconnected components: (1) multimodal
feedback sensing and inference, (2) multi-objective trust modeling, and (3) explanation feature adaptation,
with environmental inputs to ensure contextual relevance.</p>
      <p>The proposed framework (Fig. 1) forms a closed-loop pipeline that integrates real-time physiological
and behavioral signals (e.g., EEG, ECG, heart rate variability—HRV) with environmental data such as
task goals, urgency, and state to assess the user’s cognitive load (W), stress (S), and emotional valence
(E) levels [18, 36]. Based on these user states and system performance metrics (e.g., task errors,
success rates), dynamic trust estimation is performed using a multi-objective neurofuzzy rule-based
inference engine. The framework then adapts key explanation features, including timing, duration,
granularity, and mode of delivery, by mapping these trust estimates and contextual knowledge to
reduce cognitive overload, enhance trust, and guide the subsequent human action at the next time step (t+1). For example,
in scenarios characterized by low trust and high cognitive load, short, stepwise, and reactive
explanations are more effective for fostering trust than lengthy, detailed ones. While some
fluctuations in cognitive load (W) and trust (T) are expected, the ultimate goal of these adaptive
explanations is to increase trust (T) and decrease cognitive load (W) over time, thereby promoting
swift trust and improving situational awareness, decision-making, and overall team performance. This
feedback-driven framework is designed to support temporal situational awareness, workload balancing,
and trust resilience in high-pressure environments where explicit communication may be limited, but
implicit signals provide actionable insight. The following subsections detail each component of the
framework, including multimodal feedback sensing and inference, multi-objective trust modeling, and
explanation feature adaptation.</p>
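      <p>One iteration of the closed loop described above might be skeletonized as follows. Every function body is a deliberately simplistic stand-in (the real components would be the personalized state-inference models and the neurofuzzy trust engine detailed in the subsections below), and the category labels and thresholds are illustrative assumptions.</p>

```python
# Skeleton of one closed-loop iteration: sense -> infer state -> estimate trust
# -> adapt explanation. All bodies are simplistic placeholders for illustration.

def infer_user_state(signals):
    """Stand-in for multimodal inference of workload (W), stress (S), valence (E)."""
    return signals.get("W", "medium"), signals.get("S", "low"), signals.get("E", "neutral")

def estimate_trust(workload, stress, valence, performance):
    """Stand-in for the multi-objective neurofuzzy trust inference engine."""
    if performance > 0.8 and stress == "low":
        return "high"
    if workload == "high" or stress == "high" or valence == "negative":
        return "low"
    return "medium"

def adapt_explanation(trust, workload):
    """Modulate explanation features (granularity, timing, mode) from trust state."""
    if trust == "low" and workload == "high":
        return {"granularity": "short", "timing": "reactive", "mode": "visual"}
    return {"granularity": "detailed", "timing": "proactive", "mode": "text"}

W, S, E = infer_user_state({"W": "high", "S": "high", "E": "negative"})
print(adapt_explanation(estimate_trust(W, S, E, performance=0.6), W))
```

      <p>The design point is the loop itself: explanation features are a function of estimated trust and user state, and the next round of implicit signals reflects the effect of the adapted explanation.</p>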
      <sec id="sec-3-1">
        <title>3.1. Real-Time Multimodal Inference of Human State</title>
        <p>The system continuously monitors a range of physiological and behavioral signals to infer latent
cognitive and emotional states that are critical for trust assessment, including:
• EEG and pupillometry for evaluating cognitive workload,
• ECG, galvanic skin response (GSR), and heart rate variability (HRV) to detect stress and arousal
levels,
• Facial expressions, gaze tracking, and voice features to infer emotional valence and user
engagement.</p>
        <p>
          Physiological signals are first processed using personalized machine learning models that learn
discriminative representations directly from raw or minimally preprocessed data [
          <xref ref-type="bibr" rid="ref10">10, 37</xref>
          ]. These models
are trained to detect individual-specific stress patterns and emotional states, producing outputs in
trust-relevant categories such as “High Workload” or “Low Valence”. The resulting estimates form the basis
of trust estimation, aligning trust levels with temporal situational awareness, environmental conditions,
and the behavior of other AI agents in the environment. Building on this, a fuzzy rule–based system
integrates these trust estimates with additional situational cues such as cognitive load, urgency, and
human performance indicators to refine trust as a continuous value. This fuzzy reasoning captures the
uncertainty and gradations inherent in human states (e.g., “moderately high stress” resulting in a partial
reduction of trust rather than a complete breakdown). Finally, in contexts such as emergency response,
this integrated trust estimation and contextual modeling enable the system to adapt explainability
features, ensuring that the reasoning provided is both appropriate and supportive of human
decision-making.
        </p>
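        <p>
A minimal sketch of how the continuous model outputs could be discretized into such trust-relevant category labels is shown below; the cut points are illustrative assumptions and would be personalized per user in practice:

```python
import bisect

# Illustrative cut points only; a deployed system would tune these per user,
# as the personalized models described above imply.
CUTS = (0.33, 0.66)
LEVELS = ("Low", "Medium", "High")

def trust_category(dimension: str, score: float) -> str:
    """Map a normalized [0, 1] score to a label such as 'High Workload'."""
    return f"{LEVELS[bisect.bisect(CUTS, score)]} {dimension}"
```

For example, a workload score of 0.8 would be reported as "High Workload", and a valence score of 0.2 as "Low Valence".
        </p>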
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Fuzzy Logic Based Trust Inference</title>
        <p>To translate these human state metrics into actionable trust estimates, we suggest a multi-objective
neuro-fuzzy inference method. This hybrid approach exemplifies a neurosymbolic architecture, where
neural signal processing informs symbolic reasoning. It combines the perceptual strength of
physiological sensing with interpretable symbolic decision logic to support transparent trust calibration. While
current fuzzy membership functions are fixed for conceptual clarity, the framework is designed to
support personalization and future learning-based adaptation where parameters can be tuned per user
or task. This approach serves as an interpretable guardrail that encodes literature-grounded, rule-based
mappings (see Table 3), helping filter and structure physiological and contextual signals before any
downstream reasoning (e.g., via large language models) is performed.</p>
        <p>Trust is estimated as a categorical variable with three levels (Low, Medium, or High) based on a
combination of user physiological and affective states and system performance metrics. The model
enables interpretable reasoning that links observed human state and AI behavior to well-established
trust dimensions such as reliability, competence, predictability, transparency, and adaptability. The
model takes the following inputs:
• Input Variables
– Workload (W): categorized as Low, Medium, or High; derived from EEG features, gaze
data, or behavioral task-switching patterns.
– Stress level (S): categorized as Low, Medium, or High; inferred from ECG, GSR, and HRV
metrics.
– Emotion valence (E): categorized as Negative, Neutral, or Positive; estimated from facial
expressions, tone of voice, or affective models.
– System performance score (P): a normalized score between 0 and 1; computed from task
metrics such as success rate and error frequency, allowing generalization across domains.
• Trust Output</p>
        <p>– Trust (T): classified as Low, Medium, or High.</p>
        <p>The fuzzy trust inference system maps normalized input variables into fuzzy linguistic categories
using membership functions. Each input (e.g., workload, stress, emotion valence, performance) is
associated with three fuzzy sets: Low, Medium, and High, except for emotion valence, which uses
Negative, Neutral, and Positive.</p>
        <p>1. Workload (W), Stress (S)</p>
        <p>
          Both workload and stress are defined over the domain [0, 1] and share the same triangular membership
structure:
        </p>
        <p>Low(x) = 1 if x ≤ 0.2; (0.5 − x)/0.3 if 0.2 &lt; x ≤ 0.5; 0 if x &gt; 0.5</p>
        <p>Medium(x) = 0 if x ≤ 0.2 or x ≥ 0.8; (x − 0.2)/0.3 if 0.2 &lt; x ≤ 0.5; (0.8 − x)/0.3 if 0.5 &lt; x &lt; 0.8</p>
        <p>High(x) = 0 if x ≤ 0.5; (x − 0.5)/0.3 if 0.5 &lt; x ≤ 0.8; 1 if x &gt; 0.8</p>
        <p>2. Emotion Valence (E)</p>
        <p>Emotion valence is modeled over the range [−1, 1], with analogous membership functions for its
Negative, Neutral, and Positive categories.</p>
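        <p>
These piecewise definitions can be written compactly in the equivalent clamped-linear form. The following sketch reuses the breakpoints 0.2, 0.5, and 0.8 from the definitions above; the rescaling of valence to the unit interval is an illustrative assumption, since the valence membership functions are not reproduced here:

```python
# Membership functions in clamped-linear (max/min) form, equivalent to the
# piecewise definitions in the text (breakpoints 0.2, 0.5, 0.8).
def mu_low(x: float) -> float:
    # 1 up to 0.2, then a linear ramp down to 0 at 0.5 (shoulder shape)
    return max(0.0, min(1.0, (0.5 - x) / 0.3))

def mu_medium(x: float) -> float:
    # 0 outside (0.2, 0.8), peaking at 1 when x = 0.5 (triangular)
    return max(0.0, min((x - 0.2) / 0.3, (0.8 - x) / 0.3))

def mu_high(x: float) -> float:
    # 0 up to 0.5, then a linear ramp up to 1 at 0.8 (shoulder shape)
    return max(0.0, min(1.0, (x - 0.5) / 0.3))

def valence_to_unit(v: float) -> float:
    # Assumption: rescale valence from [-1, 1] to [0, 1] so the same shapes
    # can serve the Negative/Neutral/Positive sets.
    return (v + 1.0) / 2.0
```

Note that Low and High are shoulder (trapezoidal) shapes while Medium is a true triangle; the clamped-linear form makes this explicit without changing the stated breakpoints.
        </p>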
        <p>
          These functions are derived from literature indicating that increased workload and stress reduce
trust in automation [
          <xref ref-type="bibr" rid="ref5">5, 19</xref>
          ], while positive valence and higher performance promote trust. At present,
the rules are implemented as heuristics to demonstrate the framework conceptually. Future work will
focus on validating rule thresholds and evaluating their generalization across different users, tasks,
and noise conditions. The personalized machine learning models provide individualized stress and
emotion features, which are then transformed by the fuzzy rules into graded trust categories. This
mapping of continuous physiological inputs into interpretable fuzzy categories provides a transparent
and adaptable mechanism for real-time trust modeling. The fuzzy trust inference model estimates
trust levels based on these real-time assessments of workload, stress, emotional valence, and system
performance. Using a set of fuzzy rules (Table 3) derived from empirical research and domain knowledge
(see Table 1), the model captures complex interactions among these factors to produce a unified trust
estimate, classified as Low, Medium, or High. This approach enables interpretable reasoning and
supports adaptive AI behavior aligned with the operator’s current cognitive-affective state, improving
human-AI collaboration.
        </p>
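        <p>
A minimal Mamdani-style sketch of this inference step is given below; the three rules are illustrative stand-ins for the paper’s Table 3 (which is not reproduced here), and the membership functions follow the breakpoints stated above:

```python
# Minimal Mamdani-style sketch of fuzzy trust inference. The three rules are
# illustrative stand-ins for Table 3; breakpoints follow Section 3.2.
def mu_medium(x: float) -> float:
    return max(0.0, min((x - 0.2) / 0.3, (0.8 - x) / 0.3))

def mu_high(x: float) -> float:
    return max(0.0, min(1.0, (x - 0.5) / 0.3))

def infer_trust(workload: float, stress: float,
                valence01: float, performance: float) -> str:
    """All inputs normalized to [0, 1]; returns 'Low', 'Medium', or 'High'."""
    strength = {"Low": 0.0, "Medium": 0.0, "High": 0.0}
    # IF workload is High AND stress is High THEN trust is Low
    strength["Low"] = min(mu_high(workload), mu_high(stress))
    # IF workload is Medium THEN trust is Medium
    strength["Medium"] = mu_medium(workload)
    # IF performance is High AND valence is Positive THEN trust is High
    strength["High"] = min(mu_high(performance), mu_high(valence01))
    # Defuzzify by picking the category with the strongest activation.
    return max(strength, key=strength.get)
```

Rule firing uses min for conjunction and the strongest activation wins; a full implementation would aggregate all Table 3 rules and could defuzzify to a continuous value instead of a label.
        </p>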
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Trust Sensitive Explanation Adaptation</title>
        <p>
          The model focuses on seven core explainability features that shape user experience and collaboration.
We define these key features as follows:
1. Timing: Timely delivery of explanations is vital in high-pressure situations. Explanations can
be delivered proactively, ahead of an AI action, to minimize potential confusion (e.g., "Avoiding
unstable debris ahead"), or reactively, triggered by user hesitation or unexpected AI behavior.
In an emergency context, timing must align with both the task phase and the user’s attentional
bandwidth to avoid distraction or delay [
          <xref ref-type="bibr" rid="ref6">18, 19, 6</xref>
          ].
2. Duration: The length of explanations should be carefully adapted to the time sensitivity of the
situation and the user’s cognitive capacity. Under high cognitive load or time pressure, brief
explanations, typically lasting 2–3 seconds, are more effective in preserving situational focus
and preventing distraction. On the other hand, when cognitive load is lower, longer or layered
explanations can offer deeper insight without overwhelming the user [18, 19]. For instance,
during search and rescue triage, the system may initially provide short verbal alerts, then follow
up with optional elaboration once the situation stabilizes.
3. Granularity: Explanation granularity refers to the level of detail provided in the explanation.
        </p>
        <p>High-level summaries, such as "Scanning lower level first," help reduce the user’s information
processing demands. In contrast, detailed step-by-step explanations, for example, "Entering sector
B → mapping → thermal anomaly detected," are better suited for experienced users or situations
where trust is high. The granularity should be adapted based on factors such as user familiarity,
workload, and trust levels [18, 33, 38] to ensure explanations remain cognitively accessible while
still informative.
4. Content: Explanation content is chosen based on task relevance and the user’s current focus.</p>
        <p>Contextual or local explanations, such as "Rerouting due to obstacle," emphasize immediate
actions or environmental conditions. In contrast, hierarchical explanations, for example,
"Prioritizing lower floors due to heat signature density," communicate broader planning strategies. In
emergency scenarios, providing contextual content enhances temporal situational awareness and
responsiveness [39, 30].
5. Transparency: Transparency shapes the user’s understanding of the AI’s decision-making
process. "How" transparency, such as "Based on heatmap and terrain risk, path updated," helps
users evaluate the system’s methods. "Why" transparency, for example, "Avoiding risk to maximize
coverage," clarifies the AI’s intent and goal reasoning. Both types support mental model alignment
and trust; however, the appropriate level and form of transparency should be adapted based on
the user’s state and the context [40, 28, 32].
6. Adaptability: Adaptability acts as the central mechanism that dynamically modulates all other
explainability features in real time. This capability allows AI systems to selectively tailor
explanations to align with the user’s current state and task objectives. Under conditions of high workload
or stress, the system simplifies explanations and employs lower-detail modes to reduce cognitive
load. Conversely, when users are calm and trust levels are high, the system can provide more
complex and interactive explanations. This adaptability ensures that explanations remain both
informative and sustainable, especially in high-pressure environments [41, 28, 32, 16].
7. Mode of Delivery: The medium used to deliver explanations significantly influences user
comprehension and cognitive load. Visual formats such as maps, trajectory overlays, or alert
icons are ideal when the user’s auditory attention is available. Textual explanations, including
on-screen summaries or status updates, are more suitable during quieter moments or post-task
phases. Auditory delivery through spoken instructions works best when the user’s hands and
eyes are occupied. Combining multiple channels in a multimodal approach—such as spoken plus
visual alerts—enhances system resilience and inclusivity. Research by Adadi and Berrada [42]
demonstrates that multimodal explanations improve user understanding, particularly when users
are multitasking or experiencing stress.</p>
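        <p>
One way to operationalize the trust-dependent modulation of these seven features is a simple profile lookup; every concrete value below is an illustrative choice consistent with the descriptions above, not a prescription of the framework:

```python
# Assumed mapping from inferred trust level to a configuration of the seven
# explainability features. Values are illustrative choices consistent with
# the feature descriptions in the text, not prescriptions from the paper.
EXPLANATION_PROFILES = {
    "Low": {                       # stressed / low-trust operator
        "timing": "proactive",
        "duration_s": 3,           # brief, per the 2-3 s guidance
        "granularity": "high-level",
        "content": "contextual",
        "transparency": "why",
        "adaptability": "simplified",
        "mode": "audio",
    },
    "Medium": {
        "timing": "proactive",
        "duration_s": 6,
        "granularity": "mixed",
        "content": "contextual",
        "transparency": "how",
        "adaptability": "standard",
        "mode": "audio+visual",
    },
    "High": {                      # calm, engaged, high-trust operator
        "timing": "reactive",
        "duration_s": 12,
        "granularity": "step-by-step",
        "content": "hierarchical",
        "transparency": "how+why",
        "adaptability": "interactive",
        "mode": "visual-interactive",
    },
}

def adapt_explanation(trust_level: str) -> dict:
    return EXPLANATION_PROFILES[trust_level]
```

A static lookup is only the degenerate case; the adaptability feature described above would in practice adjust individual entries continuously rather than switching whole profiles.
        </p>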
        <p>The inferred trust level is used to modulate these key explainability features. This trust adaptive
mechanism supports continuous calibration, allowing the system to respond not just to performance,
but to how the human feels and functions during collaboration.</p>
        <p>While explainability is often discussed as a means of improving user understanding or meeting
regulatory requirements, in high stakes human AI teaming, it serves a deeper role: calibrating trust in
real time. As trust is highly sensitive to changes in workload, stress, and affect, static or misaligned
explanations may unintentionally erode confidence or overload the user. In contrast, adaptive
explainability tuned to the user’s current cognitive and emotional state can serve as a powerful tool for
trust repair, reinforcement, and regulation. Consider a search and rescue drone system operating in
post-disaster environments. A human operator under high stress and workload may be overwhelmed
by frequent decision updates. The system detects reduced HRV, elevated EEG load, and negative valence,
infers low trust, and adapts by issuing short, confidence-framed explanations through audio (“Clear
path detected. High certainty.”). If later states show reduced load and increased engagement, it shifts to
detailed, interactive visualizations for planning and collaboration.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Future Directions</title>
      <p>While this work establishes a conceptual foundation for adaptive explainability in high-stakes human-AI
teaming, several important avenues remain for future investigation. First, two-way communication
between human and AI teammates should be more explicitly modeled not only to adapt AI explanations
to user states, but also to enable reciprocal influence where the AI dynamically adjusts its behavior
based on user reactions and evolving task demands. Such bi-directional interaction will allow the
AI to both respond to human needs and evaluate and refine its own performance, closing the loop
between perception, explanation, and behavior. Second, cultural and dispositional factors play a
foundational role in trust formation but remain underexplored in adaptive XAI. Future research should
investigate how traits such as uncertainty avoidance, communication preferences, and prior experience
influence trust dynamics and explanation preferences, enabling more inclusive and culturally aware
explanation strategies. Third, implementation and evaluation in interactive, dynamic environments,
such as simulation-based emergency response scenarios, will be essential to validate the framework.
These settings offer controllable, high-fidelity contexts for assessing how real-time physiological and
behavioral feedback impacts explanation effectiveness, trust calibration, and team performance under
pressure. Fourth, advances in generative agent simulations [43] open opportunities for large-scale
validation using synthetic populations embedded with memory, social reasoning, and behavioral
diversity. These agent-based testbeds can be used to examine long-term trust trajectories and
cross-profile adaptation strategies in simulated high-stakes team settings. Finally, to ensure AI safety in
high-pressure decision environments, future work must incorporate safeguards that prevent explanation
misuse or cognitive overload. Adaptive explanation systems must remain transparent, interpretable,
and bounded by safety constraints that prevent miscalibration of trust—especially under uncertainty or
stress. Embedding safety-aware logic into adaptation rules (e.g., thresholds on explanation complexity
or delivery timing) will help maintain alignment with human cognitive capacity, trust boundaries, and
ethical standards in mission-critical operations.</p>
      <p>In conclusion, advancing adaptive explainability through bi-directional interaction, cultural
awareness, scalable simulation, and embedded safety principles will help realize the next generation of
trust-sensitive, cognitively aligned, and ethically grounded human-AI systems. While this work is
conceptual, validation efforts are underway. Preliminary pilot studies have shown that latency in
explanation delivery can negatively impact trust [17], reinforcing the need for real-time adaptation. The
use of multimodal signals (EEG, ECG, GSR, etc.) allows the system to remain robust even if individual
channels degrade. Future work will explore how conflicting or missing signals are resolved through
confidence-based weighting and ensemble logic.</p>
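      <p>
Under the assumption that each channel reports a per-window confidence score, the confidence-based weighting mentioned above might be sketched as follows; channel names and values are hypothetical:

```python
# Hedged sketch of confidence-based weighting across signal channels.
# A degraded channel reports None and is simply dropped from the weighted
# average, one way a system could stay robust as sensors fail.
def fuse_estimates(estimates):
    """estimates maps channel name -> (value or None, confidence in [0, 1])."""
    live = [(value, conf) for value, conf in estimates.values()
            if value is not None]
    total = sum(conf for _, conf in live)
    if not live or total == 0.0:
        return None  # all channels degraded; defer to a fallback policy
    return sum(value * conf for value, conf in live) / total

# e.g. EEG down (None), ECG and GSR still contributing:
# fuse_estimates({"eeg": (None, 0.9), "ecg": (0.6, 0.8), "gsr": (0.8, 0.4)})
```

Resolving outright conflicts between channels would require more than averaging, for example the ensemble logic mentioned above, but dropping dead channels and weighting the rest by confidence is a reasonable first layer.
      </p>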
      <p>Finally, ethical considerations are critical when designing adaptive explanation mechanisms. While
tailoring explanations to user states can improve usability and decision support, there is also a risk
that such adaptations could artificially inflate user trust beyond what the system’s actual competence
warrants. For example, if the system simplifies or frames its reasoning too persuasively under
high-stress conditions, users may place undue reliance on it, leading to overtrust and potential harm in
safety-critical settings.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Contribution</title>
      <p>This work contributes to affective, situated, and trustworthy AI by introducing a conceptual framework
for real-time adaptation of explainability to support swift trust and effective teamwork in high-stakes
environments. The framework integrates multimodal implicit feedback (physiological, behavioral,
environmental, and contextual signals) to infer user states such as workload, stress, and emotional
valence. These inferred states inform the dynamic adjustment of explanation features (e.g., timing,
granularity, modality), enabling alignment with the user’s cognitive and affective demands in pursuit
of time-critical task goals. Explainability is reframed as a multi-objective adaptive function balancing
transparency, cognitive efficiency, and trust calibration. The framework is designed to be model-agnostic
and extensible, supporting the integration of diverse trust modeling and learning mechanisms. It enables
AI systems to act as responsive teammates fostering trust, maintaining collaboration under pressure,
and supporting decision-making when explicit communication is limited. This lays a foundation
for generalizable real-world deployment of adaptive XAI in high-stakes domains such as emergency
response, medical operations, and mission-critical decision support, where human-machine teaming
must remain transparent, affect-aware, and cognitively efficient under uncertainty.</p>
      <p>Unlike existing XAI models that rely on predefined user profiles or static inference pipelines [18,
31], our framework incorporates real-time, user-specific trust estimation and links these inferences
directly to explanation adaptation. It addresses a critical gap in the adaptive XAI literature by treating
explainability as a dynamic, multi-objective function responsive to workload, stress, and affective cues.
This personalized and continuous adjustment differentiates our approach from prior efforts that lack
temporal responsiveness or user-state integration.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Acknowledgments</title>
      <p>This research was funded by the Asian Office of Aerospace Research and Development through Grant
23IOA087.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used GPT-4 for grammar and spelling checking.
After using this tool, the author(s) reviewed and edited the content as needed and take(s)
full responsibility for the publication’s content.</p>
      <p>[13] … for culturally adaptive emotional intelligence in HCI, 2025. URL: https://arxiv.org/abs/2506.14166.
arXiv:2506.14166.
[14] X. Hao, B. Nakisa, M. N. Rastgoo, G. Pang, Bcr-drl: Behavior- and context-aware reward for deep
reinforcement learning in human-ai coordination, 2025. URL: https://arxiv.org/abs/2408.07877.
arXiv:2408.07877.
[15] D. Spina, J. Gwizdka, K. Ji, Y. Moshfeghi, J. Mostafa, T. Ruotsalo, M. Zhang, A. Ahmad, S. F. D.</p>
      <p>Al Lawati, N. Boonprakong, N. Fernando, J. He, O. Hoeber, G. Jayawardena, B.-G. Lee, H. Liu,
M. Pike, A. Pirmoradi, B. Nakisa, M. N. Rastgoo, F. D. Salim, F. Scott, S. Sun, H. Tang, D. Towey,
M. L. Wilson, Report on the 3rd workshop on neurophysiological approaches for interactive
information retrieval (neurophysiir 2025) at sigir chiir 2025, SIGIR Forum 59 (2025) 1–26.
[16] U. Soni, S. Sreedharan, S. Kambhampati, Not all users are the same: Providing personalized
explanations for sequential decision making problems, in: 2021 IEEE/RSJ International Conference
on Intelligent Robots and Systems (IROS), 2021, pp. 6240–6247. URL: https://doi.org/10.1109/
IROS51168.2021.9636331. doi:10.1109/IROS51168.2021.9636331.
[17] S. Milivojevic, M. Sobhani, N. Webb, Z. Madin, J. Ward, S. Yusuf, C. Baber, E. R. Hunt, Swift
trust in mobile ad hoc human-robot teams, 2024. URL: https://doi.org/10.1145/3686038.3686057.
doi:10.1145/3686038.3686057.
[18] R. Paleja, M. Ghuy, N. R. Arachchige, R. Jensen, M. Gombolay, The utility of explainable ai
in ad hoc human-machine teaming, 2021. URL: https://dl.acm.org/doi/10.5555/3540261.3540308,
https://github.com/CORE-Robotics-Lab/Utility-of-Explainable-AI-NeurIPS2021.
[19] M. R. Endsley, Supporting human-ai teams: transparency, explainability, and situation awareness,
Computers in Human Behavior 140 (2023) 107574. URL: https://www.sciencedirect.com/science/
article/pii/S0747563222003946. doi:https://doi.org/10.1016/j.chb.2022.107574.
[20] F. Shaffer, J. P. Ginsberg, An overview of heart rate variability metrics and norms, Frontiers in
Public Health 5 (2017) 258. doi:10.3389/fpubh.2017.00258.
[21] H. N. Green, T. Iqbal, Using physiological measures, gaze, and facial expressions to model human
trust in a robot partner, 2025. URL: https://arxiv.org/abs/2504.05291.
[22] A. Aygun, H. Ghasemzadeh, R. Jafari, Robust interbeat interval and heart rate variability estimation
method from various morphological features using wearable sensors, IEEE Journal of Biomedical and
Health Informatics 24 (2020) 2238–2250. doi:10.1109/jbhi.2019.2962627.
[23] M. N. Rastgoo, B. Nakisa, R. Andry, M. Frederic, V. Chandran, Driver stress levels detection
system using hyperparameter optimization, Journal of Intelligent Transportation Systems 28
(2024) 443–458. URL: https://doi.org/10.1080/15472450.2022.2140046. doi:10.1080/15472450.2022.2140046.
[24] B. Nakisa, M. N. Rastgoo, D. Tjondronegoro, V. Chandran, Evolutionary computation algorithms
for feature selection of eeg-based emotion recognition using mobile sensors, Expert Systems
with Applications 93 (2018) 143–155. URL: https://www.sciencedirect.com/science/article/pii/
S0957417417306747. doi:https://doi.org/10.1016/j.eswa.2017.09.062.
[25] H. M. Khalid, L. W. Shiung, P. Nooralishahi, Z. Rasool, M. G. Helander, L. C. Kiong, C. Ai-vyrn,
Exploring psycho-physiological correlates to trust: implications for human-robot-human
interaction, Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60 (2016)
697–701. URL: https://journals.sagepub.com/doi/abs/10.1177/1541931213601160. doi:10.1177/
1541931213601160.
[26] S. C. Kohn, E. J. de Visser, E. Wiese, Y. C. Lee, T. H. Shaw, Measurement of trust in automation: A
narrative review and reference guide, Frontiers in Psychology 12 (2021) 604977. URL: https://doi.org/10.3389/
fpsyg.2021.604977. doi:10.3389/fpsyg.2021.604977.
[40] … doi:10.1109/ACCESS.2023.3294569.
[41] J. D. Lee, K. A. See, Trust in automation: Designing for appropriate reliance, Human Factors
46 (2004) 50–80. URL: https://journals.sagepub.com/doi/abs/10.1518/hfes.46.1.50_30392. doi:10.
1518/hfes.46.1.50_30392.
[42] A. Adadi, M. Berrada, Peeking inside the black-box: A survey on explainable artificial intelligence
(xai), IEEE Access 6 (2018) 52138–52160. URL: https://doi.org/10.1109/ACCESS.2018.2870052.
doi:10.1109/ACCESS.2018.2870052.
[43] J. S. Park, C. Q. Zou, A. Shaw, B. M. Hill, C. Cai, M. R. Morris, R. Willer, P. Liang, M. S. Bernstein,
Generative agent simulations of 1,000 people, 2024. URL: https://arxiv.org/abs/2411.10109.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y. X.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <article-title>Cognitive Human-Machine Interfaces and Interactions for Avionics Systems</article-title>
          ,
          <source>Phd thesis</source>
          , RMIT University,
          <year>2021</year>
          . URL: https://doi.org/10.25439/rmt.27601791. doi:
          <volume>10</volume>
          .25439/rmt. 27601791.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Dehais</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Duprès</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Blum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Drougard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Scannella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. N.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lotte</surname>
          </string-name>
          ,
          <article-title>Monitoring pilot's mental workload using erps and spectral power with a six-dry-electrode eeg system in real flight conditions</article-title>
          ,
          <source>Sensors</source>
          (Basel, Switzerland)
          <volume>19</volume>
          (
          <year>2019</year>
          ). URL: https://api.semanticscholar.org/CorpusID: 83462067.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Rodriguez Rodriguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E. Bustamante</given-names>
            <surname>Orellana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. K.</given-names>
            <surname>Chiou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Cooke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <article-title>A review of mathematical models of human trust in automation</article-title>
          , Frontiers in Neuroergonomics Volume 4
          <article-title>-</article-title>
          <year>2023</year>
          (
          <year>2023</year>
          ). URL: https://www.frontiersin.org/journals/neuroergonomics/articles/10. 3389/fnrgo.
          <year>2023</year>
          .
          <volume>1171403</volume>
          . doi:
          <volume>10</volume>
          .3389/fnrgo.
          <year>2023</year>
          .
          <volume>1171403</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Meyerson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Weick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Kramer</surname>
          </string-name>
          ,
          <article-title>Swift trust and temporary groups, Sage Publications</article-title>
          , Inc, Thousand Oaks, CA, US,
          <year>1996</year>
          , pp.
          <fpage>166</fpage>
          -
          <lpage>195</lpage>
          . URL: https://doi.org/10.4135/9781452243610.n9. doi:
          <volume>10</volume>
          .4135/9781452243610.n9.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Hancock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Billings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Schaefer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y. C.</given-names>
            <surname>Chen</surname>
          </string-name>
          , E. J. de Visser,
          <string-name>
            <given-names>R.</given-names>
            <surname>Parasuraman</surname>
          </string-name>
          ,
          <article-title>A meta-analysis of factors afecting trust in human-robot interaction</article-title>
          ,
          <source>Human Factors</source>
          <volume>53</volume>
          (
          <year>2011</year>
          )
          <fpage>517</fpage>
          -
          <lpage>527</lpage>
          . URL: https://journals.sagepub.com/doi/abs/10.1177/0018720811417254. doi:
          <volume>10</volume>
          .1177/ 0018720811417254.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <article-title>The efects of explainability and causability on perception, trust, and acceptance: Implications for explainable ai</article-title>
          ,
          <source>International Journal of Human-Computer Studies</source>
          <volume>146</volume>
          (
          <year>2021</year>
          )
          <article-title>102551</article-title>
          . URL: https://www.sciencedirect.com/science/article/pii/S1071581920301531. doi:https: //doi.org/10.1016/j.ijhcs.
          <year>2020</year>
          .
          <volume>102551</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Abuhmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>El-Sappagh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Muhammad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Alonso-Moral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Del</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Díaz-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Herrera</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence (xai): What we know and what is left to attain trustworthy artificial intelligence</article-title>
          ,
          <source>Information Fusion</source>
          <volume>99</volume>
          (
          <year>2023</year>
          )
          <article-title>101805</article-title>
          . URL: https://www.sciencedirect.com/science/article/pii/S1566253523001148. doi:https: //doi.org/10.1016/j.inffus.
          <year>2023</year>
          .
          <volume>101805</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.</given-names>
            <surname>Akash</surname>
          </string-name>
          , W.-L. Hu,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Reid</surname>
          </string-name>
          ,
          <article-title>A classification model for sensing human trust in machines using eeg and gsr</article-title>
          ,
          <source>ACM Trans. Interact. Intell. Syst</source>
          .
          <volume>8</volume>
          (
          <issue>2018</issue>
          )
          <article-title>Article 27</article-title>
          . URL: https://doi.org/10. 1145/3132743. doi:
          <volume>10</volume>
          .1145/3132743.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Choo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Nam</surname>
          </string-name>
          ,
          <article-title>Detecting human trust calibration in automation: A convolutional neural network approach</article-title>
          ,
          <year>2022</year>
          . URL: https://research.ebsco.com/linkprocessor/plink?id=adc76bfc-bd10-3828-ad61-8d1d02d56aab. doi:10.1109/THMS.2021.3137015.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nakisa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Rastgoo</surname>
          </string-name>
          ,
          <article-title>Robust emotion recognition via bi-level self-supervised continual learning</article-title>
          ,
          <year>2025</year>
          . URL: https://arxiv.org/abs/2505.10575.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>Dynamic and quantitative trust modeling and real-time estimation in human-machine co-driving process</article-title>
          ,
          <source>Transportation Research Part F: Traffic Psychology and Behaviour</source>
          <volume>106</volume>
          (
          <year>2024</year>
          )
          <fpage>306</fpage>
          -
          <lpage>327</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1369847824002006. doi:10.1016/j.trf.2024.08.001.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N.</given-names>
            <surname>Hulle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Aroca-Ouellette</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Ries</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Brawer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roncone</surname>
          </string-name>
          ,
          <article-title>Eyes on the game: Deciphering implicit human signals to infer human proficiency, trust, and intent</article-title>
          ,
          <source>arXiv.org</source>
          (
          <year>2024</year>
          ). URL: https://arxiv.org/abs/2407.03298.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>N.</given-names>
            <surname>Pussadeniya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nakisa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Rastgoo</surname>
          </string-name>
          ,
          <article-title>Affective-CARA: A knowledge graph driven framework</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>