<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1007/978</article-id>
      <title-group>
        <article-title>Better, Proactive, Adaptable and Symbiotic Conversational Agents for Digital Accessibility</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Esposito</string-name>
          <email>andrea.esposito@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rosa Lanzilotti</string-name>
          <email>rosa.lanzilotti@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Piccinno</string-name>
          <email>antonio.piccinno@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Bari Aldo Moro</institution>
          ,
          <addr-line>Via E. Orabona 4, 70125 Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>15713</volume>
      <issue>300</issue>
      <fpage>161</fpage>
      <lpage>170</lpage>
      <abstract>
        <p>Conversational Artificial Intelligence (AI) holds significant promise for promoting digital inclusion, offering natural language access to information and services for citizens who face barriers with traditional, visually oriented web interfaces. Yet current chatbots often stop at retrieving or displaying content, providing little support for interpretation, comparison, or decision-making. This position paper, developed within the context of the PROTECT project (imPROving ciTizEn inClusivity Through Conversational AI), argues that future conversational agents must go beyond basic information delivery to become better, proactive, and symbiotic. Building on recent work on Explanation User Interfaces (XUIs) and Human-Centered AI, we explore how explanations can be integrated into dialogue to foster trust, transparency, and collaboration. We outline opportunities for conversational explanation interfaces (ConvXUIs), identify research challenges (including evaluation metrics, cognitive load, personalization, integration with web architectures, and ethical regulation), and discuss how symbiotic interaction can empower users as active participants in digital life. By advancing conversational agents that explain, justify, and co-construct meaning, we envision a path toward more inclusive, trustworthy, and effective digital accessibility.</p>
      </abstract>
      <kwd-group>
        <kwd>Conversational Agents</kwd>
        <kwd>Accessibility</kwd>
        <kwd>Explanation User Interfaces</kwd>
        <kwd>Conversational Explanations</kwd>
        <kwd>Symbiotic AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Digital inclusion is increasingly recognized as a fundamental right, enshrined in Directive (EU)
2019/882, which requires full and effective participation in society for all citizens, including people with
disabilities [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Yet, significant digital barriers remain: in Italy, four out of ten people still do not use
the Internet regularly, and more than half of the population lacks basic digital skills [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. These gaps
prevent equal access to knowledge, services, and opportunities.
      </p>
      <p>
        Conversational Artificial Intelligence (AI) has emerged as a promising solution to bridge these barriers
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Chatbots and voice-based agents can mediate access to digital services, supporting citizens with
visual impairments, elderly users, and other fragile populations who struggle with traditional,
visually-centric web interfaces. The PROTECT project (imPROving ciTizEn inClusivity Through Conversational
AI) tackles this challenge by envisioning a paradigm of Conversational Web Browsing, enabling users
to access, navigate, and understand the Web through natural language interaction [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>However, conversational agents for digital accessibility currently face several limitations. They often
stop after retrieving or opening a website, leaving users without guidance to interpret, compare, or act
upon the information presented. Moreover, their reasoning processes remain opaque, risking mistrust
and exclusion rather than empowerment.</p>
      <p>
        This position paper argues that overcoming these challenges requires rethinking the role of chatbots
for digital accessibility. They should not only deliver information but also explain, justify, and collaborate
with their users. We propose that chatbots need to become adaptable [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], proactive [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and symbiotic
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. They should follow human-centered principles, anticipate when explanations are needed, and
engage in forms of collaboration that help users achieve their goals of accessing digital content. To do
this, they must incorporate lessons from the literature on Explanation User Interfaces (XUIs) and align
with the broader perspective of Human-Centered AI, where explainability and trust are essential for
meaningful interaction.
      </p>
      <p>COL-SAI 2025: Workshop on COllaboration and Learning through Symbiotic Artificial Intelligence, in conjunction with the 16th
https://ivu.di.uniba.it/people/piccinno (A. Piccinno). CEUR Workshop Proceedings, ceur-ws.org.</p>
      <p>The rest of the paper is organized as follows. Section 2 reviews related work on Explanation User
Interfaces (XUIs) and places them in the context of Human-Centered AI, with particular attention to
the role of explainability in fostering trust and trustworthiness. Section 3 considers the application
of these ideas to conversational agents, outlining the opportunities and challenges of embedding
explanations into dialogue. Section 4 develops the main position of the paper, presenting three design
imperatives for the next generation of chatbots: they should be better, proactive, and symbiotic. Section
5 highlights research challenges and open questions, including evaluation methods, adaptivity, and
ethical considerations. Finally, Section 6 concludes by summarizing the contribution and reflecting on
how the PROTECT project can contribute to the development of inclusive conversational systems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background: Explanation User Interfaces and Human-Centered AI</title>
      <p>
        The need for AI systems to provide meaningful explanations has become a central topic in recent
years [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. Explanations help users understand why a system behaves in a certain way and, more
importantly, whether its suggestions can be trusted.
      </p>
      <p>
        Explainability has emerged as a central concern in the development of artificial intelligence,
particularly in systems that support decision-making or where users must critically evaluate the information
provided [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The field of explainable AI (XAI) has gradually shifted from a purely technical endeavor,
focused on generating explanations, to a more user-centered perspective, where the focus is on how
explanations are received, interpreted, and acted upon by humans [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        A central finding by Cappuccio et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] is that effective explanations are not only about transparency,
but about helpfulness, interactivity, and personalization. XUIs that allow users to explore explanations,
ask follow-up questions, and adapt content to their needs foster higher trust and adoption. This aligns
with the broader vision of Human-Centered Artificial Intelligence (HCAI), which reframes AI not as a
replacement for human intelligence, but as an augmentation tool [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
        ]. Within HCAI, explanations
are a form of dialogue: they sustain trust, calibrate reliance, and enable informed human agency.
      </p>
      <p>
        In this respect, recent work has stressed the need for what Ehsan and Riedl [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] call human-centered
explainable AI (HCXAI), a reflective socio-technical approach that places end users and their values
at the core of explanation design. Similarly, Schoonderwoerd et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] argue for the development of
design patterns that link explanation techniques to user needs, demonstrating that a single explanation
format rarely works for all situations or all users.
      </p>
      <p>
        A key reason for this shift is the recognition that explainability and trust are deeply intertwined.
Explanations are not merely descriptions of an algorithm’s reasoning: they play a crucial role in shaping
whether users choose to rely on a system, and in what way [
        <xref ref-type="bibr" rid="ref10 ref16 ref17 ref9">16, 17, 10, 9</xref>
        ]. Empirical studies confirm
this connection. For instance, experiments in decision-making tasks have shown that explanations
improve not only user performance but also the calibration of trust, helping people to distinguish when
reliance on AI is appropriate and when it is not [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. A broader review of decision support systems has
also identified trust and transparency as recurring evaluation criteria across domains such as healthcare
and finance [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
      <p>
        However, the relationship between explainability and trust is not straightforward. Meta-analyses
show that while there is a positive correlation between the two, the effect is moderate and highly
dependent on contextual factors, such as usability, accuracy, and perceived fairness of the system [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
Philosophical work adds nuance by distinguishing between different kinds of
trust may indeed require explanations, while others rest on broader judgments about competence,
integrity, or benevolence. As Baron [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] notes, explainability may contribute to—but cannot fully
determine—trustworthiness. This distinction highlights that trust in AI is not only about the system’s
internal workings but also about the intentions and accountability of its designers and maintainers.
      </p>
      <p>
        The implications for HCAI and HCXAI are clear [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: Explanations must be accessible,
context-sensitive, and aligned with user goals. Poorly designed or overly technical explanations can easily
backfire, either overwhelming users or creating misplaced confidence. Incremental and adaptive
strategies, which reveal just enough information at the right time, are often more effective than
exhaustive technical transparency. Moreover, evaluation of explanations must take into account not
only subjective perceptions of trust but also behavioral measures of reliance and performance, since
these may diverge.
      </p>
      <p>
        While most studies of explanation user interfaces (XUIs) have focused on static or visual domains
such as expert dashboards or healthcare applications [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], their findings are directly relevant to
conversational agents. In dialogue, explanations unfold dynamically, and their effectiveness depends not
only on content but also on delivery—timing, phrasing, tone, and the agent’s responsiveness all matter.
Trust in chatbots is therefore shaped by the conversational process as much as by the information
disclosed [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. In inclusive contexts such as PROTECT, where users may face barriers of literacy, vision,
or confidence, explanations become not only mechanisms of transparency but also instruments of
reassurance and empowerment. The challenge, then, is to adapt the principles of XUIs and HCAI to
conversational interaction, designing explanations that sustain trust while remaining accessible, concise,
and meaningful.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. From Explanation Interfaces to Conversational Agents</title>
      <p>
        Most of the work on explanation user interfaces has so far concentrated on static or visually oriented
contexts [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Domains such as healthcare, finance, and recommendation systems have been the testing
ground for many explanation techniques, from visualizations of feature importance to counterfactual
examples that show how outcomes would change under different conditions. These applications
typically assume that users engage with explanations through dashboards, charts, or other visual means.
While valuable, this focus leaves relatively unexplored the possibilities that arise when explanations are
embedded in conversations.
      </p>
      <p>
        Conversational agents, such as chatbots and voice-based assistants, differ from static interfaces in one
fundamental respect: they are inherently dialogical [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. Instead of presenting information in a single,
fixed form, they support ongoing interaction where users can ask questions, request clarifications,
and shift direction as the dialogue unfolds. This interactive nature makes conversational agents a
particularly promising frontier for explainability. Explanations are no longer stand-alone artifacts but
become part of a dynamic exchange that can adapt to user needs in real time. This makes them ideal
candidates for embedding explanations into natural language conversations, creating Conversational
Explanation User Interfaces (ConvXUI).
      </p>
      <p>Embedding explanations into conversation opens up several opportunities. One is the possibility of
providing explanations that are seamlessly integrated into dialogue. Rather than requiring users to
interpret graphs or technical terms, the system can state in plain language why it suggested a resource,
or how it prioritized one option over another. For example, instead of passively presenting a list of links,
a chatbot might add: “I suggested this page because it contains the official information you asked for,
and it is maintained by a government agency.” Such conversational explanations situate transparency
directly within the user’s natural flow of interaction.</p>
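      <p>As a minimal sketch of how such a plain-language justification could be assembled, consider the following fragment. The metadata fields and the function name are illustrative assumptions for this paper, not part of any actual PROTECT interface:</p>
      <preformat>
```python
# Hypothetical sketch: turning retrieval metadata into a plain-language
# conversational explanation. The "page" dictionary and its fields are
# illustrative assumptions, not an actual PROTECT API.

def explain_suggestion(page):
    """Build a one-sentence justification for a suggested page."""
    reasons = []
    if page.get("matches_query"):
        reasons.append("it contains the official information you asked for")
    if page.get("source_type") == "government":
        reasons.append("it is maintained by a government agency")
    if not reasons:
        return "I suggested this page as the closest match I could find."
    return "I suggested this page because " + " and ".join(reasons) + "."

print(explain_suggestion({"matches_query": True, "source_type": "government"}))
```
      </preformat>
      <p>The point of the sketch is that the justification is derived from the same metadata the agent already uses for ranking, so transparency requires no separate explanation pipeline.</p>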
      <p>Another opportunity lies in the proactive nature of dialogue. Unlike static UIs, conversational agents
are able to anticipate potential points of confusion and intervene before misunderstanding arises. A
chatbot could, for instance, notify the user if a suggested page does not meet accessibility standards, or
explain the confidence level it assigns to a recommendation. By offering clarifications unprompted,
the agent reduces cognitive load and reassures users that the system is not only functional but also
attentive to their needs.</p>
      <p>
        Dialogue also makes follow-up interactivity natural. If a user is dissatisfied with an explanation or
wants to compare alternatives, they can simply ask questions such as “why not another source?” or “show
me a different option.” This ability to engage in back-and-forth negotiation distinguishes conversational
explanations from the static disclosures found in traditional XUIs. In effect, the explanation process
itself becomes a collaborative activity, rather than a one-way transmission of information.
Dialogue-based explanations are even more relevant in the context of education, where they can either
be employed to implement gradual machine teaching [
        <xref ref-type="bibr" rid="ref24 ref25">24, 25</xref>
        ] or to foster trust [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Intuitively, this is
particularly relevant in the context of digital accessibility, where users do not have access to the source
of information.
      </p>
      <p>Finally, conversational explanations can be adapted to diferent modalities, making them particularly
relevant for inclusive contexts. While textual or spoken explanations may suffice for many, agents can
also be designed to integrate with assistive technologies, ensuring that explanations are perceivable by
users with visual or auditory impairments. This multimodal adaptability strengthens the potential of
conversational agents to act as mediators of digital inclusion.</p>
      <p>
        Taken together, these characteristics suggest that chatbots can operationalize the principles of
explanation user interfaces in ways that go beyond traditional approaches. Explanations in this
setting are not fixed objects but evolving parts of a dialogue, tailored to user goals, responsive to
situational needs, and capable of building trust through transparency and responsiveness. In this sense,
conversational agents offer an underexplored but highly promising space for advancing the design of
explainable and trustworthy AI systems. Additionally, this embodies the two-way interaction that is at
the core of Symbiotic AI [
        <xref ref-type="bibr" rid="ref11 ref27">27, 11</xref>
        ]: on one hand, through ConvXUIs, humans are empowered in learning
new aspects of the websites they interact with, and their capabilities are augmented by overcoming the
difficulties that limit content accessibility; on the other hand, AI agents continuously learn new patterns
of use and continuously improve themselves by interacting with their users [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Towards Better, Proactive, and Symbiotic Chatbots</title>
      <p>
        The limitations of current conversational agents highlight the need for a new design orientation. If
chatbots are to support digital inclusion in meaningful ways, they cannot remain tools that simply
retrieve information. Instead, they must evolve into agents that explain, clarify, and collaborate with
their users [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We argue that three imperatives are particularly important: chatbots must be better,
proactive, and symbiotic.
      </p>
      <sec id="sec-4-1">
        <title>4.1. Better: Human-Centered by Design</title>
        <p>
          To be effective, chatbots must begin from the principles of human-centered design [
          <xref ref-type="bibr" rid="ref23 ref28">23, 28</xref>
          ]. This means
not only creating interfaces that are technically functional, but also ensuring that interactions are aligned
with the abilities, needs, and expectations of diverse users. In practice, this requires explanations that
are accessible and adaptable. A visually impaired person may need a short, clear verbal summary of
why a page was recommended, while a more digitally literate user might want a detailed account of
the selection criteria. Providing multi-level explanations—simple justifications for some, more detailed
reasoning for others—ensures that chatbots can serve a broad population without overwhelming or
underserving particular groups. Importantly, explanations should avoid jargon and remain anchored to
what is relevant for the task at hand, so that they enhance rather than hinder understanding.
        </p>
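        <p>A multi-level strategy of this kind can be sketched as follows. The user-profile field, the three depth levels, and the wording of each explanation are hypothetical illustrations, not a prescribed design:</p>
        <preformat>
```python
# Illustrative sketch of multi-level explanations: the same decision is
# verbalized at different depths depending on a hypothetical user profile.
# Levels, field names, and wording are assumptions for illustration only.

EXPLANATIONS = {
    "brief": "I picked this page because it best matches your request.",
    "standard": ("I picked this page because it best matches your request "
                 "and comes from an official source."),
    "detailed": ("I picked this page because it best matches your request, "
                 "comes from an official source, and passed an automated "
                 "accessibility check (WCAG level AA)."),
}

def explain_for(user_profile):
    # Fall back to the simplest level when the profile gives no preference,
    # so no user is overwhelmed by default.
    depth = user_profile.get("explanation_depth", "brief")
    return EXPLANATIONS.get(depth, EXPLANATIONS["brief"])
```
        </preformat>
        <p>Defaulting to the briefest level reflects the principle stated above: explanations should serve a broad population without overwhelming or underserving particular groups.</p>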
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Proactive: Anticipating Needs and Clarifying Decisions</title>
        <p>
          A second imperative is that chatbots should not wait for users to request explanations but should
offer them proactively when they are likely to be needed [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. In the context of explanation user
interfaces, trust has been shown to be one of the most frequently studied outcomes. However, trust
cannot be imposed; it must be calibrated. Proactive explanations can help in this process. For instance,
a chatbot that highlights potential limitations—such as a source being outdated or a webpage not
complying with accessibility standards—signals to users that it is attentive, reliable, and aligned with
their interests. Similarly, disclosing confidence levels or the rationale for prioritizing one result over
another allows users to make informed decisions about whether to rely on the system. These acts of
proactive explanation are not simply usability features; they are mechanisms through which the chatbot
demonstrates trustworthiness and invites appropriate reliance.
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Symbiotic: Co-Constructing Meaning with Users</title>
        <p>
          Beyond being human-centered and proactive, chatbots should also aim for a form of interaction that can
be described as symbiotic [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Symbiosis implies a relationship in which human and machine capabilities
complement one another [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. In this model, the chatbot is not merely a tool that delivers answers, but a
partner in a collaborative process of sense-making. Explanations become part of an ongoing negotiation:
the system proposes, justifies, and adapts its reasoning, while users refine their questions, adjust their
goals, and provide feedback [29]. Over time, this back-and-forth builds not only better outcomes
for specific tasks but also greater confidence in the agent as a reliable collaborator. Additionally,
explanations may also serve as a means to enable adaptability (i.e., customization explicitly triggered by
the users, for example through end-user development techniques [
          <xref ref-type="bibr" rid="ref4 ref30">4, 30</xref>
          ]) of the conversational agent,
allowing human users to guide the system in preferring some kinds of information over
others [29]. In the context of the PROTECT project, such a symbiotic partnership directly supports the
broader goal of enabling dignified and active participation in digital life, where users are not passive
recipients of information but active participants in shaping their own interactions with technology.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Research Challenges and Future Directions</title>
      <p>Reimagining chatbots as better, proactive, and symbiotic agents raises a number of research challenges
that need to be addressed if this vision is to become reality. These challenges are not only technical
but also methodological and ethical, reflecting the complexity of designing systems that both explain
themselves and support diverse populations.</p>
      <p><bold>Need for Evaluation Metrics.</bold> While existing work on explanation user interfaces has frequently
assessed trust, usability, and satisfaction, conversational settings demand a richer set of measures.
Dialogue-specific aspects such as flow, responsiveness, and perceived empathy play an important role
in how users experience explanations. In addition, proactive explanations require us to assess not
just whether trust increases, but whether it is calibrated—whether users rely on the system
appropriately, neither too little nor too much. Developing robust methods for measuring calibrated trust in
conversational contexts is therefore an important step forward.</p>
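      <p>One candidate behavioral measure is sketched below, under the assumption that each logged decision records whether the agent's suggestion was correct and whether the user followed it; the metric and field names are illustrative, not an established standard:</p>
      <preformat>
```python
# Hedged sketch of a behavioral measure of calibrated trust:
# "appropriate reliance" counts how often the user follows the agent when
# it is right and overrides it when it is wrong. Field names are assumptions.

def appropriate_reliance(interactions):
    """Fraction of decisions where reliance matched agent correctness.

    Each interaction is a dict with two booleans:
      'agent_correct' - whether the agent's suggestion was right
      'user_followed' - whether the user accepted the suggestion
    """
    if not interactions:
        return 0.0
    appropriate = sum(
        1 for i in interactions
        if i["user_followed"] == i["agent_correct"]
    )
    return appropriate / len(interactions)

log = [
    {"agent_correct": True,  "user_followed": True},   # justified trust
    {"agent_correct": False, "user_followed": False},  # justified override
    {"agent_correct": False, "user_followed": True},   # over-reliance
    {"agent_correct": True,  "user_followed": False},  # under-reliance
]
# 2 of the 4 decisions above are appropriately calibrated.
```
      </preformat>
      <p>Such a score captures the calibration point made above: it penalizes over-reliance and under-reliance symmetrically, rather than rewarding trust per se.</p>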
      <p><bold>Explainability vs. Cognitive Load.</bold> Explanations are most useful when they clarify, but they risk
becoming counterproductive if they are too detailed, too frequent, or too technical. For users with
limited digital literacy, overwhelming explanations may cause disengagement rather than empowerment.
Incremental disclosure strategies, in which information is revealed gradually and on demand, may help
strike the right balance. Future work needs to explore how such strategies can be implemented in
conversation without disrupting the natural flow of dialogue.</p>
      <p><bold>Personalization and Adaptivity.</bold> Users differ in terms of expertise, preferences, and cognitive
capacities, and explanations must reflect this diversity. A system that treats all users identically risks
alienating some while underserving others. Adaptive conversational strategies that adjust the depth,
style, and timing of explanations to individual users are therefore a priority. Yet personalization also
raises questions about privacy, fairness, and the risk of overfitting explanations to perceived user
profiles, which will require careful study.</p>
      <p><bold>Integration with Web Architectures.</bold> To realize the PROTECT vision of conversational web
browsing, chatbots must be able to interact seamlessly with web content, extract relevant information, and
present it in accessible ways. This requires coupling dialogue systems with web accessibility layers in
a manner that preserves fidelity while still allowing for adaptation and explanation. Research at this
intersection is still limited and will be crucial for scaling inclusive solutions.</p>
      <p><bold>Ethics and Regulation.</bold> Explanations shape how users perceive and rely on AI systems, and thus have
consequences for fairness, autonomy, and accountability. As the EU AI Act [31] begins to take effect,
developers will need to ensure that conversational agents meet legal requirements for transparency while
also respecting the dignity and agency of their users. A key part of this is ensuring that explanations
do not encourage blind trust or mask limitations, but instead foster a balanced understanding of what
the system can and cannot do.</p>
      <p><bold>Trust Calibration in Dialogue.</bold> Trust in chatbots cannot be imposed; it must be calibrated.
Explanations must strike a balance between revealing system strengths and exposing limitations.
Over-explaining risks overwhelming users, while under-explaining risks fostering blind trust. Developing
conversational strategies for calibrated trust is an open challenge, particularly in inclusive contexts
where users may be vulnerable to over-reliance.</p>
      <p>Addressing these challenges will require contributions from multiple disciplines: computer science
to develop robust techniques, human-computer interaction to study usability and adaptivity, and social
sciences and philosophy to interrogate issues of trust, responsibility, and fairness. Taken together,
these lines of inquiry will help shape the next generation of conversational agents that are not only
technically proficient but also genuinely inclusive and trustworthy.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>
        The PROTECT project takes as its starting point a simple but pressing reality: large groups of citizens
remain excluded from the benefits of digital services, either because of disabilities, situational limitations,
or gaps in digital literacy [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Conversational AI offers a promising path toward addressing these barriers,
but current systems fall short [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. They often provide access to information in a narrow sense—retrieving
or displaying content—without supporting the deeper processes of understanding, comparison, and
decision-making that genuine inclusion requires.
      </p>
      <p>This position paper has argued that to meet these challenges, chatbots must evolve into better,
proactive, and symbiotic agents. Drawing on the literature on explanation user interfaces and
human-centered AI, we have suggested that explanations are not just technical add-ons but integral to building
trust, enabling informed use, and fostering collaboration. Explanations delivered through conversation
can be tailored to user needs, ofered proactively to prevent confusion, and shaped through dialogue to
support joint sense-making. Additionally, explanations enable interventions, allowing users to customize
and adapt the conversational agent to their own specific needs [32, 29].</p>
      <p>Realizing this vision will not be without difficulties. Research is still needed to develop evaluation
methods that capture calibrated trust in conversational contexts, to design adaptive explanations that
respect users’ cognitive capacities, and to ensure technical integration with web architectures. Equally
important are the ethical and regulatory dimensions: as conversational systems mediate access to public
information, they must remain transparent, accountable, and aligned with societal values.</p>
      <p>Despite these challenges, the potential benefits are considerable. If designed with care, chatbots can
move beyond their current role as information retrievers and become genuine partners in interaction,
supporting citizens in accessing, understanding, and acting upon online resources. In doing so, they
can contribute directly to the goals of digital inclusion, ensuring that participation in the digital sphere
is not a privilege for the few but a right accessible to all.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The research of Andrea Esposito and Antonio Piccinno is supported by the Italian Ministry of University
and Research (MUR) and by the European Union - NextGenerationEU, under grant PRIN 2022 PNRR
“PROTECT: imPROving ciTizEn inClusiveness Through Conversational AI” (Grant P2022JJPBY) – CUP:
H53D23008150001. The research of Rosa Lanzilotti is supported by the co-funding of the European Union
- Next Generation EU: NRRP Initiative, Mission 4, Component 2, Investment 1.3 – Partnerships extended
to universities, research centers, companies, and research D.D. MUR n. 341 del 15.03.2022 – Next
Generation EU (PE0000013 – “Future Artificial Intelligence Research – FAIR” - CUP: H97G22000210007).</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <collab>European Parliament, Council of the European Union</collab>,
          <article-title>Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services</article-title>,
          <year>2019</year>.
          URL: https://eur-lex.europa.eu/eli/dir/2019/882/oj/eng.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <collab>European Commission</collab>,
          <source>Digital Decade Country Report 2023 - Italy</source>, Digital Decade Report 2023,
          <year>2023</year>.
          URL: https://digital-strategy.ec.europa.eu/en/library/country-reports-digital-decade-report-2023.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><given-names>M.</given-names> <surname>Matera</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Piro</surname></string-name>,
          <string-name><given-names>E.</given-names> <surname>Pucci</surname></string-name>,
          <string-name><given-names>M. F.</given-names> <surname>Costabile</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Lanzilotti</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Piccinno</surname></string-name>,
          <article-title>Improving Citizen Inclusivity through Conversational AI. The PROTECT Approach</article-title>,
          in: F. Falchi, F. Giannotti, A. Monreale, C. Boldrini, S. Rinzivillo, S. Colantonio (Eds.),
          <source>Proceedings of the Italia Intelligenza Artificiale - Thematic Workshops</source>,
          volume <volume>3486</volume> of CEUR Workshop Proceedings, CEUR, Pisa, Italy,
          <year>2023</year>, pp. <fpage>261</fpage>-<lpage>266</lpage>.
          URL: https://ceur-ws.org/Vol-3486/28.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name><given-names>G.</given-names> <surname>Fischer</surname></string-name>,
          <article-title>Adaptive and Adaptable Systems: Differentiating and Integrating AI and EUD</article-title>,
          in: L. D. Spano, A. Schmidt, C. Santoro, S. Stumpf (Eds.),
          <source>End-User Development</source>, volume <volume>13917</volume>,
          Springer Nature Switzerland, Cham,
          <year>2023</year>, pp. <fpage>3</fpage>-<lpage>18</lpage>.
          URL: https://link.springer.com/10.1007/978-3-031-34433-6_1. doi:10.1007/978-3-031-34433-6_1.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><given-names>Y.</given-names> <surname>Deng</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Liao</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Lei</surname></string-name>,
          <string-name><given-names>G. H.</given-names> <surname>Yang</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Lam</surname></string-name>,
          <string-name><given-names>T.-S.</given-names> <surname>Chua</surname></string-name>,
          <article-title>Proactive Conversational AI: A Comprehensive Survey of Advancements and Opportunities</article-title>,
          <source>ACM Transactions on Information Systems</source> <volume>43</volume> (<year>2025</year>) <fpage>1</fpage>-<lpage>45</lpage>.
          URL: https://dl.acm.org/doi/10.1145/3715097. doi:10.1145/3715097.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><given-names>S. S.</given-names> <surname>Grigsby</surname></string-name>,
          <article-title>Artificial Intelligence for Advanced Human-Machine Symbiosis</article-title>,
          in: D. D. Schmorrow, C. M. Fidopiastis (Eds.),
          <source>Augmented Cognition: Intelligent Technologies</source>,
          volume <volume>10915</volume> of Lecture Notes in Computer Science,
          Springer International Publishing, Cham,
          <year>2018</year>, pp. <fpage>255</fpage>-<lpage>266</lpage>.
          URL: https://link.springer.com/10.1007/978-3-319-91470-1_22. doi:10.1007/978-3-319-91470-1_22.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name><given-names>R.</given-names> <surname>Guidotti</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Monreale</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Ruggieri</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Turini</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Giannotti</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Pedreschi</surname></string-name>,
          <article-title>A Survey of Methods for Explaining Black Box Models</article-title>,
          <source>ACM Computing Surveys</source> <volume>51</volume> (<year>2019</year>) <fpage>1</fpage>-<lpage>42</lpage>.
          URL: https://dl.acm.org/doi/10.1145/3236009. doi:10.1145/3236009.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name><given-names>E.</given-names> <surname>Cappuccio</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Esposito</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Greco</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Desolda</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Lanzilotti</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Rinzivillo</surname></string-name>,
          <article-title>Explanation User Interfaces: A Systematic Literature Review</article-title>,
          submitted to an international journal, preprint available on arXiv,
          <year>2025</year>.
          URL: https://arxiv.org/abs/2505.20085. doi:10.48550/ARXIV.2505.20085.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name><given-names>X.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Yin</surname></string-name>,
          <article-title>Effects of Explanations in AI-Assisted Decision Making: Principles and Comparisons</article-title>,
          <source>ACM Transactions on Interactive Intelligent Systems</source> <volume>12</volume> (<year>2022</year>) <fpage>1</fpage>-<lpage>36</lpage>.
          URL: https://dl.acm.org/doi/10.1145/3519266. doi:10.1145/3519266.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name><given-names>N.</given-names> <surname>Scharowski</surname></string-name>,
          <string-name><given-names>S. A. C.</given-names> <surname>Perrig</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Svab</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Opwis</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Brühlmann</surname></string-name>,
          <article-title>Exploring the effects of human-centered AI explanations on trust and reliance</article-title>,
          <source>Frontiers in Computer Science</source> <volume>5</volume> (<year>2023</year>) 1151150.
          URL: https://www.frontiersin.org/articles/10.3389/fcomp.2023.1151150/full. doi:10.3389/fcomp.2023.1151150.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name><given-names>G.</given-names> <surname>Desolda</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Esposito</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Lanzilotti</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Piccinno</surname></string-name>,
          <string-name><given-names>M. F.</given-names> <surname>Costabile</surname></string-name>,
          <article-title>From human-centered to symbiotic artificial intelligence: A focus on medical applications</article-title>,
          <source>Multimedia Tools and Applications</source> <volume>84</volume> (<year>2024</year>) <fpage>32109</fpage>-<lpage>32150</lpage>.
          URL: https://rdcu.be/d1RF4. doi:10.1007/s11042-024-20414-5.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name><given-names>W.</given-names> <surname>Xu</surname></string-name>,
          <article-title>Toward Human-Centered AI: A Perspective from Human-Computer Interaction</article-title>,
          <source>Interactions</source> <volume>26</volume> (<year>2019</year>) <fpage>42</fpage>-<lpage>46</lpage>.
          URL: https://dl.acm.org/doi/10.1145/3328485. doi:10.1145/3328485.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name><given-names>B.</given-names> <surname>Shneiderman</surname></string-name>,
          <source>Human-Centered AI</source>, 1st ed., Oxford University Press, Oxford,
          <year>2022</year>.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name><given-names>U.</given-names> <surname>Ehsan</surname></string-name>,
          <string-name><given-names>M. O.</given-names> <surname>Riedl</surname></string-name>,
          <article-title>Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach</article-title>,
          in: C. Stephanidis, M. Kurosu, H. Degen, L. Reinerman-Jones (Eds.),
          <source>HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence</source>,
          volume <volume>12424</volume>, Springer International Publishing, Cham,
          <year>2020</year>, pp. <fpage>449</fpage>-<lpage>466</lpage>.
          URL: https://link.springer.com/10.1007/978-3-030-60117-1_33. doi:10.1007/978-3-030-60117-1_33.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name><given-names>T. A.</given-names> <surname>Schoonderwoerd</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Jorritsma</surname></string-name>,
          <string-name><given-names>M. A.</given-names> <surname>Neerincx</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Van Den Bosch</surname></string-name>,
          <article-title>Human-centered XAI: Developing design patterns for explanations of clinical decision support systems</article-title>,
          <source>International Journal of Human-Computer Studies</source> <volume>154</volume> (<year>2021</year>) 102684.
          URL: https://linkinghub.elsevier.com/retrieve/pii/S1071581921001026. doi:10.1016/j.ijhcs.2021.102684.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name><given-names>D. C.</given-names> <surname>Hernandez-Bocanegra</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Ziegler</surname></string-name>,
          <article-title>Effects of Interactivity and Presentation on Review-Based Explanations for Recommendations</article-title>,
          in: C. Ardito, R. Lanzilotti, A. Malizia, H. Petrie, A. Piccinno, G. Desolda, K. Inkpen (Eds.),
          <source>Human-Computer Interaction - INTERACT 2021</source>,
          volume <volume>12933</volume>, Springer International Publishing, Cham,
          <year>2021</year>, pp. <fpage>597</fpage>-<lpage>618</lpage>.
          URL: https://link.springer.com/10.1007/978-3-030-85616-8_35. doi:10.1007/978-3-030-85616-8_35.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name><given-names>F. M.</given-names> <surname>Cau</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Hauptmann</surname></string-name>,
          <string-name><given-names>L. D.</given-names> <surname>Spano</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Tintarev</surname></string-name>,
          <article-title>Effects of AI and logic-style explanations on users' decisions under different levels of uncertainty</article-title>,
          <source>ACM Transactions on Interactive Intelligent Systems</source> (<year>2023</year>).
          URL: https://doi.org/10.1145/3588320. doi:10.1145/3588320.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name><given-names>B.</given-names> <surname>Leichtmann</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Humer</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Hinterreiter</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Streit</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Mara</surname></string-name>,
          <article-title>Effects of Explainable Artificial Intelligence on trust and human behavior in a high-risk decision task</article-title>,
          <source>Computers in Human Behavior</source> <volume>139</volume> (<year>2023</year>) 107539.
          URL: https://linkinghub.elsevier.com/retrieve/pii/S0747563222003594. doi:10.1016/j.chb.2022.107539.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name><given-names>G.</given-names> <surname>Kostopoulos</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Davrazos</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Kotsiantis</surname></string-name>,
          <article-title>Explainable Artificial Intelligence-Based Decision Support Systems: A Recent Review</article-title>,
          <source>Electronics</source> <volume>13</volume> (<year>2024</year>) 2842.
          URL: https://www.mdpi.com/2079-9292/13/14/2842. doi:10.3390/electronics13142842.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name><given-names>Z.</given-names> <surname>Atf</surname></string-name>,
          <string-name><given-names>P. R.</given-names> <surname>Lewis</surname></string-name>,
          <article-title>Is Trust Correlated With Explainability in AI? A Meta-Analysis</article-title>,
          <source>IEEE Transactions on Technology and Society</source> (<year>2025</year>) <fpage>1</fpage>-<lpage>8</lpage>.
          URL: https://ieeexplore.ieee.org/document/10964393/. doi:10.1109/TTS.2025.3558448.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Baron</surname>
          </string-name>
          , Trust, Explainability and
          <string-name>
            <surname>AI</surname>
          </string-name>
          ,
          <source>Philosophy &amp; Technology</source>
          <volume>38</volume>
          (
          <year>2025</year>
          )
          <article-title>4</article-title>
          . URL: https://link. springer.
          <source>com/10.1007/s13347-024-00837-6</source>
          . doi:
          <volume>10</volume>
          .1007/s13347- 024- 00837- 6.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name><given-names>A.</given-names> <surname>Khurana</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Alamzadeh</surname></string-name>,
          <string-name><given-names>P. K.</given-names> <surname>Chilana</surname></string-name>,
          <article-title>ChatrEx: Designing Explainable Chatbot Interfaces for Enhancing Usefulness, Transparency, and Trust</article-title>,
          in: <source>2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)</source>,
          IEEE, St. Louis, MO, USA,
          <year>2021</year>, pp. <fpage>1</fpage>-<lpage>11</lpage>.
          URL: https://ieeexplore.ieee.org/document/9576440/. doi:10.1109/VL/HCC51201.2021.9576440.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name><given-names>R. J.</given-names> <surname>Moore</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Arar</surname></string-name>,
          <source>Conversational UX Design: A Practitioner's Guide to the Natural Conversation Framework</source>,
          Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name><given-names>R. R.</given-names> <surname>Selvaraju</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Cogswell</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Das</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Vedantam</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Parikh</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Batra</surname></string-name>,
          <article-title>Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization</article-title>,
          <source>International Journal of Computer Vision</source> <volume>128</volume> (<year>2020</year>) <fpage>336</fpage>-<lpage>359</lpage>.
          URL: http://link.springer.com/10.1007/s11263-019-01228-7. doi:10.1007/s11263-019-01228-7.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name><given-names>P.</given-names> <surname>Fernandes</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Treviso</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Pruthi</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Martins</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Neubig</surname></string-name>,
          <article-title>Learning to scaffold: Optimizing model explanations for teaching</article-title>,
          in: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh (Eds.),
          <source>Advances in Neural Information Processing Systems</source>,
          volume <volume>35</volume>, Curran Associates, Inc.,
          <year>2022</year>, pp. <fpage>36108</fpage>-<lpage>36122</lpage>.
          URL: https://proceedings.neurips.cc/paper_files/paper/2022/file/ea64883d500d31738cd39eb49a748fa4-Paper-Conference.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name><given-names>D.</given-names> <surname>Mindlin</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Beer</surname></string-name>,
          <string-name><given-names>L. N.</given-names> <surname>Sieger</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Heindorf</surname></string-name>,
          <string-name><given-names>E.</given-names> <surname>Esposito</surname></string-name>,
          <string-name><given-names>A.-C.</given-names> <surname>Ngonga Ngomo</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Cimiano</surname></string-name>,
          <article-title>Beyond one-shot explanations: A systematic literature review of dialogue-based xAI approaches</article-title>,
          <source>Artificial Intelligence Review</source> <volume>58</volume> (<year>2025</year>) 81.
          URL: https://link.springer.com/10.1007/s10462-024-11007-7. doi:10.1007/s10462-024-11007-7.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name><given-names>G.</given-names> <surname>Desolda</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Dimauro</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Esposito</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Lanzilotti</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Matera</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Zancanaro</surname></string-name>,
          A Human-AI
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>