<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>T-REX: A Framework to Build Trustworthy Recommenders of Evidence Explanation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Fedele</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mattia Franchi de' Cavalieri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cristiano Landi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Clara Punzi</string-name>
          <email>clara.punzi@sns.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Tramacere</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Excellence in Robotics &amp; AI, Scuola Superiore Sant'Anna</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>KDD Lab, ISTI-CNR</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>LIDER-Lab, Scuola Superiore Sant'Anna</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Scuola Normale Superiore</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Scuola Superiore Sant'Anna</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>The BioRobotics Institute, Scuola Superiore Sant'Anna</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff6">
          <label>6</label>
          <institution>University of Pisa</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>The initial enthusiasm for eXplainable Artificial Intelligence (XAI) has been tempered by concerns about the effectiveness and reliability of its explanations. Studies show that some explanations are no more reliable than random ones. Tim Miller suggests a paradigm shift in XAI to address issues of cognitive biases, such as automation bias, which can affect decision-making processes. He advocates for hypothesis-driven support systems to align AI explanations with human cognitive processes. Addressing these issues, we propose the Trustworthy Recommenders of Evidence eXplanations (T-REX) framework. This approach aims to enhance XAI by moving from statistical explanations to those based on trustworthy scientific evidence, enabling AI systems to tackle complex tasks more effectively.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-Machine Interaction</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>Trustworthy AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Following the initial boom in eXplainable Artificial Intelligence (XAI), the scientific community
began questioning the effectiveness, reliability, and social impact of such explanations. In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
the authors experimentally demonstrate that the faithfulness and stability of some explanations
can be comparable to or even worse than random explanations. Additionally, in [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], Miller
advocates for a paradigm shift in XAI to address the concerns about the reliability of automated
systems. These systems can be compromised by cognitive biases, such as over-reliance, where
users place excessive trust in system recommendations, or under-reliance, where users distrust
the system’s outputs. Additionally, issues with reliability may arise due to misalignment between
the AI system explanations and the cognitive processes used by humans in decision-making,
which Miller suggests handling by developing hypothesis-driven support systems [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Critical domains require human-AI collaboration, where appropriate reliance is key to the
successful use of the technology [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In recent years, we witnessed the rise of Large Language
Models (LLMs), which are rapidly revolutionizing our society by enabling new types of
humanmachine interactions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>While LLMs are at the forefront of research, they can produce hallucinations (i.e., incorrect outputs), especially when queried about information that is not included in the training set [<xref ref-type="bibr" rid="ref6">6</xref>]. In such cases, it is critical that humans interacting with LLMs understand their limitations in order not to fall into over-reliance in the (not-so-rare) case of incorrect information.</p>
      <p>In order to foster such a synergistic human-machine collaboration, we advocate for an evidence-driven XAI methodology, which builds upon the strengths of explainability techniques and, at the same time, mitigates their potential lack of persuasiveness or informative content. Leveraging the authors' diverse multidisciplinary expertise (i.e., computer science, biomedical engineering, and law), we propose a framework to build Trustworthy Recommenders of Evidence eXplanations (T-REX), advancing the XAI field from statistical-based explanations to trustworthy, community-approved, scientific evidence-based explanations. For this purpose, we suggest exploiting reliable and traceable sources, such as scientific literature or World Health Organization (WHO) publications, as a privileged knowledge base for the system, which additionally provides a valuable layer of transparency and accountability, especially in high-risk applications.</p>
      <p>Additionally, implementing explainability measures will not only improve the appropriate utilization of the AI system by human actors<sup>1</sup> to support their decision-making but also boost the transparency and human oversight of the entire AI system, as required by the AI Act (EU Reg. 1689/2024), specifically in Art. 13 (1) and (3)(b)(iv), and Art. 14 (4)(c).</p>
      <p>Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org).</p>
    </sec>
    <sec id="sec-2">
      <title>2. T-REX Framework</title>
      <p>
        T-REX is a human-AI hybrid decision-making system [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] where the human actor synergistically
interacts with the machine; the framework's objective is to support the human-AI decision process
by means of community-approved evidence, such as scientific literature or WHO guidelines. Figure 1
provides a graphical illustration of the framework in a medical use case scenario. Specifically,
T-REX aims at moving from an aseptic &lt;ML outcome, Explanation&gt; bundle towards an approach
that cross-validates and enriches this pair with reputable sources authored by domain experts,
and where the human-in-the-loop interaction facilitates the exploration of various hypotheses.
In order to satisfy the needs of the human actors and help them analyze doubts and possibilities,
such hypotheses may be validated through multiple interactions with the machine. To further
encourage a critical evaluation of the hypotheses, T-REX is designed to provide humans not only
with evidence in support of them but also with evidence that contradicts them (i.e., Supportive and
Contrastive Evidence).
      </p>
      <p>The T-REX framework involves four main components: (i) a human actor, (ii) an ML classifier <italic>f</italic>, (iii) an explainability technique <italic>g</italic>, and (iv) an evidence retrieval and evidence classification component <italic>r</italic>. Specifically, for a given query input <italic>x</italic>, <italic>f</italic> outputs a set of predictions <italic>ŷ</italic> along with their confidence scores <italic>c</italic> (e.g., the predicted class probabilities), formally (<italic>c</italic>, <italic>ŷ</italic>) = <italic>f</italic>(<italic>x</italic>). After that, the explainer <italic>g</italic> returns an explanation <italic>e</italic> = <italic>g</italic>(<italic>x</italic>, <italic>ŷ<sub>i</sub></italic>), where <italic>ŷ<sub>i</sub></italic> is the prediction selected by the human actor, a choice that could be based on their hypotheses only or on the model confidence. Finally, a composite function <italic>r</italic> employs the explanation <italic>e</italic> to construct a query and retrieve relevant evidence from reputable sources. It then classifies the retrieved documents as supportive or contrasting with respect to the hypothesis under analysis <italic>ŷ<sub>i</sub></italic>. This process and the retrieved evidence support the human actor in making the final decision.</p>
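      <p>As a concrete sketch, the interaction round described above can be written as a function gluing the four components together. Everything below is purely illustrative: the function and stub names are our own assumptions, not part of the T-REX specification.</p>

```python
# Hypothetical sketch of one T-REX interaction round: predict, let the human
# actor pick a hypothesis, explain it, then retrieve and stance-label evidence.
from dataclasses import dataclass

@dataclass
class Evidence:
    doc_id: str
    stance: str  # "supportive" or "contrastive"

def t_rex_step(f, g, retrieve, classify_stance, x, select):
    confidences = f(x)            # (c, y_hat) = f(x): label -> confidence score
    y_i = select(confidences)     # prediction chosen by the human actor
    e = g(x, y_i)                 # e = g(x, y_i): feature importance scores
    docs = retrieve(e, y_i)       # query the knowledge base with e
    return y_i, e, [Evidence(d, classify_stance(d, y_i)) for d in docs]

# Toy stand-ins for the four components (illustrative only)
f = lambda x: {"diabetes": 0.81, "hepatitis": 0.19}
g = lambda x, y: {"glucose": 0.7, "bmi": 0.3}
retrieve = lambda e, y: ["who_report_12", "trial_2020"]
stance = lambda d, y: "supportive" if d.startswith("who") else "contrastive"

y_i, e, ev = t_rex_step(f, g, retrieve, stance,
                        x={"glucose": 148},
                        select=lambda c: max(c, key=c.get))
```

      <p>Here the human actor is modelled only through the <italic>select</italic> callback; a real deployment would replace every stub with the concrete classifier, explainer, and retrieval backends.</p>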
      <p><bold>Medical Use Case Scenario.</bold> The T-REX framework has the potential to make a significant impact in the medical field by offering an innovative system for evidence-based clinical decision-making in diagnostics. The process starts when a human actor inputs patient data <italic>x</italic> for analysis by the machine. The first algorithmic component <italic>f</italic> performs a standard classification task, estimating the probability distribution over a set of possible outcomes. Although T-REX is model-agnostic with respect to the choice of <italic>f</italic>, in this use case we choose the model proposed in [<xref ref-type="bibr" rid="ref8">8</xref>], where the authors developed a model for detecting chronic diseases. They trained an artificial Neural Network (NN) classifier on a different tabular dataset for each condition: breast cancer, diabetes, heart attack, hepatitis, and kidney disease.</p>
      <p><sup>1</sup>Defined as deployers in the AI Act.</p>
      <p>
        Let us suppose the doctor chooses to investigate a specific prediction <italic>ŷ<sub>i</sub></italic>. T-REX generates a statistical-based explanation <italic>e</italic> for <italic>ŷ<sub>i</sub></italic> by supplying the importance scores of the features underlying the predictor's result. This explanation is generated by the explainer component <italic>g</italic>, which could be implemented, for instance, using the well-established SHapley Additive exPlanations (SHAP) technique [<xref ref-type="bibr" rid="ref9 ref10">9, 10</xref>]. Notably, different explanation methods could be chosen depending on the dataset and task of the specific use case scenario.
      </p>
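      <p>For intuition, SHAP's importance scores are (approximations of) Shapley values: each feature's weighted average marginal contribution over all coalitions of the remaining features. The dependency-free sketch below is our own toy example, not the paper's implementation; it computes the values exactly for a tiny linear model.</p>

```python
# Exact Shapley values by brute-force enumeration of feature coalitions.
# SHAP approximates these efficiently; brute force is fine for 3 features.
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for j in range(n):
        others = [k for k in range(n) if k != j]
        for r in range(n):
            for S in combinations(others, r):
                # coalition weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                def value(with_j):
                    # features in the coalition keep their real value,
                    # the rest are replaced by the baseline
                    members = set(S) | ({j} if with_j else set())
                    return predict([x[k] if k in members else baseline[k]
                                    for k in range(n)])
                phi[j] += w * (value(True) - value(False))
    return phi

# Linear toy model: here phi_j reduces to w_j * (x_j - baseline_j)
predict = lambda v: 2.0 * v[0] + 1.0 * v[1] - 3.0 * v[2]
phi = shapley_values(predict, x=[1.0, 2.0, 0.5], baseline=[0.0, 0.0, 0.0])
```

      <p>By the efficiency property, the scores sum to the gap between the prediction for the patient and the baseline prediction, which is what makes them readable as per-feature importance.</p>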
      <p>
        The final component of the framework, represented as the function <italic>r</italic>, is composed of two modules. The first module exploits the statistical-based explanation <italic>e</italic> to query the knowledge base and retrieve the documents relevant to the prediction <italic>ŷ<sub>i</sub></italic> chosen by the doctor; such retrieval is a well-known task in the computer science literature, known as semantic search [<xref ref-type="bibr" rid="ref11 ref12 ref13 ref14">11, 12, 13, 14</xref>]. The knowledge base should be composed of reliable, accountable, and trustworthy documents, such as publications by the WHO and scientific literature. This is a crucial part of the proposed framework: relying on such documents enables the doctor to switch from data to medical science, i.e., to understand the machine's decision on the basis of medical evidence rather than on the basis of some statistical distribution in a dataset that could contain errors. The second module of <italic>r</italic> uses <italic>ŷ<sub>i</sub></italic> to group the retrieved documents as either supporting or contrasting the hypothesis under analysis, specifically the detection of the disease <italic>ŷ<sub>i</sub></italic>. A potential solution involves leveraging state-of-the-art NLP techniques, such as sentiment analysis, where the goal is to determine whether the sentiment expressed in the text indicates a favorable or unfavorable stance toward a specific subject.
      </p>
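      <p>The two modules can be pictured as nearest-neighbour search over document embeddings followed by a stance labeller. The snippet below is a deliberately naive sketch under our own assumptions (3-dimensional toy "embeddings", a keyword-based stance rule); a real system would use a sentence encoder and a trained stance or sentiment classifier.</p>

```python
# Module 1: semantic search = rank documents by cosine similarity to a query
# embedding. Module 2: naive lexicon-based stance labelling per disease.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def semantic_search(query_vec, corpus, k=2):
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return ranked[:k]

def stance(doc_text, disease):
    # "contrastive" if the document denies the association, else "supportive"
    negations = ("no evidence", "not associated", "rules out")
    text = doc_text.lower()
    if disease not in text:
        return "neutral"
    return "contrastive" if any(n in text for n in negations) else "supportive"

corpus = [
    {"text": "High glucose is a marker of diabetes.", "vec": [0.9, 0.1, 0.0]},
    {"text": "No evidence links BMI alone to diabetes.", "vec": [0.7, 0.3, 0.0]},
    {"text": "Hand washing prevents infection.", "vec": [0.0, 0.1, 0.9]},
]
hits = semantic_search([1.0, 0.0, 0.0], corpus, k=2)   # glucose-like query
labels = [stance(h["text"], "diabetes") for h in hits]
```

      <p>Splitting retrieval from stance labelling mirrors the two modules of <italic>r</italic>: relevance is judged against the explanation-derived query, while support versus contrast is judged against the hypothesis.</p>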
      <p>Overall, the T-REX framework encompasses multiple stages of human-AI interaction. First of all, the human actors can choose which potential disease prediction to investigate to test their hypothesis. Furthermore, they can explore different prediction paths and trigger a human-feedback loop by changing the hypothesis under analysis or by tweaking the query derived from the explanation to incorporate additional knowledge.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Conclusion</title>
      <p>
        This paper introduces T-REX, a framework that facilitates a more transparent and trustworthy decision-making process in critical applications, such as healthcare, where the cost of errors is high. Combining traditional statistical-based explanations with evidence retrieved from trusted sources enables human decision-makers to critically evaluate both supportive and contrasting evidence related to AI predictions, potentially mitigating the risk of cognitive biases. Additionally, AI systems using the T-REX framework should facilitate compliance with the legal requirements regarding transparency outlined in the AI Act. The overall interface of the system will benefit from further investigation; currently, we are considering a traditional web-like interface, as in [<xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>], to avoid the risk of hallucinations that could occur with modern Retrieval-Augmented Generation LLM interfaces [<xref ref-type="bibr" rid="ref5 ref13">5, 13</xref>]. We believe that the combination of explainability and evidence-based reasoning offered by T-REX represents a promising direction for creating more reliable, trustworthy, and accountable AI systems in the future.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This work is partially supported by the European Union NextGenerationEU programme under
the funding schemes PNRR-PE-AI scheme (M4C2, investment 1.3, line on Artificial Intelligence)
FAIR (Future Artificial Intelligence Research), and “SoBigData.it” - Prot. IR0000013, Res. Infr.
G.A. 871042 SoBigData++, G.A. 761758 Humane AI, G.A. 952215 TAILOR, ERC-2018-ADG
G.A. 834756 XAI.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Krishna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pawelczyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , I. Puri,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zitnik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lakkaraju</surname>
          </string-name>
          ,
          <article-title>OpenXAI: Towards a transparent evaluation of model explanations</article-title>
          , in: NeurIPS,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explainable AI is dead, long live explainable ai!: Hypothesis-driven decision support using evaluative AI</article-title>
          , in: FAccT, ACM,
          <year>2023</year>
          , pp.
          <fpage>333</fpage>
          -
          <lpage>342</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explanation in artificial intelligence: Insights from the social sciences</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>267</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. A.</given-names>
            <surname>See</surname>
          </string-name>
          ,
          <article-title>Trust in automation: Designing for appropriate reliance</article-title>
          ,
          <source>Human Factors: The Journal of the Human Factors and Ergonomics Society</source>
          <volume>46</volume>
          (
          <year>2004</year>
          )
          <fpage>50</fpage>
          -
          <lpage>80</lpage>
          . URL: https://doi.org/10.1518/hfes.46.1.50_30392.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>A survey on retrieval-augmented text generation for large language models</article-title>
          ,
          <source>CoRR abs/2404.10981</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <article-title>LLM lies: Hallucinations are not bugs, but features as adversarial examples</article-title>
          ,
          <source>CoRR abs/2310.01469</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>Punzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pellungrini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Setzu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <article-title>Ai, meet human: Learning paradigms for hybrid decision making systems</article-title>
          ,
          <source>CoRR abs/2402.06287</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rashid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Batool</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Wasif</given-names>
            <surname>Nisar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hussain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Juneja</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kushwaha</surname>
          </string-name>
          ,
          <article-title>An augmented artificial intelligence approach for chronic diseases prediction</article-title>
          ,
          <source>Frontiers in Public Health</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>860396</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A unified approach to interpreting model predictions</article-title>
          ,
          <source>in: NIPS</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>4765</fpage>
          -
          <lpage>4774</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Allgaier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mulansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Draelos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pryss</surname>
          </string-name>
          ,
          <article-title>How does the model make predictions? a systematic literature review on the explainability power of machine learning in healthcare</article-title>
          ,
          <source>Artificial Intelligence in Medicine</source>
          <volume>143</volume>
          (
          <year>2023</year>
          )
          <fpage>102616</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H.</given-names>
            <surname>Bast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Buchhold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Haussmann</surname>
          </string-name>
          , et al.,
          <article-title>Semantic search on text and knowledge bases</article-title>
          ,
          <source>Foundations and Trends® in Information Retrieval</source>
          <volume>10</volume>
          (
          <year>2016</year>
          )
          <fpage>119</fpage>
          -
          <lpage>271</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bordawekar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Shmueli</surname>
          </string-name>
          ,
          <article-title>Using word embedding to enable semantic queries in relational databases</article-title>
          ,
          <source>in: Proceedings of the 1st workshop on data management for end-to-end machine learning</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piktus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Petroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Karpukhin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Küttler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          , W.-t. Yih,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rocktäschel</surname>
          </string-name>
          , et al.,
          <article-title>Retrieval-augmented generation for knowledge-intensive nlp tasks</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>9459</fpage>
          -
          <lpage>9474</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Douze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Guzhva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Deng</surname>
          </string-name>
          , J. Johnson, G. Szilvasy, P.-E. Mazaré,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lomeli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hosseini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jégou</surname>
          </string-name>
          ,
          <article-title>The faiss library</article-title>
          (
          <year>2024</year>
          ). arXiv:2401.08281.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Panigutti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Perotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <article-title>Doctor XAI: an ontology-based approach to black-box sequential data classification explanations</article-title>
          , in: FAT*, ACM,
          <year>2020</year>
          , pp.
          <fpage>629</fpage>
          -
          <lpage>639</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.</given-names>
            <surname>Metta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gallinari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rinzivillo</surname>
          </string-name>
          ,
          <article-title>Exemplars and counterexemplars explanations for image classifiers, targeting skin lesion labeling</article-title>
          , in: ISCC, IEEE,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>