<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Results<xref ref-type="fn" rid="fn-star">⋆</xref></article-title>
        <fn-group>
          <fn id="fn-star">
            <label>⋆</label>
            <p>The full paper was originally published in the Journal of Pathology Informatics [1].</p>
          </fn>
        </fn-group>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-5015-5498</contrib-id>
          <string-name>Fabio Giachelle</string-name>
          <email>fabio.giachelle@unipd.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-0362-5893</contrib-id>
          <string-name>Stefano Marchesin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-0877-7063</contrib-id>
          <string-name>Gianmaria Silvello</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Information Engineering, University of Padua</institution>
          ,
          <addr-line>Padua</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>In recent years, knowledge extraction approaches have been adopted to distill the medical knowledge included in clinical reports. In this regard, the Semantic Knowledge Extractor Tool (SKET) has been introduced for extracting knowledge from pathology reports, leveraging a hybrid approach that combines unsupervised rule-based techniques with pre-trained Machine Learning (ML) models. Since ML models are usually based on probabilistic/statistical approaches, their predictions cannot be easily understood, especially as concerns their underlying decision mechanisms. To explain SKET's decision-making process, we propose SKET eXplained (SKET X), a web-based system providing visual explanations in terms of the models, rules, and parameters involved in each prediction. SKET X is designed for pathologists and experts to ease the comprehension of SKET predictions, increase awareness, and improve the effectiveness of the overall knowledge extraction process according to the pathologists' feedback. To assess the learnability and usability of SKET X, we conducted a user study designed to collect useful suggestions from pathologists and domain experts to further improve the system.</p>
      </abstract>
      <kwd-group>
        <kwd>Clinical Practice</kwd>
        <kwd>Digital Pathology</kwd>
        <kwd>Expert Systems</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>Knowledge Extraction</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Visual Analytics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In recent decades, eXplainable Artificial Intelligence (XAI) approaches have gained increasing
importance in addressing the lack of interpretability and explainability of AI models relying on Machine
Learning (ML) and Deep Learning (DL) methods [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Indeed, in the medical domain, where ML-
and DL-based methods for information extraction and retrieval are gaining popularity [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ],
the transparency of models and their decision processes is essential to promote trustworthy
AI [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. In this regard, the High-Level Expert Group on Artificial Intelligence (AI HLEG), set
up by the European Commission, recently published a set of ethics guidelines for trustworthy
AI, requiring that “algorithmic processes need to be transparent and decisions explainable” [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
XAI approaches to support physicians and medical experts in the comprehension of algorithm
predictions are urgently needed due to the increasing application of AI in the medical domain,
especially for diagnostics [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In particular, explainability techniques can be useful in the
Digital Pathology (DPATH) domain, where most approaches for image analysis are
DL-based: although effective, such models can be hard for humans to comprehend
due to their black-box nature [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10, 11, 12</xref>
        ]. In this context, Pocevičiūtė et al. emphasize the
importance of understanding why a specific prediction has been made in order to trust machine
predictions exploited for diagnostic purposes [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Moreover, a recent interview study presented
by Evans et al. shows that pathologists clearly prefer visual explanations for XAI [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. As
a prominent example, HistoMapr™ is a proprietary explainability tool for DPATH designed to
support pathologists during the annotation of histology images by means of visual
explanations [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        In the DPATH domain, the Semantic Knowledge Extractor Tool (SKET) has been introduced
to extract meaningful information – i.e., concepts, entity mentions, and labels – from pathology
reports provided in natural language [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. SKET performs the knowledge extraction process and
produces weak annotations (labels) that can be used to train image classification algorithms
supporting the decision-making process in the DPATH domain [16]. However, since SKET
adopts a hybrid approach that combines unsupervised rule-based techniques with pre-trained
ML models, the rationale behind SKET’s predictions may not be obvious to
humans, even though transparent models are crucial in the medical domain to trust
their results. To explain SKET’s results, we designed and developed SKET eXplained (SKET X),
a web-based system that exploits Visual Analytics (VA) techniques to provide pathologists
and experts with visual analyses and explanations of SKET’s underlying decision
process.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. SKET X</title>
      <p>
        SKET X is a web-based tool that aims to visually explain SKET predictions in order to ease their
comprehension and support pathologists and domain experts in understanding the underlying
machine decision mechanism [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. SKET X is available at http://w3id.org/sketx (access provided with credentials demo/demo) and provides
a visual interface to interact with an online instance of SKET; it can be used to continuously
refine SKET parameters and rules in a human-in-the-loop fashion, so as to progressively improve
SKET’s effectiveness. SKET X allows users to inspect the rationale employed to
determine the outputs of the Named Entity Recognition and Linking (NER+L) pipeline, including the
concepts and the corresponding mentions identified by SKET in the provided clinical reports.
Through SKET X, users can gain useful insights into SKET’s knowledge extraction process and
the resulting outputs. Specifically, users can visually identify the different components (e.g.,
models, rules, and parameters) activated during the knowledge extraction process. Thereby,
users can easily understand why a certain output has been generated for the given clinical
reports. SKET X exploits VA techniques to make SKET’s outputs visually intuitive via interactive
interfaces. SKET X allows users to execute SKET several times as independent pipelines
with different models, parameters, and data. Users can then easily compare SKET’s
results in the dedicated Compare tab, which displays the results of the two pipelines
under comparison in two vertical panels arranged side by side. Moreover, users
can compare variations of the same pipeline executed with different configuration
parameters to assess their impact on the overall effectiveness of the knowledge extraction
process. As a result, pathologists’ feedback can be exploited to refine the rules considered in
predictions, improving the effectiveness of the knowledge extraction process.
      </p>
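      <p>As a rough illustration of the kind of traceable, rule-based extraction step whose outcome SKET X visualizes, the following Python sketch records which rules fire for a given report. The rule names, vocabulary, and report text here are purely illustrative assumptions, not SKET's actual rule set:</p>

```python
# Minimal sketch of a traceable rule-based extraction step: each rule that
# fires is recorded in a trace, so a visual tool can show *why* a concept
# was predicted. Rule names and vocabulary are illustrative only.

VOCAB = {"biopsy": "Biopsy of Colon", "adenoma": "Colon Adenoma"}

RULES = {
    "exact-mention": lambda text: [c for m, c in VOCAB.items() if m in text],
    "dysplasia-grade": lambda text: (
        ["Mild Colon Dysplasia"] if "mild dysplasia" in text else []
    ),
}

def extract(report: str):
    """Return (concepts, trace): the concepts found and the rules that fired."""
    text = report.lower()
    concepts, trace = set(), []
    for name, rule in RULES.items():
        found = rule(text)
        if found:  # only fired rules enter the trace
            concepts.update(found)
            trace.append({"rule": name, "concepts": found})
    return sorted(concepts), trace

concepts, trace = extract("Colon biopsy showing mild dysplasia.")
```

      <p>Given such a trace, a user can see not only the predicted concepts but also which rule produced each one, which is the insight the Analytics tab surfaces visually.</p>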
      <p>Figure 1 shows the main interface of SKET X, which is organized into six tabs: Overview, Input,
Output, Params, Analytics, and Compare. The tab currently active in the figure is Analytics, which
allows the user to analyze the outputs of the current phase (e.g., Entity Linking (EL)) of the
knowledge extraction process. The interface is divided into three major parts: (A) a report section
displaying information on the current report as well as controls to navigate among the reports;
(B) a Sankey diagram presenting SKET’s rules for the current phase and highlighting the
subset of rules activated to determine the current outputs; (C) a section presenting SKET’s
outputs. We can observe that SKET identifies several concepts, including Biopsy of Colon, Mild
Colon Dysplasia, and Moderate Colon Dysplasia, which have been identified using the SKET rules
highlighted in the Sankey diagram.</p>
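      <p>A Sankey diagram of rule activations is, at its core, a list of weighted links between nodes. As a sketch, the following Python snippet turns a (hypothetical) rule-activation trace into source/target/value links of the kind a Sankey renderer consumes; the trace contents and the internal data format of SKET X are assumptions for illustration:</p>

```python
# Sketch: converting a rule-activation trace into Sankey-style links
# (source, target, value). The trace below is a made-up example, not
# SKET X's internal format.

trace = [
    {"rule": "exact-mention", "concepts": ["Biopsy of Colon"]},
    {"rule": "dysplasia-grade",
     "concepts": ["Mild Colon Dysplasia", "Moderate Colon Dysplasia"]},
]

def sankey_links(report_id: str, trace):
    """One link report->rule per fired rule, one link rule->concept each."""
    links = []
    for step in trace:
        # width of the report->rule link = number of concepts the rule produced
        links.append((report_id, step["rule"], len(step["concepts"])))
        for concept in step["concepts"]:
            links.append((step["rule"], concept, 1))
    return links

links = sankey_links("report-1", trace)
```

      <p>Feeding such links to any Sankey renderer highlights exactly the subset of rules that contributed to the current outputs, mirroring panel (B) of the interface.</p>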
      <sec id="sec-2-1">
        <title>User study</title>
        <p>To evaluate SKET X in terms of learnability, usability, and user satisfaction, we conducted a user
study to collect feedback from pathologists and experts and improve SKET X accordingly. Overall,
nine participants were involved in the user study. We provided the participants with anonymized
credentials to access SKET X and a private link to an online form where they could answer
predefined questions and provide feedback. The user study was organized into two parts,
designed to measure the learnability and usability of SKET X, respectively. The learnability part
focused on assessing the users’ confidence and awareness in accomplishing the following
predefined tasks with SKET X: (i) analyzing the mentions/concepts identified by SKET; (ii)
analyzing the labels (weak annotations) produced by SKET; and (iii) answering questions about
the results produced by SKET. To guide and support the users in the analysis process, we
provided two explanatory videos. The answers collected from each user were then compared with
the correct ones to assess the user’s proficiency with SKET X in the analysis and interpretation
of SKET’s results. Secondly, we evaluated SKET X in terms of usability and user satisfaction
using the System Usability Scale (SUS), which is considered an industry standard for assessing
systems’ usability [17]. In this regard, we computed the average SUS score, from which it emerges that
the usability of SKET X is quite good, with a score equal to 66.7.</p>
      </sec>
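      <p>For reference, the standard SUS scoring procedure (Brooke, 1996) maps ten 1-5 responses to a 0-100 score: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5. The sketch below applies it to made-up responses, not the study's actual data:</p>

```python
# Standard SUS scoring: ten 1-5 responses per participant; odd-numbered
# items score (response - 1), even-numbered items (5 - response); the sum
# is scaled by 2.5 onto 0-100. Responses below are illustrative only.

def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... = items 1, 3, ...
        for i, r in enumerate(responses)
    )
    return total * 2.5

participants = [
    [4, 2, 4, 2, 4, 2, 4, 2, 4, 2],  # fairly positive participant
    [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],  # neutral participant
]
average = sum(sus_score(p) for p in participants) / len(participants)
```

      <p>Averaging the per-participant scores in this way yields a study-level usability figure such as the 66.7 reported above.</p>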
    </sec>
    <sec id="sec-3">
      <title>3. Conclusions</title>
      <p>We introduced SKET X, a web-based system designed to explain the outputs generated by SKET
through visual interactive interfaces. SKET X aims at simplifying the comprehension of SKET
outputs as well as supporting pathologists and domain experts in their analysis, thus increasing
awareness and understanding of the machine decision process. SKET X exploits VA techniques
to explain why a specific SKET prediction has been made and what roles its different
components play in the knowledge extraction process. Hence, SKET X allows expert users
to not only comprehend SKET results but also get valuable insights concerning the knowledge
extraction process. Moreover, we assessed SKET X in terms of usability and learnability, by
conducting a user study with digital pathology experts. To measure SKET X’s learnability,
we asked the participants to complete a sequence of analysis tasks using SKET X and
then answer a multiple-choice questionnaire. Thereby, we evaluated the
number of tasks completed correctly by each user and thus the degree of understanding. From
the answers and the explanations collected, we observed that almost all the participants correctly
understood how to use SKET X to explain SKET results. Moreover, we asked the participants
to answer a set of multiple-choice questions to appraise user satisfaction and system usability
according to the SUS scale. Finally, we collected useful suggestions from pathologists and
other experts to identify the key necessities and foster further advancements in the design of
transparent and explainable models/algorithms for DPATH.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>The work was supported by the ExaMode project as part of the EU H2020 program under Grant
Agreement no. 825292.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Marchesin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giachelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Marini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Atzori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Boytcheva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Buttafuoco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ciompi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Di Nunzio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fraggetta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Irrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Primov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vatrano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Silvello</surname>
          </string-name>
          ,
          <article-title>Empowering digital pathology applications through explainable knowledge extraction tools</article-title>
          ,
          <source>Journal of Pathology Informatics</source>
          (
          <year>2022</year>
          )
          <fpage>100139</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S2153353922007337. doi:10.1016/j.jpi.2022.100139.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>U.</given-names>
            <surname>Kamath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <source>Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning</source>
          , Springer,
          <year>2021</year>
          . doi:10.1007/978-3-030-83356-5.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Marchesin</surname>
          </string-name>
          ,
          <string-name>
            <surname>G.</surname>
          </string-name>
          <article-title>Silvello, TBGA: a large-scale gene-disease association dataset for biomedical relation extraction</article-title>
          ,
          <source>BMC Bioinform</source>
          .
          <volume>23</volume>
          (
          <year>2022</year>
          )
          <article-title>111</article-title>
          . URL: https://doi.org/10.1186/ s12859-022
          <source>-04646-6. doi:1 0 . 1 1 8 6 / s 1 2</source>
          <volume>8 5 9 - 0 2 2 - 0 4 6 4 6 - 6</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Agosti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marchesin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Silvello</surname>
          </string-name>
          ,
          <article-title>Learning Unsupervised Knowledge-Enhanced Representations to Reduce the Semantic Gap in Information Retrieval</article-title>
          ,
          <source>ACM Trans. Inf. Syst</source>
          .
          <volume>38</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Giachelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marchesin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Silvello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Alonso</surname>
          </string-name>
          ,
          <article-title>Searching for reliable facts over a medical knowledge base</article-title>
          ,
          <source>in: Proc. of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          , SIGIR 2023, Taipei, Taiwan, July 23-27,
          <year>2023</year>
          , ACM,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kundu</surname>
          </string-name>
          ,
          <article-title>AI in medicine must be explainable</article-title>
          ,
          <source>Nature Medicine</source>
          <volume>27</volume>
          (
          <year>2021</year>
          )
          <fpage>1328</fpage>
          -
          <lpage>1328</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Plass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Brcic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stumptner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zatloukal</surname>
          </string-name>
          ,
          <article-title>Explainability and causability for artificial intelligence-supported medical image analysis in the context of the european in vitro diagnostic regulation</article-title>
          ,
          <source>New Biotechnology</source>
          <volume>70</volume>
          (
          <year>2022</year>
          )
          <fpage>67</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Sartor</surname>
          </string-name>
          ,
          <article-title>The impact of the General Data Protection Regulation (GDPR) on artificial intelligence</article-title>
          ,
          <source>Technical Report, Panel for the Future of Science and Technology (STOA)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Langs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Denk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zatloukal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <article-title>Causability and explainability of artificial intelligence in medicine</article-title>
          ,
          <source>WIREs Data Mining Knowl. Discov</source>
          .
          <volume>9</volume>
          (
          <year>2019</year>
          ). URL: https://doi.org/10.1002/widm.1312. doi:10.1002/widm.1312.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Malle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kieseberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Reihs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zatloukal</surname>
          </string-name>
          ,
          <article-title>Towards the augmented pathologist: Challenges of explainable-ai in digital pathology</article-title>
          ,
          <source>CoRR abs/1712.06657</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <article-title>From machine learning to explainable AI</article-title>
          ,
          <source>2018 World Symposium on Digital Intelligence for Systems and Machines (DISA)</source>
          (
          <year>2018</year>
          )
          <fpage>55</fpage>
          -
          <lpage>66</lpage>
          . URL: https://doi.org/10.1109/DISA.2018.8490530. doi:10.1109/DISA.2018.8490530.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <article-title>Explainable AI and multi-modal causability in medicine</article-title>
          ,
          <source>i-com</source>
          <volume>19</volume>
          (
          <year>2021</year>
          )
          <fpage>171</fpage>
          -
          <lpage>179</lpage>
          . URL: https://doi.org/10.1515/icom-2020-0024. doi:10.1515/icom-2020-0024.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pocevičiūtė</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Eilertsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lundström</surname>
          </string-name>
          ,
          <article-title>Survey of XAI in digital pathology</article-title>
          ,
          <source>in: Artificial intelligence and machine learning for digital pathology</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>56</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Evans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. O.</given-names>
            <surname>Retzlaff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Geißler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kargl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Plass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-R.</given-names>
            <surname>Kiehl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zerbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <article-title>The explainability paradox: Challenges for XAI in digital pathology</article-title>
          ,
          <source>Future Generation Computer Systems</source>
          <volume>133</volume>
          (
          <year>2022</year>
          )
          <fpage>281</fpage>
          -
          <lpage>296</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Tosun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pullara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Becich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Taylor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Chennubhotla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Fine</surname>
          </string-name>
          ,
          <article-title>HistoMapr™: An explainable AI (XAI) platform for computational pathology solutions</article-title>
          ,
          <source>in: Artificial Intelligence and Machine Learning for Digital Pathology</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>204</fpage>
          -
          <lpage>227</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] N. Marini, S. Marchesin, S. Otálora, M. Wodzinski, A. Caputo, M. van Rijthoven, W. Aswolinskiy, J. M. Bokhorst, D. Podareanu, E. Petters, S. Boytcheva, G. Buttafuoco, S. Vatrano, F. Fraggetta, J. van der Laak, M. Agosti, F. Ciompi, G. Silvello, H. Müller, M. Atzori,
          <article-title>Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations</article-title>
          ,
          <source>npj Digital Medicine</source>
          <volume>5</volume>
          (
          <year>2022</year>
          ). URL: http://dx.doi.org/10.1038/s41746-022-00635-4. doi:10.1038/s41746-022-00635-4.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] J. Brooke, et al.,
          <article-title>SUS: A quick and dirty usability scale</article-title>
          ,
          <source>Usability Evaluation in Industry</source>
          <volume>189</volume>
          (
          <year>1996</year>
          )
          <fpage>4</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>