<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshops, April</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>and AI through Parameterization and Implicitization</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohammad Hossein Jarrahi</string-name>
          <email>jarrahi@unc.edu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohammad Haeri</string-name>
          <email>mhaeri@kumc.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>The University of Kansas Medical Center</institution>
          ,
          <addr-line>3901 Rainbow Boulevard, Kansas City, KS 66160</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of North Carolina</institution>
          ,
          <addr-line>100 Manning Hall, Chapel Hill, NC 27599</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>1</volume>
      <fpage>3</fpage>
      <lpage>17</lpage>
      <abstract>
        <p>Pathology is a fundamental element of modern medicine that determines the final diagnosis and portrays the prognosis in most medical conditions. Due to continuous improvements in AI capabilities (e.g., object recognition and image processing), intelligent systems are bound to play a key role in augmenting pathology research and clinical practice. Despite the pervasive deployment of computational approaches in similar fields such as radiology, there has been less success in integrating AI into clinical practice and histopathologic diagnosis. This partly has to do with the opacity of end-to-end AI systems, which raises issues of interpretability and accountability in medical practice. In this article, we draw on interactive machine learning to take advantage of AI in digital pathology, in an attempt to open the black box of AI and generate a more effective partnership between pathologists and AI systems based on the metaphors of parameterization and implicitization.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial intelligence</kwd>
        <kwd>Medical diagnosis</kwd>
        <kwd>Pathology</kwd>
        <kwd>Histopathologic diagnosis</kwd>
        <kwd>End-to-end AI</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>Interactive Machine Learning</kwd>
        <kwd>Interpretability</kwd>
        <kwd>Accountability</kwd>
        <kwd>Human-AI partnership</kwd>
        <kwd>Parameterization</kwd>
        <kwd>Implicitization</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        The application of AI for medical diagnosis is expanding rapidly [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. In recent years the use of
machine learning, and specifically deep learning, has made great strides in computer-mediated
pathologic diagnosis, offering promising standardized, reproducible, and reliable potential for digital
image analysis. Deep learning has provided unique affordances for “model-based assessment of routine
diagnostic features in pathology, and the ability to extract and identify novel features that provide
insights into a disease” [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Applications of AI in routine pathology, however, are constrained by some key challenges. These
include infrastructural deficiencies, such as limited digitization practices and a lack of reliable
computational infrastructure and data storage. However, a closer examination of the limited application
of computer-mediated tools in pathology, particularly AI systems, reveals deeper issues than failed
technology and points to practice-level dynamics [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The black-box nature of AI models, and how their
algorithms arrive at a decision, is one of the largest stumbling blocks [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Lack of interpretability is at
odds with common standards of the medical community, which revolve around comprehending,
justifying, and taking responsibility for the underlying reasons for medical decisions [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. After all, to
receive regulatory approval and much-needed buy-in from medical professionals, AI systems must
provide a high level of transparency, currently lacking in most lab-based approaches towards AI.
      </p>
      <p>2020 Copyright for this paper by its authors.</p>
      <p>
        The mysterious and versatile power of neural networks is exactly what makes them black boxes to
humans. But as AI makes inroads into various domains and delivers unprecedented performance,
it leaves medical domain experts with ‘why’ questions. The job of these experts is often founded on
offering explanations and taking responsibility for decisions being made [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The accountability issues
are even more pronounced in high-stake decisions in pathology as human experts (i.e., pathologists) are
deemed irreplaceable and must actively participate in decision making. As others noted, historically
“human-machine collaborations have performed better than either one alone” in these contexts [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and
such a partnership requires opening the black box of AI.
      </p>
      <p>
        An important fact to emphasize is that pathologic diagnoses rarely reduce to Boolean answers (e.g.,
benign or malignant). In real-life clinical practice, pathologists build on a diverse and complex
set of background information, including a broad understanding of the clinical
context, the patient's history, and years of tacit knowledge [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In reporting results,
pathologists may also use sophisticated language and terminology, reflective of the complicated,
non-binary diagnostic process, to inform potential prognosis. The combination of these challenges can
render pathologists, regulators, and other stakeholders skeptical of the bottom-line impact of AI in
clinical workflows [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>In what follows we provide an overview of the common end-to-end approach towards the application
of AI in medical diagnoses and juxtapose it with our approach, which centers on explainable AI.</p>
    </sec>
    <sec id="sec-2a">
      <title>2. End-to-end AI</title>
      <p>The typical uses of deep learning focus on training and optimizing a model based on training data,
mostly medical images, that have been labeled by human experts, here clinicians/pathologists. The labels
can include information related to patient outcomes, clinical classifications, and image annotations.
Some studies even circumvent the need for pixel-wise manual annotation by pathologists, training AI
models on large data sets (e.g., whole slide images) that support automatic extraction and
identification of features [e.g., 5, 11].</p>
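      <p>To make the contrast concrete, the end-to-end pattern can be sketched as a toy classifier that is trained on expert-labeled feature vectors and returns only a bare label, with no intermediate evidence for a pathologist to inspect. The feature values and labels below are hypothetical, not drawn from the cited studies.</p>

```python
import math

# Hypothetical labeled training data: per-slide feature vectors
# (e.g., mean nuclear diameter, mitotic count) with expert labels.
TRAIN = [
    ((6.1, 1.0), "benign"),
    ((5.8, 0.0), "benign"),
    ((9.7, 5.0), "malignant"),
    ((10.2, 6.0), "malignant"),
]

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    # Returns only a bare label -- the "black box" output the text
    # describes: no evidence or rationale accompanies the decision.
    return min(model, key=lambda y: math.dist(model[y], x))

model = train(TRAIN)
print(predict(model, (9.5, 4.0)))  # malignant
```

      <p>The point of the sketch is what is absent: the output carries no trace of which features drove the decision, which is exactly the opacity the end-to-end work system inherits.</p>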
      <p>In this typical work system, the role of the human expert is either eliminated or relegated to (1) trainer
of the algorithmic system, and (2) validator of the system’s recommendations and decisions (the loop
marked in purple in Fig 1). It is important to note that beyond this stage of building and training
end-to-end AI systems, the pathologist is not replaced in clinical practice but is provided with automated
binary decisions. Such routines arguably increase speed and efficiency, since the
pathologist receives a final diagnosis. However, as the AI-driven diagnoses are not evidence-based,
clinicians cannot interact with the machines and their logic, so there are few opportunities for
mutual learning. Due to the black-box nature of the AI system, there is little room for learning on
the part of the human expert. In essence, both the AI and the human expert retain tacit knowledge that is
difficult to transfer to the other party.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Explainable AI: Expert-in-The Loop</title>
      <p>
        Our approach is informed and inspired by current research on interactive machine learning
(interactive ML), which focuses on training and optimizing algorithms through intuitive human-computer
interfaces and on integrating users’ feedback to inform features. In interactive ML, human
experts are not just labelers or annotators; they serve as the primary drivers, guides, and active
explorers, interacting with the data and potentially contributing directly to feature extraction [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>We contribute to interactive ML because this research stops short of presenting a more comprehensive
work system, one that integrates nuanced organizational dynamics such as developing a broader,
embedded work system of collaboration between humans and AI. Current research on interactive ML
typically stays at the level of user-interface interaction.</p>
    </sec>
    <sec id="sec-4">
      <title>3.1. Meningioma diagnosis</title>
      <p>
        We propose a framework for developing explainable AI workflows using a case study of grading of
meningioma (see [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] for more details on the user study with pathologists). Meningioma is the most
common primary brain tumor. Based on its histopathologic features, meningioma is classified into three
grades, which differ in prognosis, recurrence rate, and treatment management. There are four routes to a
grade 2 meningioma: (1) brain invasion; (2) more than 3 mitoses per 10 high-power fields (HPF);
(3) 3 out of the following 5 morphological characteristics: hypercellularity, sheeting architecture,
prominent nucleoli, spontaneous necrosis, and small cell component, the presence of any 3 of which
upgrades a grade 1 meningioma to a grade 2 tumor; and (4) specific subtypes, including clear-cell and
chordoid meningioma. Our case study involves a redesigned workflow with key
contributions from both pathologists and AI in faster and more reliable grading of meningioma.
      </p>
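      <p>As an illustration only (not clinical software, and not the cited system's implementation), the four grading routes described above can be written as an explicit, inspectable rule check:</p>

```python
MORPHOLOGY = ("hypercellularity", "sheeting architecture", "prominent nucleoli",
              "spontaneous necrosis", "small cell component")
GRADE2_SUBTYPES = ("clear-cell", "chordoid")

def qualifies_grade_2(brain_invasion, mitoses_per_10_hpf, features, subtype):
    """Check the four routes to a grade 2 meningioma described above."""
    route_invasion = brain_invasion
    route_mitoses = mitoses_per_10_hpf > 3          # more than 3 mitoses / 10 HPF
    route_morphology = sum(f in features for f in MORPHOLOGY) >= 3
    route_subtype = subtype in GRADE2_SUBTYPES
    return route_invasion or route_mitoses or route_morphology or route_subtype

# A tumor with 3 of the 5 morphological features is upgraded to grade 2:
print(qualifies_grade_2(False, 1,
                        {"hypercellularity", "prominent nucleoli", "spontaneous necrosis"},
                        "meningothelial"))  # True
```

      <p>Because each route is a named, explicit criterion, a pathologist can see exactly which condition triggered the upgrade, in contrast to an end-to-end model's bare label.</p>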
      <p>
        We articulate the symbiotic interaction between pathologists and the AI system through the
continuous process of parameterization and implicitization. These two concepts are used as metaphors
to explain the reciprocal process through which the two partners work together and contribute to
explainable AI workflows. Inspired by the mathematical process of parameterization and
implicitization (mostly used in geometry), we describe how an effective partnership between humans
and AI can help reduce uncertainty and complexity, two key factors that undermine the efficacy of decision
making in clinical settings and elsewhere. Humans have unique capacities in dealing with uncertainty
whereas AI systems are more competent in handling the complexity of information [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>3.2. Parameterization</title>
      <p>Parameterization is the process of expressing the solution set of implicit equations (a manifold or
variety) in terms of parameters. Parameterization is instrumental in defining the state or quality of a system.
Feynman parametrization, for example, helps express and evaluate loop integrals in so-called Feynman
diagrams. Parameterization reduces complexity, where complexity refers to the abundance of variables and
their associations. Parameterization ‘divides and conquers’ the problem area; that is, it breaks it down
into sub-problems (parameters) so that these parameters and their associations become simple enough to
be observed and understood directly.</p>
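      <p>A minimal geometric instance of the metaphor: the implicit unit circle x^2 + y^2 = 1 can be parameterized as x = cos t, y = sin t, replacing one implicit equation with a single explicit parameter. The sketch below verifies, at sampled parameter values, that the parameterization stays on the implicit curve:</p>

```python
import math

def circle_point(t):
    # Parameterization of the unit circle: one parameter t replaces
    # the implicit equation x**2 + y**2 - 1 = 0.
    return math.cos(t), math.sin(t)

for k in range(8):
    x, y = circle_point(k * math.pi / 4)
    # The residual of the implicit equation vanishes on the curve.
    assert 1e-9 > abs(x**2 + y**2 - 1)
```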
      <p>
        Human experts' contributions are still irreplaceable in two ways: (1) Pathologists initiate the criteria
for a specific diagnosis; choosing ‘good’ parameters (features) is always the first step in interactive
ML and requires domain- and problem-specific knowledge [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. (2) Pathologists must stay “in the loop”
to decide whether the parameters and relationships extracted by the machine are meaningful.
      </p>
      <p>AI provides two contributions to the parameterization process. (1) It extracts the features/parameters
defined by pathologists (e.g., it surfaces cell structures in the data with nuclei larger than a certain
diameter). (2) It can produce new (latent) knowledge by discovering new parameters and
new relationships between parameters that were unnoticed or unknown by pathologists. Due to its analytical
superiority in parsing and analyzing a large number of features and data points, AI can uncover
parameters and associations that were not previously considered salient. Humans decide whether such new
parameters and relationships are meaningful and valuable; the valuable new knowledge
is then fed back to the algorithm for better diagnostic accuracy.</p>
      <p>As noted, AI helps discover associations between parameters chosen by human experts, as
well as associations and parameters discovered through mass image processing and parameterization,
producing new knowledge. In the cited work on grading meningioma, the algorithm looks for all the features
that are compatible with a grade 2 meningioma and provides the supporting histopathologic data, such as brain
invasion, specific subtypes, and all other features that qualify a grade 2 meningioma. The AI focuses
on the criteria used by pathologists to grade the meningioma, not on a black-box process with
which a pathologist has no way of interacting. The system receives feedback from pathologists through
their confirmation or rejection of an extracted feature, which improves the system’s
feature detection.</p>
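      <p>A minimal sketch of this confirm/reject loop, assuming a hypothetical scored candidate detector and a simple threshold-update rule (not the cited system's algorithm):</p>

```python
# Hypothetical confirm/reject loop: candidate detections above a score
# threshold are shown to the pathologist; each rejection nudges the
# threshold up. This is a toy update rule for illustration only.

def review(candidates, threshold, verdicts, step=0.05):
    """candidates: name -> detector score; verdicts: name -> pathologist's bool."""
    shown = [c for c, s in candidates.items() if s >= threshold]
    for c in shown:
        if not verdicts[c]:          # false positive: be more conservative
            threshold = threshold + step
        # Confirmed findings leave the threshold unchanged in this sketch.
    return shown, threshold

candidates = {"focus_A": 0.92, "focus_B": 0.55, "focus_C": 0.48}
shown, new_threshold = review(candidates, 0.5, {"focus_A": True, "focus_B": False})
print(shown, new_threshold)
```

      <p>Each round of review both filters the current findings and retunes the detector, which is the mutual-learning dynamic the text describes.</p>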
      <p>In addition, AI can reduce complexity by adapting to evidence-based feature extraction and
diagnosis. The advantage of the system is the speed of pattern recognition and characteristics that could
take hours for a pathologist to go over. For instance, accurate detection of small foci of spontaneous
necrosis is a challenging task that can take over an hour for a pathologist to check the entire set of slides.
The system does the accurate detection task with high speed and provides the pathologist with all
spontaneous necrosis candidates and the pathologist confirms or rejects the findings. And once again,
such an interaction allows for the system to quickly get trained and apply newly discovered information
with the help of the pathologist.
3.3.</p>
    </sec>
    <sec id="sec-6">
      <title>3.3. Implicitization</title>
      <p>
        Implicitization is the inverse process of parameterization: it converts parameters and their
associations back into a single implicit equation. Implicitization is key to keeping the
holistic nature of a system in view (the system is bigger than the sum of its elements). In the
decision-making context, humans enjoy a competitive edge in generating and maintaining a holistic and abstract
view, which is central to overcoming uncertainty (defined as the lack of information about all
alternatives or their consequences). The histological diagnosis, empowered by AI, offers a crucial but
inexorably limited perspective. This piece of analysis must be complemented by a more comprehensive
synthesis of the broader context to construct the final integrated diagnosis. The broader context consists
of many factors (see Fig 3), and the synthesis that goes into the diagnosis report often includes
information such as the sub-type/variant of cancer, grade, and stage, as well as prognosis and recommendations
for further management and treatment decisions. AI algorithms in their current forms lack this
human-level general intelligence and perform efficiently and accurately only in narrow, specific
(and often binary) tasks of histological diagnosis. This is what AI researchers refer to as “weak AI”
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Machines lack common sense [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Implicitization involves a crucial sensemaking component
through which human experts put together all the parameters as well as contextual information and
decide if a certain diagnosis or prognosis “makes sense.”
      </p>
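      <p>The inverse direction can be made concrete with the same geometric metaphor: the parametric curve x = t, y = t^2 implicitizes, by eliminating t, to the single equation y - x^2 = 0. A toy verification:</p>

```python
def parametric(t):
    # Parameterized form of a parabola: two coordinate functions of t.
    return t, t * t

def implicit(x, y):
    # Implicitized form: one equation, the parameter t eliminated
    # (substitute t = x into y = t**2 to get y - x**2 = 0).
    return y - x**2

for t in (-2.0, -0.5, 0.0, 1.0, 3.0):
    x, y = parametric(t)
    assert implicit(x, y) == 0.0  # every parametric point satisfies the implicit equation
```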
      <p>This noted, AI can potentially contribute to the implicitization process by helping human experts
visualize the associations between many parameters (i.e., patterns of associations) and understand
how interactions among those parameters/features, or their unique combinations, may give rise to
the problem at hand.</p>
      <p>In this workflow, the AI is used to recognize morphology, while the pathologists make the final
diagnosis and assemble the final report for clinicians and patients.
The pathologist decides on the accuracy of the results by going over all the findings detected and
presented by the AI system. Such an approach is empowered by close and transparent interaction
between the AI and pathologists.</p>
      <p>
        The holistic view possessed by medical experts is often of an implicit/tacit nature and derives from
an intuitive decision-making style [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Considering human-AI interaction, humans can 1) specify the
key parameters and feed them into the AI system, and 2) put the AI-enabled analysis and
parameterization into perspective and produce a final recommendation or decision. A crucial
component of activity (2) has to do with the pathologists' unique ability to “contextualize”:
while AI systems may reveal more contextual features, it is the human expert who brings it all together
(e.g., with the patient's prior history) in the form of a final decision/diagnosis and report.
      </p>
      <p>Parameterization and implicitization help open the black box of AI-empowered systems and fulfill
the vision of human-AI partnership. This approach reinforces mutual learning between human
experts and AI: the inner circle in figure 2 facilitates mutual learning through continuous
parameterization and implicitization. Finally, the interpretability achieved here enables higher levels of
trust and accountability, as the human expert retakes the helm in understanding how the whole work
system arrives at a decision. In this new work system, each partner brings unique capabilities and
comparative advantages to the table (see Table 1).</p>
      <p>In contrast to the end-to-end approach, our approach places humans at the center of “the learning
loop”. Rather than being mere consumers of the AI-generated diagnosis or trainers of the AI system,
pathologists occupy a more critical role (beyond feeding the ML model) and learn alongside the machine as
the process of human-AI interaction is parameterized and unpacked. Such collaboration enables pathologists
to discover new parameters and relationships identified by machines while feeding their verdicts back
into the system.</p>
    </sec>
    <sec id="sec-7">
      <title>4. Conclusions</title>
      <p>The contribution of our approach to computer-mediated pathology is twofold. First, it provides a more
effective approach towards human-AI interaction, one that reinforces and elevates the role of
pathologists as human experts. Second, we offer a middle ground between (1) end-to-end AI and (2)
manually engineered feature extraction by human experts. In our approach, we draw on the strength of
both methods while overcoming their inherent limitations.</p>
      <p>In this approach, AI closely collaborates with humans and contributes to parameterization and
differential diagnosis by 1) capturing the complexity of multiple factors lying outside the cognitive
processing of humans, 2) discovering new parameters, and finding connections between multiple
parameters and evidence, and 3) learning alongside pathologists by using their inputs. In addition, AI
can facilitate knowledge transfer among pathologists. Parameterization and finding criteria (as well as
corresponding evidence) help experienced pathologists in the process of knowledge transfer to novices
and students. Over the years these experts tend to internalize knowledge in a way that transcends
explicitly connecting evidence and parameters; so, in their diagnostic approach, some of these
experienced experts are less attentive to individual parameters and instead rely on holistic, implicit and
automatic evaluations that do not easily lend themselves to explicit knowledge transfer and explanation.
Through AI-enabled parameterization, interactions between experts and learners are facilitated.</p>
      <p>
        Lack of interpretability and transparency stands in the way of clinical adoption of these systems.
This symbiotic relationship presented here could clarify the unique contributions of both humans and
machines and raise trust in the application of AI in pathology. Our approach takes up the challenge of
interpretability and accountability, as common end-to-end approaches based on artificial neural
networks “do not provide a verifiable path to understanding the rationale behind its decisions” [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
    </sec>
    <sec id="sec-8">
      <title>5. Acknowledgements</title>
      <p>We thank Hongyan Gu, Yifan Xu, and Xiang 'Anthony' Chen for their contributions to this
research project.</p>
    </sec>
    <sec id="sec-9">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Acs</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          et al.
          <year>2020</year>
          .
          <article-title>Artificial intelligence as the next step towards precision pathology</article-title>
          .
          <source>Journal of internal medicine. 288</source>
          ,
          <issue>1</issue>
          (Jul.
          <year>2020</year>
          ),
          <fpage>62</fpage>
          -
          <lpage>81</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Aodha</surname>
            ,
            <given-names>O.M.</given-names>
          </string-name>
          et al.
          <year>2014</year>
          .
          <article-title>Putting the Scientist in the Loop -- Accelerating Scientific Progress with Interactive Machine Learning</article-title>
          .
          <source>2014 22nd International Conference on Pattern Recognition.</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Brynjolfsson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Mitchell</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>What can machine learning do? Workforce implications</article-title>
          .
          <source>Science</source>
          .
          <volume>358</volume>
          ,
          <issue>6370</issue>
          (Dec.
          <year>2017</year>
          ),
          <fpage>1530</fpage>
          -
          <lpage>1534</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Burke</surname>
            ,
            <given-names>L.A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>M.K.</given-names>
          </string-name>
          <year>1999</year>
          .
          <article-title>Taking the mystery out of intuitive decision making</article-title>
          .
          <source>Academy of Management Perspectives</source>
          .
          <volume>13</volume>
          ,
          <issue>4</issue>
          (Nov.
          <year>1999</year>
          ),
          <fpage>91</fpage>
          -
          <lpage>99</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Campanella</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          et al.
          <year>2019</year>
          .
          <article-title>Clinical-grade computational pathology using weakly supervised deep learning on whole slide images</article-title>
          .
          <source>Nature medicine. 25</source>
          ,
          <issue>8</issue>
          (Aug.
          <year>2019</year>
          ),
          <fpage>1301</fpage>
          -
          <lpage>1309</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Gu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          et al.
          <year>2020</year>
          .
          <article-title>CrossPath: Top-down, Cross Data Type, Multi-Criterion Histological Analysis by Shepherding Mixed AI Models</article-title>
          .
          <source>arXiv [cs.HC].</source>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Jarrahi</surname>
            ,
            <given-names>M.H.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Artificial Intelligence and the Future of Work: Human-AI Symbiosis in Organizational Decision Making</article-title>
          .
          <source>Business horizons. 61</source>
          ,
          <issue>4</issue>
          (Jul.
          <year>2018</year>
          ). DOI: https://doi.org/10.1016/j.bushor.2018.03.007.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          et al.
          <year>2020</year>
          .
          <article-title>Emerging role of deep learning-based artificial intelligence in tumor pathology</article-title>
          .
          <source>Cancer communications. 40</source>
          ,
          <issue>4</issue>
          (Apr.
          <year>2020</year>
          ),
          <fpage>154</fpage>
          -
          <lpage>166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Langlotz</surname>
            ,
            <given-names>C.P.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Will Artificial Intelligence Replace Radiologists?</article-title>
          .
          <source>Radiology: Artificial Intelligence. 1</source>
          ,
          <issue>3</issue>
          (May
          <year>2019</year>
          ),
          <elocation-id>e190058</elocation-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Pena</surname>
            ,
            <given-names>G.P.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Andrade-Filho</surname>
            ,
            <given-names>J.S.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>How does a pathologist make a diagnosis?</article-title>
          <source>Archives of pathology &amp; laboratory medicine</source>
          .
          <volume>133</volume>
          ,
          <issue>1</issue>
          (
          <year>2009</year>
          ),
          <fpage>124</fpage>
          -
          <lpage>132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Rathore</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          et al.
          <year>2015</year>
          .
          <article-title>Novel structural descriptors for automated colon cancer detection and grading</article-title>
          .
          <source>Computer methods and programs in biomedicine. 121</source>
          ,
          <issue>2</issue>
          (Sep.
          <year>2015</year>
          ),
          <fpage>92</fpage>
          -
          <lpage>108</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S.J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Norvig</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Artificial intelligence: a modern approach</article-title>
          . Malaysia. Pearson Education Limited.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Tizhoosh</surname>
            ,
            <given-names>H.R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Pantanowitz</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Artificial Intelligence and Digital Pathology: Challenges and Opportunities</article-title>
          .
          <source>Journal of pathology informatics. 9</source>
          ,
          (Nov.
          <year>2018</year>
          ),
          <fpage>38</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Voosen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>How AI detectives are cracking open the black box of deep learning</article-title>
          .
          <source>Science.</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Preininger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>AI in Health: State of the Art, Challenges, and Future Directions</article-title>
          .
          <source>Yearbook of medical informatics. 28</source>
          ,
          <issue>1</issue>
          (Aug.
          <year>2019</year>
          ),
          <fpage>16</fpage>
          -
          <lpage>26</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>