<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Explanation Ontology in Action: A Clinical Use-Case</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>IBM Research</institution>
          ,
<addr-line>Cambridge, MA 02142</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Rensselaer Polytechnic Institute</institution>
          ,
          <addr-line>Troy, NY</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We addressed the problem of a lack of semantic representation for user-centric explanations and different explanation types in our Explanation Ontology (https://purl.org/heals/eo). Such a representation is increasingly necessary as explainability has become an important problem in Artificial Intelligence with the emergence of complex methods and an uptake in high-precision and user-facing settings. In this submission, we provide step-by-step guidance for system designers to utilize our ontology, introduced in our resource track paper, to plan and model for explanations during the design of their Artificial Intelligence systems. We also provide a detailed example with our utilization of this guidance in a clinical setting. Resource: https://tetherless-world.github.io/explanation-ontology</p>
      </abstract>
      <kwd-group>
        <kwd>Modeling of Explanations and Explanation Types</kwd>
<kwd>Supporting Explanation Types in Clinical Reasoning</kwd>
<kwd>Tutorial for Explanation Ontology Usage</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        Explainable Artificial Intelligence (AI) has been gaining traction due to
increasing adoption of AI techniques in high-precision settings. Consensus is lacking
amongst AI developers on the types of explainability approaches to use, and we observe
a lack of infrastructure for user-centric explanations. User-centric explanations
address a range of users’ questions, have different foci, and such variety provides
an opportunity for end-users to interact with AI systems beyond just
understanding why system decisions were made. In our resource paper [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], we describe
an Explanation Ontology (EO), which we believe is a step towards semantic
encoding of the components necessary to support user-centric explanations. This
companion poster focuses on describing the usage steps (Section 2) that would
serve as a guide for system developers hoping to use our ontology. We
demonstrate the usage of the protocol (Section 3) as a means to support and encode
explanations in a guideline-based clinical decision support setting.
Copyright © 2020 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).
      </p>
    </sec>
    <sec id="sec-2">
      <title>Usage Directions for the Explanation Ontology</title>
      <p>Protocol 1 Usage of Explanation Ontology at System Design Time
Inputs: A list of user questions, knowledge sources and AI methods
Goal: Model explanations that need to be supported by a system based on inputs
from user studies
The protocol:
1. Gathering requirements
(a) Conduct a user study to gather the user’s requirements of the system
(b) Identify and list user questions to be addressed
2. Modeling
(a) Align user questions to explanation types
(b) Finalize explanations to be included in the system
(c) Identify components to be filled in for each explanation type
(d) Plan to populate slots for each explanation type desired
(e) Use the structure of sufficiency conditions to encode the desired set
of explanations</p>
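      <p>As a rough illustration, the modeling steps 2(a) through 2(d) of the protocol can be sketched in code. This is a minimal sketch, assuming hypothetical question-to-type mappings and slot names; it does not use EO's actual vocabulary:</p>

```python
# Hypothetical sketch of Protocol 1's modeling steps: align user questions to
# explanation types, then plan the slots each type must fill. The mappings and
# slot names below are illustrative placeholders, not taken from EO.

# Step 2(a): align gathered user questions to explanation types
QUESTION_TO_TYPE = {
    "Why was drug X suggested?": "trace-based",
    "Why drug X and not drug Y?": "contrastive",
    "What if the patient had an ASCVD risk factor?": "counterfactual",
}

# Step 2(c): components (slots) each explanation type needs filled
TYPE_TO_SLOTS = {
    "trace-based": ["system recommendation", "guideline reasoning trace"],
    "contrastive": ["fact", "foil", "distinguishing evidence"],
    "counterfactual": ["alternate inputs", "system recommendation"],
}

def plan_explanations(user_questions):
    """Steps 2(a)-2(d): for each question, return the explanation type and
    the slots a system designer must plan to populate."""
    plan = []
    for q in user_questions:
        etype = QUESTION_TO_TYPE.get(q)
        if etype is None:
            continue  # question not covered by a modeled explanation type
        plan.append({"question": q, "type": etype, "slots": TYPE_TO_SLOTS[etype]})
    return plan

plan = plan_explanations(["What if the patient had an ASCVD risk factor?"])
print(plan[0]["type"])  # counterfactual
```

      <p>Step 2(e), encoding the finalized explanations against the sufficiency conditions, would then replace these plain dictionaries with EO's RDF structures.</p>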
      <p>System designers can follow the usage directions for EO at design time when
planning for the capabilities of an AI system. The guidance aims to ensure that
end-user requirements are translated into user-centric explanations. This
protocol guidance is supported by resources made available on our website. These
resources include queries to competency questions to retrieve sample user
questions addressed by each explanation type3 and the components to be filled for
each explanation type.4 Additionally, the sufficiency conditions that serve as a
means for structuring content to fit the desired explanation type can be browsed
via our explanation type details page.5
</p>
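      <p>The competency-question queries on our website are expressed against EO's RDF. As a minimal stand-in, the sketch below matches patterns over a toy in-memory triple list; the eo:addresses property and the triples themselves are hypothetical, not EO's actual encoding:</p>

```python
# Toy triple store and lookup, sketching the kind of retrieval the
# competency-question queries perform. All names here are illustrative.
TRIPLES = [
    ("eo:ContrastiveExplanation", "eo:addresses", "Why drug X and not drug Y?"),
    ("eo:CounterfactualExplanation", "eo:addresses",
     "What if the patient had an ASCVD risk factor?"),
    ("eo:CounterfactualExplanation", "rdfs:subClassOf", "eo:Explanation"),
]

def questions_for(explanation_type):
    """Analogue of the competency question: which sample user questions does
    a given explanation type address?"""
    return [o for s, p, o in TRIPLES
            if s == explanation_type and p == "eo:addresses"]

print(questions_for("eo:CounterfactualExplanation"))
```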
    </sec>
    <sec id="sec-3">
      <title>Clinical Use Case</title>
      <p>
        We applied the protocol to understand the need for explanations in a
guideline-based care setting and to identify which explanation types would be most
relevant for this clinical use case [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Further, we utilized the EO to model some of the
explanations we pre-populated into the system prototype. Hereafter, we describe
our application of the EO usage guidelines and highlight how we used a user
study to guide system design. As a part of the user study, we first held an
expert panel interview to understand the clinicians’ needs when working with
guideline-based care. We utilized the expert panel input to design a cognitive
walkthrough of a clinical decision support system (CDSS) with some
explanations pre-populated at design time to address questions that clinicians would
want answered.
3 https://tetherless-world.github.io/explanation-ontology/competencyquestions/#question2
4 https://tetherless-world.github.io/explanation-ontology/competencyquestions/#question3
5 https://tetherless-world.github.io/explanation-ontology/modeling/#modelingexplanations
      <p>The prototype CDSS we designed for the walkthrough included
allowances for explanations on different screens of the system (Fig. 1), in all
allowing clinicians to inspect a complicated type-2 diabetes patient case. Some
examples of the pre-populated explanations are: contrastive explanations to help
clinicians decide between drugs, trace-based explanations to expose the guideline
reasoning behind why a drug was suggested, and counterfactual explanations
that were generated based on a combination of patient factors. Other examples
were provided to us by clinicians during the walkthrough.</p>
      <p>
        Below, we step through how the protocol guidance (Section 2) can be used
to model an example of a counterfactual explanation from this clinical use case.
A counterfactual explanation can be generated by modifying a factor on the
treatment planning screen of the CDSS prototype. For illustration’s sake, let
us suppose that this explanation is needed to address a question, “What if the
patient had an ASCVD risk factor?" where the “ASCVD risk factor" is an
alternate set of inputs the system had not previously considered. From our definition
of a counterfactual explanation
(https://tetherless-world.github.io/explanation-ontology/modeling/#counterfactual),
in response to the modification a system would need to generate a
recommendation based on the consideration of the new input.
the alternate input of the ASCVD risk factor in conjunction with the patient
context and consult evidence from the American Diabetes Association (ADA)
guidelines [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] to arrive at a new suitable drug recommendation for the patient
case. With the alternate set of inputs and the corresponding recommendation in
place, the counterfactual explanation components can now be populated as slots
based on the sufficiency condition for this explanation type. A Turtle snippet of
this counterfactual explanation example can be viewed in Fig. 2. Additionally,
there have been some promising machine learning (ML) model efforts in the
explainability space [
        <xref ref-type="bibr" rid="ref1 ref6">1,6</xref>
        ] that could be used to generate system recommendations
to populate specific explanation types. We are investigating how to integrate
some of these AI methods with the semantic encoding of EO.
In the explainable AI space, several research projects [
        <xref ref-type="bibr" rid="ref3 ref4 ref7">4,7,3</xref>
        ] have begun to
explore how to support end-user specific needs such as ours. Wang et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] present
a framework for aligning explainable AI practices with human reasoning
approaches, and they test this framework in a co-design exercise using a prototype
CDSS. However, their framework is not in a machine-readable format and hence
is difficult to reuse for supporting explanations in a system. The findings from
their co-design process, that clinicians seek different explanations in different
scenarios, corroborate those from our cognitive walkthrough, and EO can help
support system designers in this endeavor. Similarly, Dragoni et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] propose
a rule-based explanation system in the behavior change space capable of
providing trace based explanations to end-users to encourage better lifestyle and
nutrition habits. While this explanation system was tested with real target users
and subject matter experts, it is limited in scope by the types of explanations
it provides. Liao et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] have released a question bank of explanation question
types that can be used to drive implementations such as ours.
      </p>
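      <p>The slot-filling for the counterfactual example above can be sketched as follows. This is a minimal sketch: the property names, node identifier, and serialization helper are hypothetical stand-ins for the actual encoding shown in Fig. 2:</p>

```python
# Populate the counterfactual explanation's slots and emit a Turtle-like
# snippet. Property and class names are hypothetical stand-ins for EO's
# actual vocabulary (see Fig. 2 for the real encoding).
def to_turtle(subject, etype, slots):
    """Serialize one explanation instance as a Turtle-style snippet."""
    head = f"{subject} a {etype} ;"
    props = [f'    {p} "{v}"' for p, v in slots.items()]
    return head + "\n" + " ;\n".join(props) + " ."

snippet = to_turtle(
    ":cfExplanation1",
    "eo:CounterfactualExplanation",
    {
        "eo:basedOnAlternateInput": "ASCVD risk factor",
        "eo:hasRecommendation": "alternate drug per ADA guidelines",
    },
)
print(snippet)
# Produces:
# :cfExplanation1 a eo:CounterfactualExplanation ;
#     eo:basedOnAlternateInput "ASCVD risk factor" ;
#     eo:hasRecommendation "alternate drug per ADA guidelines" .
```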
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>We have presented guidelines for using our Explanation Ontology in user-facing
settings and have demonstrated the utility of this guidance in a clinical setting.
This guidance describes an end-to-end process to support the translation of
end-user requirements into explanations supported by AI systems. We are taking a
two-pronged approach to pursue future work in operationalizing the use of our
explanation ontology for clinical decision support systems. To this end, we are
working towards building an explanation-as-a-service capability that would leverage our
explanation ontology and connect with AI methods to support the generation
of components necessary to populate explanation types in different use cases.
Further, we are implementing the clinical prototype as a functional UI with
affordances for explanations. We expect the guidance along with open-sourced
resources on our website will be useful to system designers looking to utilize our
ontology to model and plan for explanations to include in their systems.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work is done as part of the HEALS project, and is partially supported by
IBM Research AI through the AI Horizons Network.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Arya</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellamy</surname>
            ,
            <given-names>R.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>P.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dhurandhar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hind</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoffman</surname>
            ,
            <given-names>S.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Houde</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liao</surname>
            ,
            <given-names>Q.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luss</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mojsilović</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al.:
          <article-title>One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques</article-title>
          .
          <source>arXiv preprint arXiv:1909.03012</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Chari</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seneviratne</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gruen</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Foreman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Das</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGuinness</surname>
            ,
            <given-names>D.L.</given-names>
          </string-name>
          :
          <article-title>Explanation Ontology: A Model of Explanations for User-Centered AI</article-title>
          .
          <source>In: Int. Semantic Web Conf</source>
          . p. to appear. Springer (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Dragoni</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Donadello</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eccher</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Explainable AI meets persuasiveness: Translating reasoning results into behavioral change advice</article-title>
          .
          <source>Artificial Intelligence in Medicine</source>
          p.
          <volume>101840</volume>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Liao</surname>
            ,
            <given-names>Q.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gruen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Questioning the AI: Informing Design Practices for Explainable AI User Experiences</article-title>
          .
          <source>arXiv preprint arXiv:2001.02478</source>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5. American Diabetes Assoc.: 9. Pharmacologic Approaches to Glycemic Treatment: Standards of Medical Care in Diabetes-2020.
          <source>Diabetes Care</source>
          <volume>43</volume>
          (
          <issue>Suppl 1</issue>
          ),
          <fpage>S98</fpage>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Ribeiro</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guestrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Model-agnostic interpretability of machine learning</article-title>
          .
          <source>arXiv preprint arXiv:1606.05386</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abdul</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B.Y.</given-names>
          </string-name>
          :
          <article-title>Designing theory-driven user-centric explainable AI</article-title>
          .
          <source>In: Proceedings of the 2019 CHI Conf. on Human Factors in Computing Systems</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>