<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Explaining complex machine learning platforms to members of the general public</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rachel Eardley</string-name>
          <email>rachel@racheleardley.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ewan Soubutts</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amid Ayobi</string-name>
          <email>amid.ayobi@bristol.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rachael Gooberman-Hill</string-name>
          <email>r.gooberman-hill@bristol.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aisling O'Kane</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bristol</institution>
          ,
          <addr-line>Beacon House, Queens Road, Bristol</addr-line>
          ,
          <country country="UK">U.K.</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this workshop paper, we present an overview of our research into how to explain complex machine learning (ML) health platforms to members of the general public who might benefit from them, specifically those who have Type 2 Diabetes (T2D). The availability of home health sensor technology is increasing; however, it is unclear how to explain these platforms to potential users so that they can make an 'informed decision' about adopting such a platform within their home. Through a user-centered-design approach, we have completed a case study comprising three studies that (1) gave an overview of a complex ML platform, that of SPHERE; (2) identified how participants would like us to explain this content; and (3) created and validated an explanation document that presents the SPHERE platform at a high level. We present our findings on participants' prioritization of understanding how and why the platform can help them over the technical detail of the platform itself.</p>
      </abstract>
      <kwd-group>
        <kwd>Explanations</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Digital Health</kwd>
        <kwd>Informed decision</kwd>
        <kwd>Home health</kwd>
        <kwd>Complex platforms</kwd>
        <kwd>Design</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        In many parts of our daily lives, Artificial
Intelligence (AI) and Machine Learning (ML)
have become ubiquitous in assisting our
decision-making, e.g., suggesting films to
watch on Netflix [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], suggesting purchases
online or people to ‘follow’ on social media.
Similar technologies are also increasingly
common in specialist areas such as healthcare,
in particular clinical support tools [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], used to
support clinician and/or patient
decision-making about a condition and the risks and
benefits of potential treatments. However,
when it comes to more critical factors such as
our health and wellbeing, many would argue
that those who receive and those who provide
healthcare should be made aware of
the reasoning behind those decisions
[
        <xref ref-type="bibr" rid="ref1 ref15 ref7 ref9">1,7,9,15</xref>
        ]. To bridge this gap in
understanding, we look to Explainable AI
(XAI), an area of study that challenges different
disciplines (‘developers’, ‘theorists’, ‘ethicists’,
etc.) to make transparent the decisions that
AI and ML algorithms make. This is
particularly important so that those who
receive and those who provide
healthcare can understand what the system is
doing, for example to justify the clinical results
given, correct errors, improve medical
algorithms or to highlight a new discovery
[
        <xref ref-type="bibr" rid="ref1 ref15 ref7">1,7,15</xref>
        ].
      </p>
      <p>
        In the domain of healthcare, Holzinger et al
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] state that there is a growing need for AI
systems that are ‘trustworthy, transparent,
interpretable and explainable’, and there is
evidence of the benefits of clinical AI
systems, for instance in predicting the risk of
hospital readmission for pneumonia patients or
in spotting bone fractures [
        <xref ref-type="bibr" rid="ref20 ref6">6,20</xref>
        ]. However, there
is also an opportunity for AI to contribute to
healthcare outside clinical settings, for instance
by supporting individuals with chronic illnesses
who manage their own conditions at home, an
increasingly common trend given today’s rising
healthcare costs [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Ballegaard et al [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] argue
that healthcare is not just about keeping
individuals healthy but about allowing them to
continue to live sustainable and independent
lives. With this in mind, we look to ML/AI
platforms such as SPHERE (a sensor platform for
healthcare in a residential environment), which
uses ML to algorithmically interpret data based
on the individual’s patterns of living at home
[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. How, though, do we gain sufficiently
informed consent from the public to install such
complex ML platforms within their homes?
      </p>
      <p>
        In the medical field, there is a legal and
ethical requirement for the patient and clinician
to go through a process of ‘informed consent’
[
        <xref ref-type="bibr" rid="ref13 ref17 ref8">8,13,17</xref>
        ], where the patient, presented with the
benefits, risks and any alternatives to their
treatment, makes a decision [
        <xref ref-type="bibr" rid="ref3 ref8">3,8</xref>
        ]. For ML
platforms, there is also an ethical process that
includes explaining the benefits, risks,
limitations and the data used for potential
translation of the ML algorithms [
        <xref ref-type="bibr" rid="ref1 ref14">1,14</xref>
        ]. To
make an ‘informed decision' around the
adoption of a complex platform, an individual
needs to have enough knowledge to think
critically about the processes that the platform
implements or supports [
        <xref ref-type="bibr" rid="ref11 ref12">11,12</xref>
        ]. As with
informed consent in medical care, for an
individual to make an informed decision around
the adoption of a complex platform, a process
needs to occur that supports the explanation of
both the platform's risks and benefits. When
and how does this informed decision process
occur for home health technology?
      </p>
      <p>
        To understand how we should explain
complex ML/AI platforms to members of the
general public, we conducted a case study that
focused on the SPHERE platform and members
of the general public with Type 2 Diabetes
(T2D), a condition for which most care takes
place outside clinical settings [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Using a
user-centered-design methodology to create an
explanation document to aid informed consent,
we gained insight into users’ interpretation of
the ‘informed decision’ process of adopting the
complex platform within their homes. We
found that even though the document
explained the complex ML/AI platform in a
manner that was understandable to our
participants, and they could see the
SPHERE platform’s benefits, they were more
focused on the purpose of the technology,
questioning why and how the platform could
help them as individuals with T2D.
      </p>
      <p>Figure 1. The SPHERE platform: the seven devices (water sensor, appliance sensor, environmental sensor, SPHERE home gateway, electricity (mains) sensor, wearable and silhouette sensor) and the ten sensors (vibrations, electricity, silhouette, movement, light levels, air pressure, motion, humidity, temperature and appliance usage).</p>
    </sec>
    <sec id="sec-2">
      <title>2. Defining the Explanation</title>
      <p>Using a user-centered-design methodology
to define the explanation of the SPHERE
platform, we first completed semi-structured
interviews with eight members of the SPHERE
team who had built and maintained the system.
We then ran a second study that presented
alternative designs explaining the platform’s
hardware (figure 2a-c), the ground truthing of
the data (figure 2d-f) and the ML process of
unsupervised learning (figure 2g-i) to nine
people with Type 2 diabetes and members of
their households who might also have to live
with this domestic health technology. From the
findings of these two studies, we created an
explanation document (figure 4) that presents
and explains the SPHERE platform to members
of the general public who have T2D. Finally,
we ran a validation study that reviewed how the
explanation document was used in an
onboarding/set-up session with technicians and
how the SPHERE system and the document
were interpreted and understood.</p>
    </sec>
    <sec id="sec-3">
      <title>2.1. Understanding the platform</title>
      <p>Our first challenge was to understand what
SPHERE was capable of: its processes,
hardware and ML/AI requirements. With this
aim in mind, we conducted semi-structured
interviews with eight of the eleven team
members. The team members had been working
on the project for between two and six years and
had mixed roles within SPHERE (2 x Deployment
technicians, 3 x ML experts, 1 x Hardware
engineer, 1 x Researcher and 1 x Community
liaison).</p>
      <p>By interviewing these team members with a
diverse range of roles within SPHERE, we were
able to gain an overview of all aspects of the
complex platform. We conducted the
interviews individually within a
university-based meeting room; they were audio-recorded
and then transcribed verbatim. Using affinity
diagramming and a bottom-up approach, we
created a total of 681 post-it notes (Machine
Learning x 245, Research x 63, Community
Engagement x 68, Hardware x 100 and
Deployment Technician x 205). Once the notes
from the five job roles (deployment technicians,
machine learning, research, hardware and
community liaison) had been initially coded,
the post-it notes were organized by the first
author into 35 further themes that were then
grouped into three overarching themes.
These overarching themes were (1) Hardware
and Network; (2) Installation, Training and
Data gathering; (3) Machine learning and Data
visualization. We then transferred these themes
into a Microsoft Word document. At that stage,
the first author merged any duplicated content.
We then asked the eight core team members
who took part in the interviews to review the
document to confirm that the draft was
technically correct.</p>
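As a quick arithmetic check, the per-role post-it tallies reported above do sum to the stated total of 681:

```python
# Post-it note counts per interviewee role, as reported in the text
notes = {
    "Machine Learning": 245,
    "Research": 63,
    "Community Engagement": 68,
    "Hardware": 100,
    "Deployment Technician": 205,
}
total = sum(notes.values())
print(total)  # 681
```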
      <p>These three overarching themes helped us
define the platform, for example capturing the
seven sensor devices (Figure 1a-g) and ten
individual sensors (Figure 1), with their technical
and positioning limitations. We also captured the
installation process, in which the deployment
technicians visit a participant’s home four
times (survey, installation, maintenance and
removal), and the fact that the data collected is
saved, with the participant’s permission, on a
hard disk within the participant’s home and
processed through supervised and unsupervised
machine learning.</p>
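The paper does not detail SPHERE's actual models, but as an illustrative sketch of what 'unsupervised learning' on home sensor data means, a minimal one-dimensional k-means can group raw readings into activity-like clusters without any labels. The readings and the function below are invented for illustration and are not part of the SPHERE platform:

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Minimal 1-D k-means: repeatedly assign each reading to its
    nearest centre, then move each centre to the mean of its members."""
    rng = random.Random(seed)
    centres = rng.sample(values, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            groups[nearest].append(v)
        # Keep a centre in place if its group emptied out
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return sorted(centres)

# Invented wearable readings: short movement bursts vs long still periods
readings = [2, 3, 2.5, 3.1, 2.8, 30, 31, 29.5, 30.4, 28.9]
print(kmeans_1d(readings, k=2))  # two cluster centres: [2.68, 29.96]
```

The point of the sketch is that the two groups emerge from the data alone; no one told the algorithm which readings were 'movement' and which were 'still'.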
    </sec>
    <sec id="sec-4">
      <title>2.2. Understanding the interpretations</title>
      <p>Once we had gained an understanding of the
complex platform, our next challenge was to
define how to present the information to our
participants. For this study we focused on one
area of each of the overarching themes. For
Hardware &amp; network, we selected the most
technically complex sensor, the ‘environmental
sensor’ (figure 2a-c); for Installation, training &amp;
data collection, we selected ‘ground truth’
(figure 2d-f), as this process informs the ML
algorithms; and for Machine learning &amp; data
visualization, we selected ‘unsupervised
learning’ (figure 2g-i), as this is the more
speculative form of ML. Through a design
workshop with six participants (three university
researchers and three members of a community
engagement charity), we focused on the
‘environmental sensor’ (figure 2a-c) and
created three alternative designs that presented
the platform’s information at different levels of
technical detail and with different approaches
to language and visual elements. We then used
these design decisions to create three alternative
designs for the further two areas of the platform,
‘ground truth’ (figure 2d-f) and ‘unsupervised
learning’ (figure 2g-i).</p>
      <p>We presented these nine designed
documents (figure 2) to nine participants who
either had T2D or lived with someone who did.
The nine participants (five female, four male)
were aged between 25 and 74, with education
levels ranging from entry-level to PhD. Six
participants had T2D, and three participants
lived with someone who did. All participants
owned a smartphone, and four participants had
an IoT device such as Amazon Alexa or Google
Home. Two participants (AD2 and AD6) had
weather stations at home and therefore had
prior knowledge of sensors and their
capabilities. The Environmental Sensor designs
were presented first, with the alternative designs
counterbalanced using the Latin square method,
followed by Ground Truth and finally
Unsupervised Learning.</p>
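The Latin-square counterbalancing mentioned above can be sketched as a simple rotation, in which each design variant takes each presentation position exactly once across a cycle of participants. The condition labels below are hypothetical stand-ins, not the study's actual materials:

```python
def latin_square_orders(conditions):
    """Rotate the condition list so that, across the generated orders,
    each condition appears in each presentation position exactly once."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

# Hypothetical labels for the three alternative designs of one area
orders = latin_square_orders(["design a", "design b", "design c"])
for i, order in enumerate(orders, start=1):
    print(f"participant {i}: {order}")
```

With three conditions this yields three orders, so a cycle of three participants balances the position of each design.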
    </sec>
    <sec id="sec-5">
      <title>2.2.1. Overview of findings</title>
      <p>For all three areas (environmental sensor,
ground truth and unsupervised learning), the
participants considered the alternative design
with the most technical information and detail
to be far too complex, scary or off-putting. The
participants additionally preferred the language
used in the simpler design alternatives, as it
used common, non-technical words.</p>
      <p>For the environmental sensor (figure 2a-c),
the participants requested that the image of the
sensor be the version from figure 2c, with the
sensor measurements as in figure 2a in both
centimeters and inches. They requested an
understanding of where the sensors would be
positioned within the home; however, they did
not like the list in figure 2a or the storyboard in
figure 2b, as these provided unnecessary
information (the deployment technician would
fit the sensor). They preferred the more
structural visual approach to the rules of
sensor placement, as in figure 2b, and requested
more of a description of what each sensor did.</p>
      <p>For ‘ground truth’ (figure 2d-f), the
participants considered the simpler version
(figure 2f) to be just enough information and
were positive about the storyboard flow. The
other two alternatives (figure 2d and 2e) were
both thought of as too much information and
not relevant to the participants, as the
deployment technician would complete the
process.</p>
      <p>Finally, for ‘unsupervised learning’, the
participants were confused by the charts and
graphs, considering figure 2i the better
description with a few changes. These changes
included changing an icon so that it fitted the
descriptive text better and combining the whole
of figure 2i with the right-hand side of figure 2h,
here showing the participant how
‘unsupervised machine learning’ works and
presenting the results in an understandable chart.</p>
    </sec>
    <sec id="sec-6">
      <title>2.2.2. Final designs as specified by the participants</title>
      <p>
        Using this feedback, we then updated the
page designs (figure 3) to match the participants’
preferences. For the environmental sensor
(figure 3a), we created an illustration to present
the sensor placement location and added
information about the sensor’s limitations, as
suggested by Cai et al [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. For ‘ground truth’,
we merged the content that spanned two pages
in figure 2f onto a single page in figure 3c. For
‘unsupervised learning’, as requested by the
participants, we merged figure 2h and 2i to
highlight the process of collecting and
presenting the data. From these final designs,
we updated the visual design style and created
a number of templates that we used for all
similar items (e.g. the SPHERE sensors).
      </p>
    </sec>
    <sec id="sec-7">
      <title>2.3. Validating the explanation and its interpretation</title>
      <p>Our next challenge was to validate this
explanation document (figure 4), to understand
whether we had created a translation of the
SPHERE platform that potential participants
would feel they could use to make an ‘informed
decision’. Overall, the participants liked the
document, all understanding at a high level
what data would be collected and how that data
would be used to identify their daily activity.
The participants did ask for a number of updates
(e.g. page order, image updates and a reduction
in the number of pages within the document),
and even though they understood the platform
(at a high level), they wanted to understand why
SPHERE was useful to them as individuals
with T2D.</p>
    </sec>
    <sec id="sec-8">
      <title>3. Next steps</title>
      <p>Our next steps are to investigate how we can
incorporate the findings from the validation
study, so that we reduce the number of pages
and not only explain the technical aspects of the
SPHERE platform but also understand how to
explain why this platform would be beneficial
to the participants, without influencing their
decision in consenting to have the platform
within their home. Additionally, we wish to
investigate the best medium for presenting this
content (paper or video) and understand how
this explanation document can work within the
first steps of creating a process for the
self-installation of the SPHERE platform.</p>
    </sec>
    <sec id="sec-9">
      <title>4. Acknowledgements</title>
      <p>We would like to thank Sue Mackinnon,
Jess Linington, Zoe Banks Gross and Fiona
Dowling from Knowle West Media Centre for
their support on this project. We would
additionally like to thank the SPHERE team
members who engaged in this project and took
the time to explain their work to us.</p>
      <p>This work was completed through
the SPHERE Next Steps Project funded by the
UK Engineering and Physical Sciences
Research Council (EPSRC), Grant
EP/R005273/1.</p>
    </sec>
    <sec id="sec-10">
      <title>5. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Amina</given-names>
            <surname>Adadi</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mohammed</given-names>
            <surname>Berrada</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)</article-title>
          .
          <source>IEEE Access</source>
          <volume>6</volume>
          :
          <fpage>52138</fpage>
          -
          <lpage>52160</lpage>
          . https://doi.org/10.1109/ACCESS.2018.2870052
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Stinne Aaløkke</given-names>
            <surname>Ballegaard</surname>
          </string-name>
          , Thomas Riisgaard Hansen, and
          <string-name>
            <given-names>Morten</given-names>
            <surname>Kyng</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>Healthcare in Everyday Life - Designing Healthcare Services for Daily Life</article-title>
          .
          <fpage>1807</fpage>
          -
          <lpage>1816</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M</given-names>
            <surname>Brezis</surname>
          </string-name>
          , … S Israel.
          <year>2008</year>
          .
          <article-title>Quality of informed consent for invasive procedures</article-title>
          .
          <source>International Journal for Quality in Health Care</source>
          . Retrieved December 15
          ,
          <year>2020</year>
          from https://academic.oup.com/intqhc/article-abstract/20/5/352/1794518
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Alison</given-names>
            <surname>Burrows</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ian</given-names>
            <surname>Craddock</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>SPHERE: Meaningful and Inclusive Sensor-Based Home Healthcare</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Carrie J.</given-names>
            <surname>Cai</surname>
          </string-name>
          , Samantha Winter, David Steiner,
          <string-name>
            <given-names>Lauren</given-names>
            <surname>Wilcox</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Michael</given-names>
            <surname>Terry</surname>
          </string-name>
          .
          <year>2019</year>
          . “
          <article-title>Hello AI”: Uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making</article-title>
          .
          <source>Proceedings of the ACM on Human-Computer Interaction 3</source>
          , CSCW. https://doi.org/10.1145/3359206
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Rich</given-names>
            <surname>Caruana</surname>
          </string-name>
          , Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and
          <string-name>
            <given-names>Noemie</given-names>
            <surname>Elhadad</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Intelligible Models for HealthCare</article-title>
          .
          <source>Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’15</source>
          :
          <fpage>1721</fpage>
          -
          <lpage>1730</lpage>
          . https://doi.org/10.1145/2783258.2788613
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Liya</given-names>
            <surname>Ding</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Human Knowledge in Constructing AI Systems - Neural Logic Networks Approach towards an Explainable AI</article-title>
          .
          <source>Procedia Computer Science</source>
          <volume>126</volume>
          :
          <fpage>1561</fpage>
          -
          <lpage>1570</lpage>
          . https://doi.org/10.1016/j.procs.2018.08.129
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Johanna</given-names>
            <surname>Glaser</surname>
          </string-name>
          , Sarah Nouri, Alicia Fernandez, Rebecca L. Sudore, Dean Schillinger, Michele Klein-Fedyshin, and
          <string-name>
            <given-names>Yael</given-names>
            <surname>Schenker</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Interventions to Improve Patient Comprehension in Informed Consent for Medical and Surgical Procedures: An Updated Systematic Review</article-title>
          .
          <source>Medical Decision Making</source>
          <volume>40</volume>
          ,
          <fpage>119</fpage>
          -
          <lpage>143</lpage>
          . https://doi.org/10.1177/0272989X19896348
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Holzinger</surname>
          </string-name>
          , Chris Biemann, Constantinos S. Pattichis, and
          <string-name>
            <given-names>Douglas B.</given-names>
            <surname>Kell</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>What do we need to build explainable AI systems for the medical domain</article-title>
          ?
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          . https://doi.org/10.3109/14015439.2012.660499
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Alexandra</given-names>
            <surname>Kirsch</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Explain to whom? Putting the user in the center of explainable AI</article-title>
          .
          <source>CEUR Workshop Proceedings</source>
          <year>2071</year>
          . https://doi.org/10.1016/j.juro.2013.04.049
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Emily R.</given-names>
            <surname>Lai</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Critical Thinking: A Literature Review Research Report</article-title>
          . Retrieved December 15
          ,
          <year>2020</year>
          from http://www.pearsonassessments.com/research.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Susan</given-names>
            <surname>Lechelt</surname>
          </string-name>
          , Yvonne Rogers, and
          <string-name>
            <given-names>Nicolai</given-names>
            <surname>Marquardt</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Coming to your senses: Promoting critical thinking about sensors through playful interaction in classrooms</article-title>
          .
          <source>Proceedings of the Interaction Design and Children Conference</source>
          ,
          IDC
          <year>2020</year>
          :
          <fpage>11</fpage>
          -
          <lpage>22</lpage>
          . https://doi.org/10.1145/3392063.3394401
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Roger G.</given-names>
            <surname>Lemaire</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>Informed consent - A contemporary myth</article-title>
          ?
          <source>Journal of Bone and Joint Surgery - Series B 88</source>
          ,
          <issue>1</issue>
          :
          <fpage>2</fpage>
          -
          <lpage>7</lpage>
          . https://doi.org/10.1302/0301-620X.88B1.16435
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Tim</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Piers</given-names>
            <surname>Howe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Liz</given-names>
            <surname>Sonenberg</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Explainable AI: Beware of Inmates Running the Asylum</article-title>
          .
          <source>IJCAI International Joint Conference on Artificial Intelligence</source>
          . https://doi.org/10.1016/j.jsams.2012.02.003
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Alun</given-names>
            <surname>Preece</surname>
          </string-name>
          , Dan Harborne, Dave Braines, Richard Tomsett, and
          <string-name>
            <given-names>Supriyo</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Stakeholders in Explainable AI</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Marco Tulio</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sameer</given-names>
            <surname>Singh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Carlos</given-names>
            <surname>Guestrin</surname>
          </string-name>
          .
          <year>2016</year>
          . “
          <article-title>Why should I trust you?” Explaining the predictions of any classifier</article-title>
          .
          <source>In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          ,
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          . https://doi.org/10.1145/2939672.2939778
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Yael</given-names>
            <surname>Schenker</surname>
          </string-name>
          , Alicia Fernandez, and
          <string-name>
            <given-names>Rebecca</given-names>
            <surname>Sudore</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Interventions to Improve Patient Comprehension in Informed Consent for Medical and Surgical Procedures: A Systematic Review</article-title>
          .
          <source>Medical Decision Making</source>
          <volume>31</volume>
          ,
          <issue>1</issue>
          :
          <fpage>151</fpage>
          -
          <lpage>173</lpage>
          . https://doi.org/10.1177/0272989X10364247
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Bastian</given-names>
            <surname>Seegebarth</surname>
          </string-name>
          , Felix Müller, Bernd Schattenberg, and
          <string-name>
            <given-names>Susanne</given-names>
            <surname>Biundo</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Making Hybrid Plans More Clear to Human Users - A Formal Approach for Generating Sound Explanations</article-title>
          .
          <source>International Conference on Automated Planning and Scheduling</source>
          :
          <fpage>225</fpage>
          -
          <lpage>233</lpage>
          . Retrieved from https://www.aaai.org/ocs/index.php/ICAPS/ICAPS12/paper/viewPaper/4691
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <collab>Diabetes UK</collab>
          .
          <year>2020</year>
          . Type 2 diabetes. https://www.diabetes.org.uk/type-2-diabetes
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Rebecca</given-names>
            <surname>Voelker</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Diagnosing Fractures With AI</article-title>
          .
          <source>JAMA</source>
          <volume>320</volume>
          ,
          <issue>1</issue>
          :
          <fpage>23</fpage>
          . https://doi.org/10.1001/jama.2018.8565
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Jichen</given-names>
            <surname>Zhu</surname>
          </string-name>
          , Antonios Liapis,
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Risi</surname>
          </string-name>
          , Rafael Bidarra, and
          <string-name>
            <given-names>G. Michael</given-names>
            <surname>Youngblood</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation</article-title>
          .
          <source>IEEE Conference on Computational Intelligence and Games (CIG)</source>
          . https://doi.org/10.1109/CIG.2018.8490433
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Ni</given-names>
            <surname>Zhu</surname>
          </string-name>
          , Tom Diethe, Massimo Camplani, Lili Tao, Alison Burrows, Niall Twomey, Dritan Kaleshi, Majid Mirmehdi,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Flach</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Ian</given-names>
            <surname>Craddock</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Bridging e-Health and the Internet of Things: The SPHERE Project</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          <volume>30</volume>
          ,
          <issue>4</issue>
          :
          <fpage>39</fpage>
          -
          <lpage>46</lpage>
          . https://doi.org/10.1109/MIS.2015.57
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <article-title>How Machine Learning Is Transforming Clinical Decision Support Tools</article-title>
          . Retrieved December 14,
          <year>2020</year>
          , from https://healthitanalytics.com/features/how-machine-learning-is-transforming-clinical-decision-support-tools
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>