<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On Evidence Capture for Accountable AI Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Wei Pang</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Milan Markovic</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iman Naja</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chiu Pang Fung</string-name>
          <email>C.P.Fung@leeds.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter Edwards</string-name>
          <email>p.edwards@abdn.ac.uk</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computing, University of Leeds</institution>
          ,
          <addr-line>Leeds, LS2 9JT</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Mathematical and Computer Sciences, Heriot-Watt University</institution>
          ,
          <addr-line>Edinburgh, EH14 4AS</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>School of Natural and Computing Sciences, University of Aberdeen</institution>
          ,
          <addr-line>Aberdeen, AB24 3UE</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This research explores evidence capture for accountable AI systems. First, different scopes of AI accountability are set out by extending an existing classification. Based on these scopes, two important and fundamental questions in evidence capture are answered: what types of evidence need to be captured, and how we can capture them to facilitate better AI accountability. We hope that this research can provide guidance on building more accountable AI systems with effective evidence capture and initiate further research along this line.</p>
      </abstract>
      <kwd-group>
        <kwd>Accountability</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Evidence Capture</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Accountability of AI systems has been increasingly studied in recent years, and
it has attracted much attention from not only academia [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and industry [
        <xref ref-type="bibr" rid="ref2 ref4">2, 4</xref>
        ],
but also government [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] and the public sector [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        Realising accountable AI systems entails knowing who the people were
behind the key decisions made throughout the AI system's life cycle, e.g., how the
system was designed and built, how it is being used and maintained, and how
the laws, regulations, and standards were followed [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>A crucial step to achieve this is to capture evidence effectively. To start with,
two questions need to be answered: what types of evidence need to be captured
and how they can be captured. Answering these two fundamental questions
will help implement functional evidence capture components for AI systems,
thus making AI systems accountable. It will also provide guidance on how we
can perform accountability-related investigations (e.g., incident investigation for
automated vehicles and bias investigation for AI-assisted recruitment) through
effective evidence gathering.</p>
      <p>Copyright © 2021 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>This research is supported by the RAInS project (https://rainsproject.org/) funded
by EPSRC (EP/R033846/1). We thank all other members of the project for their
inspiration and suggestions for this research.</p>
      <p>In this research, we will extensively discuss the above two questions. We
do not intend to provide specific solutions or frameworks for evidence capture;
instead, we aim to provide guidelines and suggestions, and we hope this could
inspire further research on this topic.</p>
      <p>The rest of the paper is organised as follows: first, different scopes of AI
accountability are set out in Section 2. Then, based on these scopes, in Section 3
a series of "what" questions are answered. This is followed by Section 4, in which
the "how" question is discussed. Finally, Section 5 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>The Three Scopes of AI Accountability</title>
      <p>
        AI accountability may have different scopes and meanings in various scenarios.
Following the brief discussion in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], we further extend the following three scopes
of AI accountability (which are called the three "senses" of AI accountability in
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]) by providing more details about each scope and expanding the third scope
(see Section 2.3). This will allow us to discuss the two questions of evidence
capture (what and how) in the following sections.
      </p>
      <sec id="sec-2-1">
        <title>Technology-oriented Accountability</title>
        <p>
          In this scope, accountability is considered as a feature or component of an AI
system per se. An AI system can offer related functions to make itself accountable.
These functions include explainability, attributability, auditability, and
provenance. Similar to accountability, each of these functions may have different scopes
and meanings in various scenarios. Explainability entails enabling the system to
justify its outputs (e.g., decisions and predictions). This can be automated by
XAI (eXplainable AI) tools, whether model-agnostic [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] or model-specific [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. In
the technical context, attributability involves identifying the roles that technical
components have; e.g., if the AI system consists of more than one model, then
it is important to know which model was responsible for an erroneous result.
Auditability entails allowing the system to be inspected and assessed.
Provenance entails documenting how the AI system and its components came to be,
e.g., the information about where the training data came from, how a model was
implemented, and how performance was evaluated.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Human-oriented Accountability</title>
        <p>
          Within this scope, accountability aims to hold persons or organisations
accountable. This is because AI systems are made by and for humans (we argue
that for the AI systems automatically produced by AutoAI/AutoML [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], humans
are the creators of these AutoAI/AutoML systems). This scope of accountability
focuses on the persons or organisations who are behind the AI systems, including
the AI designers, developers, service suppliers, and users. The proposed
Algorithmic Accountability Act of 2019 [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] is concerned with accountability in
this scope.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>Systems-oriented Accountability</title>
        <p>
          In the broadest scope, an AI system is viewed as a complex system, for example,
a socio-technical system [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] or a tech-legal system [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. Accountability in this
scope involves how one should build an accountable AI system considering not
only the complexity from social, technical, ethical, and legal perspectives, but
also the complicated interactions of system components across these
perspectives. The goal is to build an AI system that is not only technically robust, but
also trustworthy and compliant with legal and ethical requirements.
        </p>
        <p>
          It is noted that, further to the classification in [
          <xref ref-type="bibr" rid="ref10 ref7">7, 10</xref>
          ], we apply a complex
system view to this scope: we consider that an AI system is composed of
core AI components and their supporting facilities (e.g., hardware and software),
and such a system operates in, and interacts with, its environment.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>What to Capture</title>
      <p>To be accountable for everything means to be accountable for nothing.
Correspondingly, capturing everything is neither feasible nor necessary. To decide
what we will actually capture (the action), we need to answer the following three
questions: what is the scope of capture, what is the capability of capture, and
what is the obligation to capture.</p>
      <p>First, the scope of capture is determined by the scope of accountability in
consideration (as set out in Section 2); we will discuss this in Section 3.1. Second,
the capability of capture is subject to both AI system limitations and external
constraints; we will address this in Section 3.2. Third, the obligation to capture
is often determined by the requirements of specific domains, regulations, laws,
and standards; we will cover this in Section 3.3. Lastly, what we will actually
capture is the ultimate question, which is affected by the answers to the first
three questions. We will discuss this final question in Section 3.4.</p>
      <p>In what follows (Sections 3.1–3.4), we will not produce an exhaustive list
of the types of potential evidence in each subsection (as such an exhaustive
list is impossible to generate); rather, we provide the most essential and
representative types of evidence, some of which are accompanied by concrete
examples.</p>
      <sec id="sec-3-1">
        <title>The Scope of Capture</title>
        <p>Considering the three distinct scopes of AI accountability set out in Section 2,
different sets of evidence for capture can accordingly be considered for each
scope. We will now discuss them in detail.</p>
        <p>Technical Aspect The first scope of AI accountability is concerned with the
technical aspect. First, it is essential to record information about the training
and evaluation data (e.g., sources, pre-processing steps, and quality
analysis) and about the models, including the training paradigm and evaluation
procedures. Furthermore, explanations of AI predictions and inference processes,
as well as fairness, uncertainty, and robustness analysis (and even formal
verification) for the AI system, often need to be recorded for auditing and
potential investigations. In many cases, the above information has not been
generated, or it is not feasible to generate such information beforehand;
therefore, whenever possible, the approaches to generating such information
should be investigated, initially configured, and documented for post-hoc
accountability analysis. For instance, appropriate XAI and fairness analysis
tools for the AI system may be prepared and the instructions for using these
tools recorded.</p>
        <p>Social and Human Aspect The second scope of AI accountability focuses on
human and social activities. Human activities related to the AI system need to
be captured in order to hold the humans involved accountable. This includes human
decision-making processes and human-human interactions, either directly or through AI
system components, during the life cycle of an AI system. For example, the
following information may be captured as evidence: the stakeholders' meetings
and discussions on the AI system to be developed, AI designers' decision-making
processes on using particular AI models, the interactions between AI designers
and developers during the implementation stage of the AI system, and how users
operate an AI system deployed in the wild.</p>
        <p>Complex System Aspect As for the third and broadest scope of AI
accountability, we need to capture not only the information regarding the first two
scopes, but also the interactions and information flows of different elements of
the complex AI system, including the interactions between people, the AI
components, the lower-level software and hardware supporting infrastructure, and
the environment which the AI system operates in and interacts with.</p>
      </sec>
      <sec id="sec-3-2">
        <title>The Capability of Capture</title>
        <p>As mentioned at the beginning of this section, what can be captured is subject
to AI system limitations and other external constraints. As in Section 3.1,
we will again consider the different scopes of accountability set out in Section 2
to discuss this in detail.</p>
        <p>
          Technical Aspect The capture ability is determined by the functionalities and
limitations of the AI system. The limitations of the AI models being used may
affect such ability. For instance, explaining the inference and reasoning processes
of black-box models is generally more challenging compared to white-box models.
The robustness of some AI models, e.g., some sophisticated deep neural networks,
may be hard to analyse against adversarial attacks. For many cutting-edge AI
models, formal verification may be very challenging or even impossible [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ].
Social and Human Aspect If the documentation on some decisions made
during the AI system's life cycle is poorly done or missing, we may not be
able to capture related human activities. Consider a legacy AI system whose
documentation of the design and development stages is missing: we
will not be able to capture the activities of the designers and developers, or
their interactions. Therefore, it will be impossible to hold them accountable.
Complex System Aspect Hardware, software, environmental, ethical, and
legal factors can all affect the ability to capture. For example, considering the
sensors used by an automated vehicle, their limited capability means we can only
capture data up to a certain resolution. Another example is that we may not be
able to capture some human activities due to privacy and security considerations.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>Obligation to Capture</title>
        <p>
          For a specific AI application, we must consider related laws, regulations, and
standards to capture the required types of information, or capture them as
fully as possible. One example is one of the UK national standards for
automated vehicles, the BSI standard PAS 1882 [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], which suggests that high
frequency/resolution data should be captured 30 seconds before and after an
incident involving an automated vehicle, as well as during the incident.
Another example is the well-known (and much debated) "right to explanation"
of automated decision making in the EU's General Data Protection Regulation
(GDPR) [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], which demands explanations for decisions made by algorithms.
        </p>
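        <p>To make the PAS 1882-style obligation concrete, the following sketch (ours, purely illustrative; the class and parameter names are hypothetical and not taken from the standard) shows a rolling buffer that continuously retains the most recent window of sensor samples and, once an incident is signalled, also retains an equally long post-incident window, so the surviving evidence spans the window before, during, and after the incident.</p>

```python
from collections import deque

class IncidentRecorder:
    """Illustrative rolling capture: retain `window` samples before an
    incident and `window` samples after it (cf. a 30-second window at a
    fixed sample rate)."""

    def __init__(self, window: int):
        self.window = window
        self.pre = deque(maxlen=window)   # rolling pre-incident buffer
        self.post = []                    # filled only after an incident
        self.incident = False

    def record(self, sample):
        if not self.incident:
            self.pre.append(sample)       # overwrite oldest sample
        elif len(self.post) < self.window:
            self.post.append(sample)      # keep post-incident samples

    def mark_incident(self):
        self.incident = True

    def evidence(self):
        # Combined pre- and post-incident evidence, oldest first.
        return list(self.pre) + self.post

# Usage: a 3-sample window for brevity; a real deployment would size
# the window as sample_rate * 30 seconds.
rec = IncidentRecorder(window=3)
for t in range(10):
    rec.record(t)
rec.mark_incident()
for t in range(10, 15):
    rec.record(t)
print(rec.evidence())  # -> [7, 8, 9, 10, 11, 12]
```

        <p>The pre-incident buffer is bounded, so continuous capture satisfies the performance principle while still meeting the obligation once an incident occurs.</p>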
      </sec>
      <sec id="sec-3-4">
        <title>Action: What We Will Actually Capture</title>
        <p>Having covered the scope, capability, and obligation of evidence capture, we can
now discuss what we will actually capture.</p>
        <p>Deciding what evidence we actually capture should clearly consider
the above three factors simultaneously. For a particular application, from a
pragmatic perspective, we may start from the obligations, and then examine and improve
the capture capability within the scope of capture. By doing this we will obtain a
narrower set of evidence to be captured.</p>
        <p>For the above refined set, we propose the following three principles to
further refine it: first, evidence capture should not significantly affect the system's
performance (e.g., accuracy, efficiency, and reliability) or take too many resources
(e.g., computational time, storage, and human labour); we call this the
performance principle. Second, evidence capture should be minimally invasive to the AI
system and its environment (e.g., requiring no significant change to the AI
system or environment); we call this the friendly principle. Third, subject to
the above two principles, capturing more is better than capturing less; we
call this the redundancy principle.</p>
        <p>Finally, evidence capture needs to consider the nature of the application
domain and the requirements of the particular accountability investigation.
Capturing potential evidence for an automated vehicle is more likely to include
hardware and environmental data, such as the vehicle's engine information and
road and weather conditions, whereas capturing potential evidence for an AI
recruitment system may focus more on the technical aspect, such as bias analysis
and decision/prediction explanation.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>How to Capture</title>
      <p>In this section, we discuss the methods of evidence capture. First, the same three
principles in Section 3.4 need to be followed when designing capture methods.
Second, evidence capture should be carried out throughout the AI life cycle,
including requirement analysis, design, implementation, deployment, operation,
and maintenance. Related components and workflows which enable evidence
capture should be carefully designed and implemented for each stage of the AI
system life cycle. Third, based on the degree of automation, capture methods
can be classified into three categories: automatic, semi-automatic, and manual.
We discuss them in detail below.</p>
      <p>
        Automatic capture does not involve human intervention. Google's TFX
framework [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] offers functionalities to automatically record machine learning (ML)
model training and evaluation information. The sensors of an automated vehicle
can automatically collect system and environmental data. Automatic capture
can be further divided into two types: passive capture (capture just in case) and
active capture (capture initiated by specific events).
      </p>
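      <p>The passive/active distinction can be sketched in a few lines of Python. This is an illustration of ours (no particular framework such as TFX is assumed, and all names are hypothetical): every prediction is logged passively, just in case, while a specific event, here a low-confidence prediction, additionally marks the entry for active, event-initiated capture.</p>

```python
import time

def capture_evidence(predict_fn, threshold=0.5):
    """Wrap a prediction function with passive and active evidence capture."""
    log = []

    def wrapped(x):
        label, confidence = predict_fn(x)
        # Passive capture: record every prediction "just in case".
        entry = {"input": x, "label": label,
                 "confidence": confidence, "time": time.time()}
        # Active capture: a specific event (low confidence) triggers
        # additional evidence being attached to the entry.
        if confidence < threshold:
            entry["event"] = "low_confidence"
        log.append(entry)
        return label

    return wrapped, log

# Usage with a stub model standing in for a real classifier.
def stub_model(x):
    return ("positive", 0.9) if x > 0 else ("negative", 0.3)

predict, log = capture_evidence(stub_model)
predict(1)
predict(-1)
print(log[1]["event"])  # prints "low_confidence"
```

      <p>In a deployed system, the event predicate would be driven by the obligations identified in Section 3.3 rather than a fixed confidence threshold.</p>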
      <p>
        Semi-automatic capture requires some degree of human input; for instance,
the Model Card Toolkit (MCT) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], an open-source tool developed for
generating Model Cards [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], requires AI developers to manually input some model
information, such as the overview, owner, and limitations of the ML model, but
MCT can also rely on TFX components to automatically capture information
on training data and model performance, and it can automatically generate the
final Model Card in HTML format for better inspection. Another example is that
knowledge graphs have been used to support evidence capture by both human and
automatic means [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
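      <p>A minimal sketch of the semi-automatic pattern (ours; this is not the actual MCT API, and the function and field names are hypothetical) merges manually supplied fields with automatically captured metadata into a single model record, refusing to proceed when required manual fields are missing:</p>

```python
def build_model_record(manual: dict, auto: dict) -> dict:
    """Combine manually supplied fields (overview, owner, limitations)
    with automatically captured ones (data and performance metadata)."""
    required_manual = {"overview", "owner", "limitations"}
    missing = required_manual - manual.keys()
    if missing:
        # Human input is mandatory for these fields; fail loudly.
        raise ValueError(f"missing manual fields: {sorted(missing)}")
    # Automatically captured metadata is merged alongside the manual fields.
    return {**manual, **auto}

# Usage: manual fields from a developer, auto fields from a training run.
record = build_model_record(
    manual={"overview": "Toy classifier", "owner": "ML team",
            "limitations": "Demo only"},
    auto={"training_rows": 10_000, "accuracy": 0.92},
)
print(record["accuracy"])  # -> 0.92
```

      <p>Validating the manual fields at merge time mirrors how semi-automatic tools prompt developers for the information that cannot be captured automatically.</p>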
      <p>Finally, manual capture is the last resort if the first two approaches are not
feasible; for example, gathering stakeholders' meeting minutes and extracting
related information from these documents are likely to be done manually.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>We have extensively discussed two fundamental questions on evidence capture
for accountable AI systems: what to capture and how to capture it. We hope this
discussion can guide more effective evidence capture and thus contribute to the
development of more accountable AI systems.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Agarwal, R., Frosst, N., Zhang, X., Caruana, R., Hinton, G.E.: Neural additive models: Interpretable machine learning with neural nets. arXiv preprint arXiv:2004.13912 (2020)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Arnold, M., Bellamy, R.K.E., Hind, M., et al.: FactSheets: Increasing trust in AI services through supplier's declarations of conformity. IBM Journal of Research and Development 63(4/5), 6:1-6:13 (2019)</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Fang, H., Miao, H.: Introducing the Model Card Toolkit for easier model transparency reporting (2020), https://ai.googleblog.com/2020/07/introducing-model-card-toolkit-for.html</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., Crawford, K.: Datasheets for datasets (March 2018), https://www.microsoft.com/en-us/research/publication/datasheets-for-datasets/</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. He, X., Zhao, K., Chu, X.: AutoML: A survey of the state-of-the-art. Knowledge-Based Systems 212, 106622 (2021)</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Leofante, F., Narodytska, N., Pulina, L., Tacchella, A.: Automated verification of neural networks: Advances, challenges and perspectives (2018), https://arxiv.org/pdf/1805.09938.pdf</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Millar, J., Barron, B., Hori, K., Finlay, R., Kotsuki, K., Kerr, I.: Theme 3: Accountability in AI: Promoting greater societal trust. In: G7 Multistakeholder Conference on Artificial Intelligence, pp. 1-16. Montreal, Canada (2018)</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D., Gebru, T.: Model cards for model reporting. Association for Computing Machinery, New York, NY, USA (2019)</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Modi, A.N., Koo, C.Y., Foo, C.Y., et al.: TFX: A TensorFlow-based production-scale machine learning platform. In: KDD 2017 (2017)</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. Naja, I., Markovic, M., Edwards, P., Cottrill, C.: A semantic framework to support AI system accountability and audit. In: ESWC 2021 (in press). Greece (2021)</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. NHS: A guide to good practice for digital and data-driven health technologies (2021), tinyurl.com/NHSAICode (shortened URL)</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Ribeiro</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guestrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>"why should I trust you?": Explaining the predictions of any classi er</article-title>
          .
          <source>In: Proceedings of the 22nd ACM SIGKDD</source>
          , San Francisco, CA, USA,
          <year>August</year>
          13-
          <issue>17</issue>
          ,
          <year>2016</year>
          . pp.
          <fpage>1135</fpage>
          –
          <lpage>1144</lpage>
          (
          <year>2016</year>
          )
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Selbst</surname>
            ,
            <given-names>A.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Powles</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Meaningful information and the right to explanation</article-title>
          .
          <source>International Data Privacy Law</source>
          <volume>7</volume>
          (
          <issue>4</issue>
          ),
          <fpage>233</fpage>
          –
          <lpage>242</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Algorithmic accountability</article-title>
          .
          <source>Philosophical Transactions of the Royal Society A</source>
          <volume>376</volume>
          ,
          <fpage>20170362</fpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Shin</surname>
            ,
            <given-names>D.D.</given-names>
          </string-name>
          :
          <article-title>Socio-technical design of algorithms: Fairness, accountability, and transparency</article-title>
          .
          <source>In: 30th European Regional ITS Conference</source>
          . pp.
          <fpage>205</fpage>
          –
          <lpage>212</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cobbe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norval</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Decision provenance: Harnessing data flow for accountable systems</article-title>
          .
          <source>IEEE Access 7</source>
          ,
          <fpage>6562</fpage>
          –
          <lpage>6574</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>The British Standards Institution (BSI)</surname>
          </string-name>
          :
          <article-title>PAS 1882:2021 Data collection and management for automated vehicle trials for the purpose of incident investigation</article-title>
          . Specification, https://shop.bsigroup.com/ProductDetail/?pid=000000000030408477
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18. US House of Representatives: H.R.2231 - Algorithmic Accountability Act of
          <year>2019</year>
          , https://www.congress.gov/bill/116th-congress/house-bill/2231
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>