<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">On Evidence Capture for Accountable AI Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Wei</forename><surname>Pang</surname></persName>
							<email>w.pang@hw.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">School of Mathematical and Computer Sciences</orgName>
								<orgName type="institution">Heriot-Watt University Edinburgh</orgName>
								<address>
									<postCode>EH14 4AS</postCode>
									<country key="GB">UK</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">School of Natural and Computing Sciences</orgName>
								<orgName type="institution">University of Aberdeen Aberdeen</orgName>
								<address>
									<postCode>AB24 3UE</postCode>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Milan</forename><surname>Markovic</surname></persName>
							<email>milan.markovic@abdn.ac.uk</email>
							<affiliation key="aff1">
								<orgName type="department">School of Natural and Computing Sciences</orgName>
								<orgName type="institution">University of Aberdeen Aberdeen</orgName>
								<address>
									<postCode>AB24 3UE</postCode>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Iman</forename><surname>Naja</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">School of Natural and Computing Sciences</orgName>
								<orgName type="institution">University of Aberdeen Aberdeen</orgName>
								<address>
									<postCode>AB24 3UE</postCode>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Chiu</forename><forename type="middle">Pang</forename><surname>Fung</surname></persName>
							<email>c.p.fung@leeds.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">School of Mathematical and Computer Sciences</orgName>
								<orgName type="institution">Heriot-Watt University Edinburgh</orgName>
								<address>
									<postCode>EH14 4AS</postCode>
									<country key="GB">UK</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="department">School of Computing</orgName>
								<orgName type="institution">University of Leeds</orgName>
								<address>
									<postCode>LS2 9JT</postCode>
									<settlement>Leeds</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Peter</forename><surname>Edwards</surname></persName>
							<email>p.edwards@abdn.ac.uk</email>
							<affiliation key="aff1">
								<orgName type="department">School of Natural and Computing Sciences</orgName>
								<orgName type="institution">University of Aberdeen Aberdeen</orgName>
								<address>
									<postCode>AB24 3UE</postCode>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">On Evidence Capture for Accountable AI Systems</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">B9B232C221CACF0FA358D350507B919A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T04:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Accountability</term>
					<term>Artificial Intelligence</term>
					<term>Evidence Capture</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This research explores evidence capture for accountable AI systems. First, different scopes of AI accountability are set out by extending existing classification. Based on these scopes, two important and fundamental questions in evidence capture are answered: what types of evidence need to be captured and how we can capture them to facilitate better AI accountability. We hope that this research can provide guidance on building better accountable AI systems with effective evidence capture and initiate further research along this line.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Accountability of AI systems has been increasingly studied in recent years, and it has attracted much attention from not only academia <ref type="bibr" target="#b13">[14]</ref> and industry <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b3">4]</ref>, but also government <ref type="bibr" target="#b17">[18]</ref> and public sectors <ref type="bibr" target="#b10">[11]</ref>.</p><p>Realising accountable AI Systems entails knowing who the people were behind the key decisions made throughout the AI system's life cycle, e.g., how the system was designed and built, how it is being used and maintained, and how the laws, regulations, and standards were followed <ref type="bibr" target="#b9">[10]</ref>.</p><p>A crucial step to achieve this is to capture evidence effectively. To start with, two questions need to be answered: what types of evidence need to be captured and how they can be captured. Answering these two fundamental questions will help implement functional evidence capture components for AI systems, thus making AI systems accountable. It will also provide guidance on how we can perform accountability-related investigations (e.g., incident investigation for automated vehicles and bias investigation for AI-assisted recruitment) through effective evidence gathering.</p><p>In this research, we will extensively discuss the above two questions. We do not intend to provide specific solutions or frameworks for evidence capture; instead, we aim to provide guidelines and suggestions, and we hope this could inspire further research on this topic.</p><p>The rest of the paper is organised as follows: first, different scopes of AI accountability are set out in Section 2. Then based on these scopes, in Section 3 a series of "what" questions are answered. This is followed by Section 4, in which the "how" question is discussed. Finally, Section 5 concludes the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">The Three Scopes of AI Accountability</head><p>AI accountability may have different scopes and meanings in various scenarios. Following the brief discussion in <ref type="bibr" target="#b6">[7]</ref>, we further extend the following three scopes of AI accountability (which are called the three "senses" of AI accountability in <ref type="bibr" target="#b6">[7]</ref>) by providing more details about each scope and expanding the third scope (see Section 2.3). This will allow us to discuss the two questions of evidence capture (what and how) in the following sections.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Technology-oriented Accountability</head><p>In this scope, accountability is considered as a feature or component of an AI system per se. An AI system can offer related functions to make itself accountable. These functions include explainability, attributability, auditability, and provenance. Similar to accountability, each of these functions may have different scopes and meanings in various scenarios. Explainability entails enabling the system to justify its outputs (e.g., decisions and predictions). This can be automated by XAI (eXplainable AI) tools, whether model agnostic <ref type="bibr" target="#b11">[12]</ref> or model-specific <ref type="bibr" target="#b0">[1]</ref>. In the technical context, attributability involves identifying the roles that technical components have, e.g., if the AI System consists of more than one model, then it is important to know which model was responsible for an erroneous result. Auditability entails allowing the system to be inspected and assessed. Provenance entails documenting how the AI system and its components came to be, e.g., the information about where the training data came from, how a model was implemented, and how performance was evaluated.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Human-oriented Accountability</head><p>Within this scope, accountability aims to hold the persons or organisations accountable. This is because the AI systems are made by and for humans (we argue that for the AI systems automatically produced by AutoAI/AutoML <ref type="bibr" target="#b4">[5]</ref>, humans are the creators of these AutoAI/AutoML systems). This scope of accountability focuses on the persons or organisations who are behind the AI systems, including the AI designers, developers, service suppliers, and users. The proposed Algorithm Accountability Act of 2019 <ref type="bibr" target="#b17">[18]</ref> is concerned with the accountability in this scope.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Systems-oriented Accountability</head><p>In the broadest scope, an AI system is viewed as a complex system, for example, a socio-technical system <ref type="bibr" target="#b14">[15]</ref> or a tech-legal system <ref type="bibr" target="#b15">[16]</ref>. Accountability in this scope involves how one should build an accountable AI system considering not only the complexity from social, technical, ethical, and legal perspectives, but also the complicated interactions of system components across these perspectives. The goal is to build an AI system that is not only technically robust, but also trustworthy and complies with legal and ethical requirements.</p><p>It is noted that, further to the classification in <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b9">10]</ref>, we apply a complex system view to this scope, and we consider that an AI system is composed of core AI components and their supporting facilities (e.g., hardware and software), and such an system is operated in, and interacts with its environment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">What to Capture</head><p>To be accountable for everything means to be accountable for nothing. Correspondingly, capturing everything is neither feasible nor necessary. To decide what we will actually capture (the action), we need to answer the following three questions: what is the scope of capture, what is the capability of capture, and what is the obligation to capture.</p><p>First, the scope of capture is determined by the scope of accountability in consideration (as set out in Section 2); we will discuss this in Section 3.1. Second, the capability of capture is subject to both AI system limitations and external constraints; we will address this in Section 3.2. Third, the obligation to capture is often determined by the requirements of specific domains, regulations, laws, and standards; we will cover this in Section 3.3. Lastly, what we will actually capture is the ultimate question, which is affected by the answers of the first three questions. We will discuss this final question in Section 3. <ref type="bibr" target="#b3">4</ref>.</p><p>In what follows (Sections 3.1 ∼ 3.4), we will not produce an exhaustive list of the types of potential evidence in each subsection (as such an exhaustive list is impossible to generate), but rather, we provide the most essential and representative types of evidence, some of which are accompanied by concrete examples.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">The Scope of Capture</head><p>Considering the three distinct scopes of AI accountability set out in Section 2, different sets of evidence for capture can be accordingly considered for each scope. We will now discuss them in detail.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Technical Aspect</head><p>The first scope of AI accountability is concerned with the technical aspect. First, it is essential to record information about the training and evaluation data (e.g., sources, pre-processing processes, and quality analysis) and about the models, which includes the training paradigm and evaluation procedures. Furthermore, explanations of AI predictions and inference processes, fairness, uncertainty, robustness analysis (and even formal verification) for the AI system often need to be recorded for auditing and potential investigations. In many cases, the above information has not been generated or it is not feasible to generate such information beforehand; therefore, whenever possible, the approaches to generating such information should be investigated, initially configured, and documented for post-hoc accountability analysis. For instance, appropriate XAI and fairness analysis tools for the AI system may be prepared and the instructions for using these tools are recorded.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Social and Human Aspect</head><p>The second scope of AI accountability focuses on human and social activities. Human activities related to the AI system need to be captured in order to hold them accountable. This includes human decisionmaking processes and human-human interactions, either directly or through AI system components, during the life cycle of an AI system. For example, the following information may be captured as evidence: the stakeholders' meetings and discussions on the AI system to be developed, AI designers' decision making processes on using particular AI models, the interactions between AI designers and developers during the implementation stage of the AI system, and how users operate an AI system deployed in the wild.</p><p>Complex System Aspect As for the third and broadest scope of AI accountability, we need to capture not only the information regarding the first two scopes, but also the interactions and information flows of different elements of the complex AI system, including the interactions between people, the AI components, the lower-level software and hardware supporting infrastructure, and the environment which the AI system is operated in and interacts with.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">The Capability of Capture</head><p>As mentioned at the beginning of this section, what can be captured is subject to the AI system limitations and other external constraints. As in Section 3.1, we will again consider the different scopes of accountability set out in Section 2 to discuss this in detail.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Technical Aspect</head><p>The capture ability is determined by the functionalities and limitations of the AI system. The limitations of the AI models being used may affect such ability. For instance, explaining the inference and reasoning processes of black-box models is generally more challenging compared to white-box models. The robustness of some AI models, e.g. some sophisticated deep neural networks, may be hard to analyse against adversarial attacks. For many cutting-edge AI models, their formal verification may be very challenging or even not possible <ref type="bibr" target="#b5">[6]</ref>.</p><p>Social and Human Aspect If the documentation on some decisions made during the AI system's life cycle is not done well or missing, we may not be able to capture related human activities. Considering a legacy AI system, the documentation of which on the design and development stages is missing, we will not be able to capture the activities of the designers and developers as well as their interactions. Therefore, it will be impossible to hold them accountable.</p><p>Complex System Aspect Hardware, software, environmental, ethical, and legal factors can all affect the ability to capture. For example, considering the sensors used by an automated vehicle, their limited ability means we can only capture the data up to a certain resolution. Another example that we may not be able to capture some human activities due to privacy and security considerations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Obligation to Capture</head><p>For a specific AI application, we must consider related laws, regulations, or standards to capture the required types of information or capture them as much as possible. One example is one of the UK national standards for automated vehicles, the BSI standard PAS 1882 <ref type="bibr" target="#b16">[17]</ref>, which suggests that high frequency/resolution data should be captured 30 seconds before and after an incident involving an automated vehicle, as well as during the incident. Another example is the well-known (and much debated) "right to explanation" of automated decision making in EU's General Data Protection Regulation (GDPR) <ref type="bibr" target="#b12">[13]</ref>, which demands explanations for decisions made by algorithms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Action: What We Will Actually Capture</head><p>Having covered the scope, capability, and obligation of evidence capture, we can now discuss what we will actually capture.</p><p>It is obvious that deciding what evidence we actually capture should consider the above three factors simultaneously. For a particular application, from a pragmatic perspective, we may start from the obligations, and then examine/improve the capture capability within the scope of capture. By doing this we will get a narrower set of evidence to be captured.</p><p>For the above refined set, we propose the following three principles to further refine it: first, evidence capture should not significantly affect the system performance (e.g., accuracy, efficiency, and reliability) or take too much resource (e.g., computational time, storage, and human labour), and we call this the performance principle. Second, evidence capture should be less invasive to the AI system and its environment (e.g., requiring no significant change to the AI system or environment), and we call this the friendly principle. Third, considering the above two principles, capturing more is better than capturing less, and we call this the redundancy principle.</p><p>Finally, evidence capturing needs to consider the nature of the application domain and the requirements of the particular accountability investigation. Capturing potential evidence for an automated vehicle is more likely to include hardware and environmental data, such as the vehicle's engine information, road and weather conditions; but capturing potential evidence for an AI recruitment system may focus more on the technical aspect, such as bias analysis and decision/prediction explanation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">How to Capture</head><p>In this section, we discuss the methods of evidence capture. First, the same three principles in Section 3.4 need to be followed when designing capture methods. Second, evidence capture should be carried out throughout the AI life cycle, including requirement analysis, design, implementation, deployment, operation, and maintenance. Related components and workflows which enable evidence capture should be carefully designed and implemented for each stage of the AI system life cycle. Third, based on the degree of automation, capturing methods can be classified into three categories: automatic, semi-automatic, and manual. We discuss them in detail below.</p><p>Automatic capture does not involve human intervention. Google's TFX framework <ref type="bibr" target="#b8">[9]</ref> offers functionalities to automatically record machine learning (ML) model training and evaluation information. The sensors of an automated vehicle can automatically collect system and environmental data. Automatic capture can be further divided into two types: passive capture (capture just in case) and active capture (capture initiated by specific events).</p><p>Semi-automatic capture requires some degree of human input; for instance, the Model Card Toolkit (MCT) <ref type="bibr" target="#b2">[3]</ref>, an open-source tool developed for generating the Model Card <ref type="bibr" target="#b7">[8]</ref>, requires AI developers to manually input some model information, such as the overview, owner, and limitations of the ML model, but MCT can also rely on TFX components to automatically capture information on training data and model performance, and it can automatically generate the final Model Card in HTML format for better inspection. 
Another example is the use of knowledge graphs to support evidence capture by both human and automatic means <ref type="bibr" target="#b9">[10]</ref>.</p><p>Finally, manual capture is the last resort when the first two approaches are not feasible; for example, gathering stakeholders' meeting minutes and extracting related information from such documents is likely to be done manually.</p></div>
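The knowledge-graph approach to linking technical and human evidence can be sketched as follows. The vocabulary terms follow the W3C PROV-O ontology, but the entities, the in-memory triple set, and the query helper are hypothetical illustrations, not the framework of [10]:

```python
# Minimal in-memory triple set; the vocabulary terms follow the W3C PROV-O
# ontology, but the entities (trainingRun1, dataset1, ...) are hypothetical.
triples = {
    ("ex:trainingRun1", "rdf:type", "prov:Activity"),
    ("ex:dataset1", "rdf:type", "prov:Entity"),
    ("ex:model1", "rdf:type", "prov:Entity"),
    ("ex:dev1", "rdf:type", "prov:Agent"),
    ("ex:trainingRun1", "prov:used", "ex:dataset1"),
    ("ex:model1", "prov:wasGeneratedBy", "ex:trainingRun1"),
    ("ex:trainingRun1", "prov:wasAssociatedWith", "ex:dev1"),
}

def accountable_agents(model, triples):
    """Follow wasGeneratedBy and then wasAssociatedWith to find the
    agent(s) behind a model, linking technical and human evidence."""
    activities = {o for s, p, o in triples
                  if s == model and p == "prov:wasGeneratedBy"}
    return {o for s, p, o in triples
            if s in activities and p == "prov:wasAssociatedWith"}

assert accountable_agents("ex:model1", triples) == {"ex:dev1"}
```

Because both automatically captured provenance (the training run) and manually captured human roles (the agent) live in one graph, a single traversal can answer accountability queries such as "who was behind this model?".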
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>We have extensively discussed two fundamental questions on evidence capture for accountable AI systems: what to capture and how to capture. We hope the discussion can guide more effective evidence capture and thus contribute to the development of better accountable AI systems.</p></div>		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This research is supported by the RAInS project (https://rainsproject.org/) funded by EPSRC (EP/R033846/1). We thank all other members of the project for their inspiration and suggestions for this research.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Neural additive models: Interpretable machine learning with neural nets</title>
		<author>
			<persName><forename type="first">R</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Frosst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Caruana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Hinton</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2004.13912</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">FactSheets: Increasing trust in AI services through supplier&apos;s declarations of conformity</title>
		<author>
			<persName><forename type="first">M</forename><surname>Arnold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K E</forename><surname>Bellamy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hind</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IBM Journal of Research and Development</title>
		<imprint>
			<biblScope unit="volume">63</biblScope>
			<biblScope unit="issue">4/5</biblScope>
			<biblScope unit="page">13</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Miao</surname></persName>
		</author>
		<ptr target="https://ai.googleblog.com/2020/07/introducing-model-card-toolkit-for.html" />
		<title level="m">Introducing the model card toolkit for easier model transparency reporting</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Datasheets for datasets</title>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Morgenstern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Vecchione</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wortman Vaughan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Daumé III</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Crawford</surname></persName>
		</author>
		<ptr target="https://www.microsoft.com/en-us/research/publication/datasheets-for-datasets/" />
		<imprint>
			<date type="published" when="2018-03">March 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">AutoML: A survey of the state-of-the-art</title>
		<author>
			<persName><forename type="first">X</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">212</biblScope>
			<biblScope unit="page">106622</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><surname>Leofante1</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Narodytska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Pulina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tacchella1</surname></persName>
		</author>
		<ptr target="https://arxiv.org/pdf/1805.09938.pdf" />
		<title level="m">Automated verification of neural networks: Advances, challenges and perspectives</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Theme 3: Accountability in AI: Promoting greater societal trust</title>
		<author>
			<persName><forename type="first">J</forename><surname>Millar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Barron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">F</forename><surname>Koichi Hori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kotsuki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kerr</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">G7 Multistakeholder Conference on Artificial Intelligence</title>
				<meeting><address><addrLine>Montreal, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zaldivar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Barnes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Vasserman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hutchinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Spitzer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">D</forename><surname>Raji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<title level="m">Model cards for model reporting</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">TFX: A tensorflow-based production-scale machine learning platform</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Modi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Y</forename><surname>Koo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Y</forename><surname>Foo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">KDD</title>
		<imprint>
			<biblScope unit="volume">2017</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A semantic framework to support AI system accountability and audit</title>
		<author>
			<persName><forename type="first">I</forename><surname>Naja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Markovi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Edward</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cottril</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ESWC 2021</title>
				<meeting><address><addrLine>Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note>in press</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="tinyurl.com/NHSAICode" />
		<title level="m">NHS: A guide to good practice for digital and data-driven health technologies</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note>this URL has been shortened</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">why should I trust you?&quot;: Explaining the predictions of any classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</title>
				<meeting>the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining<address><addrLine>San Francisco, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">August 13-17, 2016</date>
			<biblScope unit="page" from="1135" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Meaningful information and the right to explanation</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Selbst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Powles</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Data Privacy Law</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="233" to="242" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Algorithmic accountability</title>
		<author>
			<persName><forename type="first">H</forename><surname>Shah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Transactions of the Royal Society A</title>
		<imprint>
			<biblScope unit="volume">376</biblScope>
			<biblScope unit="page">20170362</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Socio-technical design of algorithms: Fairness, accountability, and transparency</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Shin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">30th European Regional ITS Conference</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="205" to="212" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Decision provenance: Harnessing data flow for accountable systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cobbe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Norval</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="6562" to="6574" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<ptr target="https://shop.bsigroup.com/ProductDetail/?pid=000000000030408477" />
		<title level="m">PAS 1882:2021 Data collection and management for automated vehicle trials for the purpose of incident investigation</title>
		<imprint>
			<publisher>The British Standards Institution</publisher>
		</imprint>
	</monogr>
	<note>specification</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<ptr target="https://www.congress.gov/bill/116th-congress/house-bill/2231" />
		<title level="m">Algorithmic Accountability Act of 2019, H.R. 2231</title>
		<imprint>
			<publisher>US House of Representatives</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
