<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Trustworthy &quot;blackbox&quot; Self-Adaptive Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Beatriz</forename><surname>Cabrero-Daniel</surname></persName>
							<email>beatriz.cabrero-daniel@gu.se</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Gothenburg</orgName>
								<address>
									<addrLine>Hörselgången 5, 417 56</addrLine>
									<settlement>Göteborg</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yasamin</forename><surname>Fazelidehkordi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Gothenburg</orgName>
								<address>
									<addrLine>Hörselgången 5, 417 56</addrLine>
									<settlement>Göteborg</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olga</forename><surname>Ratushniak</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Gothenburg</orgName>
								<address>
									<addrLine>Hörselgången 5, 417 56</addrLine>
									<settlement>Göteborg</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Trustworthy &quot;blackbox&quot; Self-Adaptive Systems</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">FF9AC308D0EF3156BBA6D835A06DCA18</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-04-29T06:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Trustworthy AI</term>
					<term>Human Oversight</term>
					<term>Autonomous Vehicles</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>For humans to trust Self-Adaptive Systems in critical situations, these systems must be robust, ethical, and lawful, yet human intelligence is still needed to make ethical decisions. This paper presents a framework to discuss human values in the RE process for Self-Adaptive Systems, and the RE-specific challenges arising from the AI paradigm shift towards foundation models: self-supervised blackboxes. Semi-autonomous heavy mining vehicles serve as a running example for presenting the requirements.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>There is much public discussion on how Artificial Intelligence (AI) differs from human intelligence. We trust the latter; we are wary of the former. Industry practitioners share these concerns and put effort into measuring safety, privacy, etc. Their goal is to ensure that AI-based Self-Adaptive Systems (SAS) can at least reach human performance in the tasks they were designed for <ref type="bibr" target="#b0">[1]</ref>. However, these efforts are often insufficient for humans to trust SASs, especially with the introduction of foundation models, such as OpenAI's ChatGPT, rapidly permeating society.</p><p>Foundation models are based on large-scale self-supervised deep learning algorithms <ref type="bibr" target="#b1">[2]</ref>, whose inner workings are not transparent, making them difficult to explain to users and hard for users to interpret. Moreover, foundation models often use large amounts of unlabelled data, often gathered with little regard for ethical concerns, e.g., diversity. The more complex and accurate the models become, the more data is needed to train them, and the harder it is to explain their decision-making process. Hence the conflict between these powerful AI "blackboxes" and user trust <ref type="bibr" target="#b2">[3]</ref>.</p><p>Requirements Engineering (RE) guidelines for ethical AI were reviewed with the aim of building a framework for Trustworthy SASs (T-SASs). The outlined T-SAS framework is motivated by the emergence of semi-autonomous heavy mining vehicles, used here as a running example, which raise the concerns addressed below. Nevertheless, the T-SAS framework could also address human values in other fields. The focus will be on human oversight, which is still needed to promote trust in SASs <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6]</ref>. 
The insights on human-on-the-loop (HOTL) expectations for T-SAS monitoring and human intervention aim to foster discussions among RE practitioners about creating T-SASs that adhere to ethical principles and laws <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b6">7]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background and Mining Context</head><p>Aristotle defined credibility in terms of wisdom, virtue, and goodwill. Centuries later, EU guidelines state that AI should be trustworthy, that is, robust, lawful, and ethical <ref type="bibr" target="#b3">[4]</ref>. Fig. <ref type="figure" target="#fig_0">1</ref> shows requirements related to human autonomy and shared responsibility in EU guidelines. Evaluating whether adaptive systems meet stakeholders' needs often focuses on robustness verification, but this may not capture ethical values <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9]</ref>. Nevertheless, embedding ethical values in SASs is challenging, partly due to recent AI developments such as foundation models, e.g., text-to-image generators for non-expert users <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b1">2]</ref>. Designing comprehensive evaluation strategies for these complex, industrial systems is difficult due to the lack of auditability and sustainability analysis, and the emergence of unforeseen skills during training <ref type="bibr" target="#b1">[2]</ref>. Moreover, the lack of open APIs and benchmarks hinders research on foundation models' transparency, robustness, fairness, etc. Furthermore, the resources needed to train and test such systems hinder academics' access to evaluating their benefits and harms <ref type="bibr" target="#b1">[2]</ref>. Nevertheless, high-risk SASs like Autonomous Vehicles (AVs), potentially using foundation models, must show transparency to allow for human oversight and intervention <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. 
SASs must inform diverse stakeholders, e.g., end-users or third-party auditors, about their capabilities and limitations and trace them back to input data to enable responsibility reasoning <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>. Responsibility sharing and mitigation of foreseeable misuse are challenging and raise ethical questions that need to be answered during the RE process <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b12">13]</ref>.</p><p>Mining AVs in safety-critical situations are high-risk AI products; therefore, a HOTL is needed to monitor the AVs and intervene when prompted <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. Human drivers and AVs primarily rely on vision, or Computer Vision (CV), to avoid danger, and their responsibilities must be balanced <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. AI algorithms can help remote operators of mining vehicles in critical situations: by measuring user attention, whether of the driver or the remote operator, to reduce reaction times, or by facilitating fallback to human control when AI confidence is low <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b6">7]</ref>. Even HOTL AVs can be involved in incidents, potentially fatal with heavy mining machinery, so risks arising from faulty interactions must be mitigated. Human-AI interaction is receiving increasing academic attention, together with the limitations of AVs, including their benefits, harms, and development practices <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18]</ref>. 
The AI paradigm is shifting to blackbox models, hindering HOTL-SAS interaction and raising the question of how to split responsibility for decision-making.</p><p>Deep Learning algorithms are increasingly popular for detecting edge cases where human intervention might be needed, but they rely on large amounts of annotated data, which are difficult or impossible to gather and expensive and time-consuming to curate <ref type="bibr" target="#b1">[2]</ref>. Moreover, sensor difficulties, e.g., extreme weather affecting visibility, or cognitive limits, e.g., insufficient training data, might cause malfunctions <ref type="bibr" target="#b18">[19]</ref>. The RE process therefore needs to set standards for data quality, security, and privacy <ref type="bibr" target="#b9">[10]</ref>. Based on the data, robustness needs to be periodically evaluated by stakeholders, using performance metrics and criteria that reflect their values and goals, e.g., ore throughput rate <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b9">10]</ref>. Limitations of mining AVs should be clearly explained to the HOTL at all times, e.g., to prevent incidents, improve throughput rates, or audit accidents <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b6">7]</ref>. Transparency, though, is not always achievable, especially with opaque blackbox algorithms or foundation models.</p></div>
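The fallback-to-human-control mechanism described above can be illustrated in code. The following is a minimal sketch only, not the authors' implementation; the threshold value and all function and field names are assumptions introduced for illustration:

```python
# Illustrative sketch of a confidence-gated fallback to the human-on-the-loop
# (HOTL). The 0.75 threshold and all names are assumptions, not cited values.

CONFIDENCE_THRESHOLD = 0.75  # below this, the SAS defers to the human


def decide(action: str, confidence: float) -> dict:
    """Route a proposed SAS action, falling back to the HOTL at low confidence."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Fallback: hand control to the remote operator, passing along the
        # context needed for transparency (proposed action and confidence).
        return {
            "executor": "human",
            "proposed_action": action,
            "confidence": confidence,
            "reason": "low AI confidence; human intervention requested",
        }
    # Autonomous execution, still recorded for post hoc accountability.
    return {"executor": "sas", "action": action, "confidence": confidence}


# Example: a mining AV facing degraded visibility defers to the operator.
assert decide("proceed", 0.40)["executor"] == "human"
assert decide("proceed", 0.90)["executor"] == "sas"
```

Note that even in this toy form, the low-confidence branch returns the proposed action and the confidence score rather than silently stopping, so the operator can reason about responsibility when overriding the system.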
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Framework for Trustworthy Self-Adaptive Systems</head><p>This section outlines a framework to guide the RE process for T-SASs, focusing on requirements for HOTL mechanisms (see Figure <ref type="figure" target="#fig_0">1</ref>) in light of the trend to incorporate foundation models such as GPT-3, DALL-E, or BERT <ref type="bibr" target="#b19">[20]</ref>. The relationship between the concepts is also discussed:</p><p>Robustness. Classic AIs use annotated data, whilst foundation models use large volumes of unlabeled data, removing the difficult and time-consuming task of curating data sets. This paradigm can particularly benefit AVs for mining, which inherently need to deal with previously unseen scenarios. Nevertheless, foundation models, especially those learning online, can be affected by incorrect, redundant, or unstable data, which could lead to safety-critical situations. Therefore, the T-SAS framework promotes the usage of high-quality, diverse, self-updating, and self-augmenting data sets <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b3">4]</ref>. Appropriate requirements for data availability, usability, consistency, and integrity must be discussed <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b0">1]</ref>.</p><p>Human oversight. Whilst foundation models can accomplish complex tasks, e.g., image synthesis, they still show limitations, e.g., generalizing to new scenes, mainly due to self-supervised training <ref type="bibr" target="#b1">[2]</ref>. Even if totally reliable, SASs incorporating such models would still need to be transparent to facilitate human oversight, foster human autonomy, and, ultimately, be trustworthy. HOTL-SAS interaction is an open and important problem for humans, who should be able to supervise and override SAS decisions at all times. 
Therefore, T-SASs must integrate HOTL strategies and monitoring interfaces appropriate to their end-users, designed to address the transparency and accountability needs of T-SASs <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b9">10]</ref>.</p><p>Transparency. T-SASs should provide concise, complete, correct, and clear explanations that are relevant, accessible, and comprehensible to users in a context (use or foreseeable misuse), to avoid risks to health, safety, or fundamental rights <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b2">3]</ref>. These requirements are intended to ensure human autonomy and responsibility sharing, but integrating them into SASs is challenging. Previous work has focused on highly trained operators, e.g., aircraft pilots, but there is still a need to investigate how to design interactions with non-expert users <ref type="bibr" target="#b10">[11]</ref>. Training end-users while they use SASs could be considered. For that, appropriate metrics and criteria, adapted to the user and the operation context, would be needed to ensure clarity and avoid ambiguity about the state of the T-SAS.</p><p>Accountability. As discussed above, many SASs, including AVs, cannot ensure safety on their own and need to be monitored by humans during operation. Even when SASs are not entirely robust, they might still be able to produce priors and convey information that greatly helps the HOTL in critical situations. This has long been a focus of Human-Computer Interaction research <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b6">7]</ref>. Moreover, T-SASs must also be accountable, justifying their goals, motivations, and rationale in post hoc analyses by third parties. This topic is strongly related to detecting, leveraging, and mitigating risks by public authorities. 
Therefore, the framework should explicitly connect these needs to open communication requirements, which are critical for T-SASs that closely interact with humans, e.g., AV drivers <ref type="bibr" target="#b3">[4]</ref>.</p></div>
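The transparency and accountability requirements above imply that a T-SAS should expose its current capability, limitations, confidence, and rationale in a form the HOTL can act on, and keep an auditable trace for third parties. A minimal sketch of such a report follows; the field names are assumptions for illustration, not a standardized schema:

```python
# Illustrative sketch of a T-SAS status report for the HOTL. Field names are
# assumptions introduced for illustration, not a standard or cited schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TSASReport:
    """Status information a T-SAS could expose for oversight and audit."""

    capability: str        # what the system is currently doing
    limitations: List[str] # known limitations in the current context
    confidence: float      # self-assessed confidence in [0, 1]
    rationale: str         # explanation of the current decision
    audit_log: List[str] = field(default_factory=list)  # for post hoc analysis

    def record(self, event: str) -> None:
        # Append events so third parties can audit decisions post hoc.
        self.audit_log.append(event)


# Example: a mining AV reporting degraded operating conditions to its operator.
report = TSASReport(
    capability="autonomous haulage on segment B",
    limitations=["degraded visibility", "junction absent from training data"],
    confidence=0.6,
    rationale="reduced speed due to low sensor visibility",
)
report.record("speed reduced from 30 to 15 km/h")
```

The point of the sketch is that explanation (the `rationale` and `limitations` fields, read by the HOTL during operation) and accountability (the `audit_log`, read by third parties after the fact) are distinct requirements served by distinct data.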
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>Humans often mistrust SASs or show automation bias <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b10">11]</ref>. Both are concerning as SASs increasingly integrate foundation models, which are far from transparent or auditable <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b1">2]</ref>. Much effort has been devoted to supporting practitioners in addressing human values in the RE process, but the absence of clear guidelines, benchmarks, metrics, and evaluation criteria makes this task challenging. As a result, there is still a need for human oversight, e.g., fallback procedures <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b15">16]</ref>. Academics from different backgrounds should examine the models' biases and limitations and inform society about their trustworthiness <ref type="bibr" target="#b1">[2]</ref>. These recommendations are based on existing international laws, domestic legislation, and AI development frameworks and aim to increase awareness among RE practitioners and inspire the development of a generic framework for creating T-SASs. Efforts to homogenise mining processes are already being made, but further research is needed to adequately address human values in HOTL mining SASs. For instance, it is necessary to consider the implications that foundation models will have for other ethical considerations. Agreeing with practitioners on appropriate recommendations to address human values in the RE process for T-SASs would be a necessary next step. Frameworks from other disciplines and the ad hoc practices of RE practitioners could be studied to propose adaptations to existing frameworks that better address human values in T-SAS development. 
Data governance should in turn be aligned with stakeholders' values, e.g., non-discrimination, and requirements such as privacy or fairness. These considerations are left for future work.</p><p>This work is based on European Union guidelines, but different values might prevail in non-EU countries. Even within the EU, revisions to the AI legislation, which is still in draft form, might have a significant impact on the SASs now in development. As such, it is important for the framework to adapt to new, unforeseeable trust elements introduced by public authorities that might, directly or indirectly, impact the expectations for T-SASs. As a final note, future research must also address the question of how to allow for diverse legislation and context-dependent interpretation of T-SAS requirements.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Framework for trustworthy SAS using opaque self-supervised AI models.</figDesc><graphic coords="2,89.29,263.17,416.69,60.65" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work is thanks to the University of Gothenburg's Amanuens program. Thanks to Prof. Berger and Assoc. Prof. Horkoff for their valuable guidance. This work was supported by the Vinnova project ASPECT [2021-04347].</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Requirements engineering for artificial intelligence: What is a requirements specification for an artificial intelligence?</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Berry</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-98464-9_2</idno>
	</analytic>
	<monogr>
		<title level="j">LNCS</title>
		<imprint>
			<biblScope unit="volume">13216</biblScope>
			<biblScope unit="page" from="19" to="25" />
			<date type="published" when="2022">2022</date>
			<publisher>Springer Science and Business Media Deutschland GmbH</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">On the opportunities and risks of foundation models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Bommasani</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2108.07258</idno>
		<ptr target="https://arxiv.org/abs/2108.07258" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" />
		<title level="m">EUR-Lex -52021PC0206 -EN -EUR-Lex</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<idno type="DOI">10.2759/002360</idno>
		<title level="m">European Commission and Directorate-General for Communications Networks, Content and Technology, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment</title>
				<imprint>
			<publisher>Publications Office</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Entrust: engineering trustworthy self-adaptive software with dynamic assurance cases</title>
		<author>
			<persName><forename type="first">R</forename><surname>Calinescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Weyns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gerasimou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Iftikhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Habli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kelly</surname></persName>
		</author>
		<idno type="DOI">10.1145/3180155.3182540</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="495" to="495" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Toward requirements specification for machine-learned components</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rahimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kokaly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chechik</surname></persName>
		</author>
		<idno type="DOI">10.1109/REW.2019.00049</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE</title>
		<imprint>
			<biblScope unit="page" from="241" to="244" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Dimatteo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Berry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Czarnecki</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-2584/RE4AI-paper2.pdf" />
		<title level="m">Requirements for monitoring inattention of the responsible human in an autonomous vehicle: The recall and precision tradeoff</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Halme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Agbese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Antikainen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H.-K</forename><surname>Alanen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jantunen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Khan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-K</forename><surname>Kemell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vakkuri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Abrahamsson</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org" />
		<title level="m">Ethical user stories: Industrial study</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">B</forename><surname>Aydemir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dalpiaz</surname></persName>
		</author>
		<idno type="DOI">10.1145/3194770.3194778</idno>
		<ptr target="https://doi.org/10.1145/3194770.3194778" />
		<title level="m">A roadmap for ethics-aware software engineering</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Ethics is a software design concern</title>
		<author>
			<persName><forename type="first">I</forename><surname>Ozkaya</surname></persName>
		</author>
		<idno type="DOI">10.1109/MS.2019.2902592</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Software</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="4" to="8" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The impact of automation assisted aircraft separation on situation awareness</title>
		<author>
			<persName><forename type="first">A.-Q</forename><forename type="middle">V</forename><surname>Dao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Brandt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Battiste</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-P</forename><forename type="middle">L</forename><surname>Vu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Strybel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W</forename><surname>Johnson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Human Interface and the Management of Information. Information and Interaction</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Salvendy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Smith</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="738" to="747" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Virginia Dignum: Responsible artificial intelligence: how to develop and use AI in a responsible way</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">E</forename><surname>Gold</surname></persName>
		</author>
		<idno type="DOI">10.1007/S10710-020-09394-1</idno>
		<ptr target="https://link.springer.com/article/10.1007/s10710-020-09394-1" />
	</analytic>
	<monogr>
		<title level="j">Genetic Programming and Evolvable Machines</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="137" to="139" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><surname>Doshi-Velez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kortz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Budish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bavitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gershman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>O'Brien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Scott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schieber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Waldo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Weinberger</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1711.01134</idno>
		<title level="m">Accountability of ai under the law: The role of explanation</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Monocular human pose estimation: A survey of deep learning-based methods</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.1016/J.CVIU.2019.102897</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Vision and Image Understanding</title>
		<imprint>
			<biblScope unit="volume">192</biblScope>
			<biblScope unit="page">102897</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Toward fast and accurate human pose estimation via soft-gated skip connections</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bulat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kossaifi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tzimiropoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pantic</surname></persName>
		</author>
		<idno type="DOI">10.1109/FG47880.2020.00014</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings -2020 15th IEEE International Conference on Automatic Face and Gesture Recognition</title>
				<meeting>2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="8" to="15" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Attention for vision-based assistive and automated driving: A review of algorithms and datasets</title>
		<author>
			<persName><forename type="first">I</forename><surname>Kotseruba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tsotsos</surname></persName>
		</author>
		<idno type="DOI">10.1109/TITS.2022.3186613</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Intelligent Transportation Systems</title>
		<imprint>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Updating our understanding of situation awareness in relation to remote operators of autonomous vehicles</title>
		<author>
			<persName><forename type="first">C</forename><surname>Mutzenich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Durant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Helman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dalton</surname></persName>
		</author>
		<idno type="DOI">10.1186/S41235-021-00271-8/FIGURES/6</idno>
	</analytic>
	<monogr>
		<title level="j">Cognitive Research: Principles and Implications</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Social impacts of ethical artificial intelligence and autonomous system design</title>
		<author>
			<persName><forename type="first">N</forename><surname>Hutchins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Kirkendoll</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hook</surname></persName>
		</author>
		<idno type="DOI">10.1109/SYSENG.2017.8088298</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 IEEE International Symposium on Systems Engineering (ISSE 2017)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">2D human pose estimation: New benchmark and state of the art analysis</title>
		<author>
			<persName><forename type="first">M</forename><surname>Andriluka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Pishchulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Gehler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schiele</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPR.2014.471</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="3686" to="3693" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.1810.04805</idno>
		<ptr target="https://arxiv.org/abs/1810.04805" />
		<title level="m">BERT: Pre-training of deep bidirectional transformers for language understanding</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Borg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H.-M</forename><surname>Heyn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Horkoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Habibullah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Knauss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Knauss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Li</surname></persName>
		</author>
		<ptr target="https://www.vinnova.se/en/p/precog-requirements-engineering-toward-safe-machine-learning-based-perception-systems-for-autonomous-mobility/" />
		<title level="m">Precog: Requirements Engineering toward Safe Machine Learning-Based Perception Systems for Autonomous Mobility | Vinnova</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Manipulating and measuring model interpretability</title>
		<author>
			<persName><forename type="first">F</forename><surname>Poursabzi-Sangdeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Goldstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Hofman</surname></persName>
		</author>
		<idno type="DOI">10.1145/3411764.3445315</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Human Factors in Computing Systems</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
