<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Symbiotic AI: What is the Role of Trustworthiness?</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Miriana</forename><surname>Calvano</surname></persName>
							<email>miriana.calvano@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Bari &quot;Aldo Moro&quot;</orgName>
								<address>
									<addrLine>Via Edoardo Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Antonio</forename><surname>Curci</surname></persName>
							<email>antonio.curci@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Bari &quot;Aldo Moro&quot;</orgName>
								<address>
									<addrLine>Via Edoardo Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">University of Pisa</orgName>
								<address>
									<addrLine>Largo B. Pontecorvo 3</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Rosa</forename><surname>Lanzilotti</surname></persName>
							<email>rosa.lanzilotti@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Bari &quot;Aldo Moro&quot;</orgName>
								<address>
									<addrLine>Via Edoardo Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Antonio</forename><surname>Piccinno</surname></persName>
							<email>antonio.piccinno@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Bari &quot;Aldo Moro&quot;</orgName>
								<address>
									<addrLine>Via Edoardo Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<address>
									<addrLine>Ital-IA 2024, 29-30th May 2024</addrLine>
									<settlement>Naples</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Symbiotic AI: What is the Role of Trustworthiness?</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">5CE4A4FB1392A87C1C0AD651E25B4129</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:55+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Symbiotic AI</term>
					<term>Trustworthiness</term>
					<term>Design and Evaluation</term>
					<term>Human-Centered Approach</term>
					<term>AI Act (AIA)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The design, development, and use of Artificial Intelligence (AI) is crucial in modern society. The traditional design of AI systems focuses on models with very high performance without highlighting how relevant the role of humans is in this context. To create AI systems that suit end users' needs and preferences, it is important to involve them in each phase of the system's life cycle. AI systems must present interfaces and interaction paradigms that match users' cognitive models, ensuring usability and a positive User Experience (UX). In this new scenario, Human-Computer Interaction (HCI) and AI cross-fertilize each other, leading to a human-AI symbiosis. Researchers should shift the focus toward Symbiotic AI (SAI) systems, which aim to enhance humans' abilities without replacing them. This manuscript presents preliminary considerations for the creation of a framework to design high-quality SAI systems and of metrics that can be employed to evaluate them appropriately. Since SAI is a novel field, the manuscript focuses on the current investigation into the definition of the properties of SAI systems, stressing the importance of Trustworthiness, and on whether new design principles for SAI systems can be extracted from the AI Act.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The fast and broad spread of artificial intelligence (AI) over the past few years has allowed individuals to use new services, products, and systems to perform various tasks and activities. AI has been introduced in fields such as medicine, law, and education, raising several concerns, because the results of these systems can influence humans to make decisions that are often irreversible and can impact other individuals. Consequently, legal bodies and governments are working to regulate AI in order to protect humans through new laws, such as the Artificial Intelligence Act (AIA), which undertakes a risk-based approach to the design, development, and deployment of AI for EU citizens, identifying best and forbidden practices while delineating guiding principles <ref type="bibr" target="#b0">[1]</ref>. This implies that the future direction of AI is undergoing substantial changes that should be addressed with a multidisciplinary approach <ref type="bibr" target="#b1">[2]</ref>.</p><p>The main issue with AI systems is that the traditional approach to their development heavily focuses on achieving high-performing models and excellent metrics (e.g., accuracy, precision, recall). Such models are also called black boxes: users cannot analyze and comprehend the processes that lead to their outputs, causing low transparency <ref type="bibr" target="#b2">[3]</ref>. This can be addressed by adopting a human-centered approach when designing and developing AI systems, fostering a symbiotic relationship with humans and letting technology support humans' daily activities without replacing them, adapting to their mental and physical models <ref type="bibr" target="#b3">[4]</ref>. 
Human-Centred Design (HCD), which belongs to the Human-Computer Interaction (HCI) discipline, stresses that end users must always be involved in the creation of any kind of product, in order to create clear, appropriate, and effective interfaces that allow them to interact correctly with the software they are using <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b3">4]</ref>. On the other hand, software engineering (SE) is another pillar in the development of quality software systems, as it is the discipline that studies how software should be developed, maintained, and used through specific standards and processes <ref type="bibr" target="#b7">[8]</ref>. It is therefore crucial to integrate practices and principles from the two disciplines to support designers and developers in creating artificial intelligence systems that enable a symbiotic relationship with their end users.</p><p>This research is part of the Future Artificial Intelligence Research (FAIR) project, which aims to bring innovation to the European Union in the context of AI. FAIR follows a holistic and multidisciplinary approach to rethink the foundations of AI and investigate its social impact. Its goal is to build systems capable of interacting and collaborating with humans and of fostering trustworthiness. Specifically, the research presented in this article is performed within Spoke 6, named Symbiotic AI (SAI), which investigates the scientific, social, economic, legal, and ethical challenges related to the growing symbiosis between humans and AI. SAI refers to a collaborative relationship between humans and AI systems in which "the human understands and intuitively reacts to the machine, and the machine understands and intuitively reacts to the human" <ref type="bibr" target="#b8">[9]</ref>. 
To reach the human-AI symbiosis, users should trust the system's decisions and properly comprehend them, making Trustworthiness one of the main properties to consider when dealing with such systems. However, due to the novelty of the field, limited work is available in the literature. Our research aims to propose a comprehensive framework and evaluation metrics to support designers, developers, and AI specialists in creating and evaluating Symbiotic AI (SAI) systems that inspire trust, ensure fairness, and are responsible and compliant with the various domains in which they operate <ref type="bibr" target="#b9">[10]</ref>.</p><p>This manuscript is structured as follows: Section 2 presents how trustworthiness can be defined in the SAI field, exploring the perspectives of the European Commission and academia; Section 3 describes the approach that will be undertaken to design and evaluate SAI systems; Section 4 concludes and outlines the future work of the project.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Trustworthiness for SAI Systems</head><p>For people and society, trustworthiness is undoubtedly one of the prerequisites that AI systems should have to be used without hesitation <ref type="bibr" target="#b10">[11]</ref>. It therefore becomes the starting point of our research because of its breadth and multifaceted nature. In this section, the concept of trustworthiness is explored by analysing the perspectives of European policymakers and academics to determine how to consider it in the context of SAI.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">The European Commission Perspective</head><p>This section focuses on two documents drafted by the European Commission: the Ethics Guidelines for Trustworthy AI and the AIA. The goal is to delineate a clear image of the standpoints of policymakers to create AI products that fully comply with laws, regulations, and norms and track the efforts of the EU concerning human rights, ethics, and philosophical issues.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.1.">Ethics Guidelines for Trustworthy AI</head><p>The role of the High-Level Expert Group on AI (AI HLEG) is to define the approach of the European Commission with respect to AI by indicating its key principles and policies. In 2019, the group drafted the "Ethics Guidelines for Trustworthy AI" report, which identifies seven requirements of Trustworthiness, the umbrella property that ensures a human-centric approach to AI <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>, illustrated in Figure <ref type="figure" target="#fig_0">1</ref>. These requirements are briefly described in the following:</p><p>• Human Agency and Oversight: incorporating mechanisms for human intervention in critical decision-making processes ensures human control and supervision over AI systems to prevent unintended consequences.</p><p>• Technical Robustness and Safety: developing AI systems necessitates a risk-preventive approach that ensures reliable behavior, minimizing and preventing unintentional and unexpected harm.</p><p>• Privacy and Data Governance: ensuring privacy protection requires robust data governance, encompassing both the quality and the integrity of the data used in processing.</p><p>• Transparency: the elements of an AI system must be transparent so that users can comprehend the reasons behind the decisions it takes.</p><p>• Diversity, Non-Discrimination and Fairness: involving all stakeholders throughout the entire system lifecycle ensures equal access through inclusive design processes and equitable treatment.</p><p>• Societal and Environmental Well-being: maximizing the sustainability, social impact, and ecological responsibility of AI systems lets them contribute positively to society while minimizing negative consequences.</p><p>• Accountability: creating mechanisms to ensure the accountability of AI systems, both before and after their development, deployment, and use, guarantees fairness <ref type="bibr" target="#b10">[11]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.2.">The Artificial Intelligence Act (AIA)</head><p>Starting from the requirements of Trustworthy AI listed in Section 2.1.1, in 2021 the EU defined the AIA to regulate the adoption of harmonised rules for AI systems. Specifically, it merges trustworthiness with a risk-based approach to determine the acceptability of different types of systems through norms and regulations <ref type="bibr" target="#b11">[12]</ref>. The risk-based approach outlines four categories of AI systems in relation to the risks they might cause:</p><p>1. Unacceptable Risk: it encompasses systems that might include prohibited AI practices that must be banned to guarantee a well-functioning society, such as those that might threaten minorities or those used by public authorities.</p><p>2. High Risk: it regards systems used in fields such as education and vocational training, access to private and public services, law enforcement, etc.</p><p>3. Limited Risk: it encompasses AI systems that must comply with specific transparency obligations because they interact with humans (e.g., biometric recognition systems and emotion recognition systems).</p><p>4. Low or Minimal Risk: it refers to systems that feature AI but do not require specific conformity checks <ref type="bibr" target="#b0">[1]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">The Academic Perspective</head><p>Ben Shneiderman, one of the pioneers of HCI, proposes trustworthiness as one of three principles, along with safety and reliability, of human-centered AI (HCAI) systems, which guarantee an appropriate balance of automation and human control. Specifically:</p><p>• Trustworthiness concerns the property that makes systems deserving of being trusted by humans.</p><p>• Reliability comes from the application of technical practices of software engineering that build systems producing appropriate and/or expected responses.</p><p>• Safety is a strategy to guide the refinement of model performance to prevent potential failures and improper use <ref type="bibr" target="#b12">[13]</ref>.</p><p>The three above-mentioned properties are the most recurrent in the literature, since they are the main areas of research and can encompass the other properties; nevertheless, the state of the art concerning human-AI interaction considers 22 other properties that can influence the design and development of any kind of system (e.g., usable, observable, explainable, resilient, agile) <ref type="bibr" target="#b12">[13]</ref>.</p><p>Our research investigates which of these principles are applicable to SAI and identifies potential new properties.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">The Impact of Trustworthiness in SAI</head><p>Our objective is to define a framework that encompasses both standpoints; in this regard, the authors are performing a Systematic Literature Review (SLR), following the Kitchenham protocol, to identify the guidelines and principles that can be drawn from the AIA and applied to the lifecycle of SAI systems <ref type="bibr" target="#b13">[14]</ref>. The SLR aims to determine how the research community is investigating and employing the AIA with respect to the design and development of AI. From the preliminary results, it emerged that trustworthiness is intrinsic to SAI because humans must fully trust systems in order to work symbiotically with them.</p><p>Belonging to the domain of AI built following a human-centered approach, SAI can include Trustworthiness, Safety, and Reliability as principles; however, the establishment of a symbiotic relationship might require their refinement or the definition of new ones. The ongoing SLR will also serve to establish these new principles and identify new guidelines suitable for the field of SAI.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Conceptual Framework for SAI Systems</head><p>The starting point is understanding the gaps in the traditional approach to the development of AI systems, so as to determine which changes to propose and which new processes to integrate into the software lifecycle. This conceptual framework aims to support designers and developers in creating and evaluating SAI systems. The objective is to provide a standardized methodology to those who create AI-powered services, reducing the gap between technology and humans and decreasing the cognitive demand required to interpret and understand the outputs that systems produce. This work therefore defines a framework that considers and merges the two perspectives (i.e., the Ethics Guidelines for Trustworthy AI and the AIA), while identifying principles, guidelines, and techniques that belong to different disciplines and finding the appropriate links among them. Figure <ref type="figure" target="#fig_1">2</ref> presents an initial version of the conceptual framework, which consists of two layers, Design and Assessment, explained below. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Design</head><p>This layer embraces four main research areas that contribute equally: Human-Computer Interaction (HCI), Law &amp; Ethics, Software Engineering (SE), and AI. The following sections describe each component of the framework, illustrating its role in the SAI scenario.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Human-Computer Interaction (HCI)</head><p>HCI is one of the pivotal components of this framework because the symbiotic relationship can be achieved only if such systems allow users to reach their goals with effectiveness, efficiency, and satisfaction, that is, by being usable and providing a positive user experience. Other key elements that HCI is responsible for are feedback and affordance, which enable humans to understand how the system should be used and make them feel at ease through proper communication <ref type="bibr" target="#b5">[6]</ref>. Involving humans iteratively during each phase of the system's lifecycle implies conducting interviews, questionnaires, field studies, and focus groups to carry out quantitative and qualitative evaluations of the systems and to obtain rich insights concerning the users' needs, preferences, and cognitive models <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Ethical &amp; Legal Factors</head><p>This dimension considers the regulatory, philosophical, and ethical standpoints, since designers and developers must create products that preserve users' social, working, and personal well-being. One of the main issues concerning AI, which becomes particularly relevant for the branch of SAI, consists of avoiding biases and ensuring fairness. This element must always be considered because the root of biases lies in how data is treated by AI models, for example, in the learning phase. Biases determine the unfair behavior of systems, which can influence humans' decisions when employing AI as an instrument. The legal standpoint must also be considered when designing and developing AI systems, to create products that comply with regulations and can be released to the public. Currently, the main elements to consider are the AIA and the General Data Protection Regulation (GDPR): the former regulates the design, development, and use of AI systems in the EU, while the latter defines how data is handled, stored, and processed <ref type="bibr" target="#b14">[15]</ref>.</p><p>These regulations define the ethical principles that any kind of system should possess to be made available to society.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Artificial Intelligence (AI)</head><p>This dimension refers to AI from a technical and algorithmic standpoint, because the framework aims to suggest the appropriate techniques and practices to adopt depending on the requirements of the systems to create. AI models, along with high computational power, can be employed in multiple domains, such as business, finance, healthcare, agriculture, smart cities, and cybersecurity; however, they cannot be used as a one-size-fits-all solution because, depending on the activities, different tasks are needed (e.g., classification, prediction, description), raising the need for context-specific models, parameters, and variables <ref type="bibr" target="#b15">[16]</ref>. The effectiveness of SAI systems is not guaranteed by simply obtaining high-performing models, but rather by systems that properly integrate Transparency, Explainability, and Interpretability. These properties provide users with the right instruments to comprehend the processes behind the outputs that influence their decisions and to identify the data responsible for the system's responses.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Software Engineering (SE)</head><p>This framework aims to guide designers and developers in creating SAI systems, ensuring that they operate by following a human-centered approach while complying with legal requirements and implementing high-performing AI models. Thus, the objective is to integrate the Agile principles and the processes of the Agile development lifecycle with those belonging to SAI design, creating a mapping that does not exclude any discipline <ref type="bibr" target="#b16">[17]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Assessment</head><p>In this new scenario, where a strict correlation and mutual influence exists between human and AI performance, it becomes essential to define novel metrics to assess the human-AI symbiotic relationship.</p><p>Traditionally, human beings and AI have been viewed as distinct and unrelated entities, so UX and AI metrics have been defined independently to evaluate human behavior and system performance, respectively. Considering them in unison, it is possible to draft a preliminary set of metrics that can be employed to assess the symbiosis. By integrating both dataset and user information and considering the user's characteristics from the training phase of the AI model onward, it is possible to foster symbiosis, making the system's behaviour as adaptable as possible to the user's needs.</p><p>Since Trustworthiness allows users to trust systems that operate safely and exhibit reliable behavior, it is considered one of the starting points of this research work <ref type="bibr" target="#b3">[4]</ref>. Assessing this aspect is difficult, since it varies across application contexts <ref type="bibr" target="#b3">[4]</ref>; therefore, it is necessary to understand whether its evaluation should consider it as a stand-alone property or as an ensemble of other dimensions, such as safety, fairness, and robustness<ref type="foot" target="#foot_0">1</ref>.</p><p>Two potential metrics are proposed to assess how trustworthy an AI system is: Preventing Undesired System Behaviors, which refers to how effectively the system avoids actions that could harm the user or deviate from expected behavior; and Correctness of Decisions, which measures the extent to which the system's decisions align with user expectations and desired outcomes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions</head><p>This paper presents preliminary considerations concerning the novel field of Symbiotic AI with respect to Trustworthiness. It presents the main challenges of identifying the principles of this field while stressing the need for a human-centered approach when dealing with AI systems of any kind. This research work is the groundwork for the definition of a comprehensive framework, presented in Section 3, that encompasses multiple disciplines and aims to guide designers and developers in creating SAI systems. The framework is still in its early, conceptual stage. Delineating a standardized approach to assess the behavior and performance of such systems is crucial to ensure the proper deployment of AI, which is part of the daily lives of countless individuals. As Trustworthiness plays a pivotal role in effective human-AI interaction, the future of this research will focus on determining its complementary principles and its impact on symbiosis by carrying out verticalized case studies and performing in-depth investigations of the literature.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The seven key requirements of Trustworthy AI: all are of equal importance and support each other <ref type="bibr" target="#b10">[11]</ref></figDesc><graphic coords="3,90.77,84.19,200.40,181.03" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Conceptual Comprehensive Framework for the design and the evaluation of Symbiotic AI</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The research of Miriana Calvano and Antonio Curci is supported by the co-funding of the European Union -Next Generation EU: NRRP Initiative, Mission 4, Component 2, Investment 1.3 -Partnerships extended to universities, research centers, companies, and research D.D. MUR n. 341 del 15.03.2022 -Next Generation EU (PE0000013 -"Future Artificial Intelligence Research -FAIR" -CUP: H97G22000210007).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">E</forename><surname>Commission</surname></persName>
		</author>
		<title level="m">Proposal for a regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Ai ethics principles in practice: Perspectives of designers and developers</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sanderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Douglas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Schleiger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Whittle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lacey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Newnham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hajkowicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Robinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hansen</surname></persName>
		</author>
		<idno type="DOI">10.1109/tts.2023.3257303</idno>
		<ptr target="https://doi.org/10.1109/TTS.2023.3257303" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Technology and Society</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="171" to="187" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A Survey of Methods for Explaining Black Box Models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Turini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3236009</idno>
		<ptr target="https://dl.acm.org/doi/10.1145/3236009" />
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page" from="1" to="42" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Designing the User Interface: Strategies for Effective Human-Computer Interaction</title>
		<author>
			<persName><forename type="first">B</forename><surname>Shneiderman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Plaisant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jacobs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Elmqvist</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Diakopoulos</surname></persName>
		</author>
		<ptr target="https://books.google.it/books?id=PpItDAAAQBAJ" />
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>Pearson Education</publisher>
		</imprint>
	</monogr>
	<note>6 ed</note>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">O</forename><surname>Standardization</surname></persName>
		</author>
		<ptr target="https://www.iso.org/standard/77520.html" />
		<title level="m">Ergonomics of human-system interaction</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page">210</biblScope>
		</imprint>
	</monogr>
	<note>Iso 9241:</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Sharp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Preece</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Rogers</surname></persName>
		</author>
		<title level="m">Interaction Design: Beyond Human-Computer Interaction</title>
				<imprint>
			<publisher>John Wiley &amp; Sons, Inc</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note>5th ed.</note>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<orgName type="institution">International Organization for Standardization</orgName>
		</author>
		<ptr target="https://www.iso.org/standard/77520.html" />
		<title level="m">Ergonomics of human-system interaction: Human-centred design for interactive systems</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note>ISO 9241-210:2019</note>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m">SWEBOK: guide to the software engineering body of knowledge</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Bourque</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><forename type="middle">E</forename><surname>Fairley</surname></persName>
		</editor>
		<meeting><address><addrLine>Los Alamitos, CA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note>Version 3.0</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Artificial intelligence for advanced human-machine symbiosis</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Grigsby</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Augmented Cognition: Intelligent Technologies</title>
				<editor>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Schmorrow</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Fidopiastis</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="255" to="266" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The risks associated with generative AI apps in the European Artificial Intelligence Act (AIA)</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vahabava</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI), CEUR Workshop Proceedings</title>
				<meeting><address><addrLine>Munich, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<orgName type="institution">European Commission</orgName>
		</author>
		<ptr target="https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html" />
		<title level="m">Ethics guidelines for trustworthy AI</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk</title>
		<author>
			<persName><forename type="first">J</forename><surname>Laux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wachter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mittelstadt</surname></persName>
		</author>
		<idno type="DOI">10.1111/rego.12512</idno>
		<ptr target="https://onlinelibrary.wiley.com/doi/10.1111/rego.12512" />
	</analytic>
	<monogr>
		<title level="j">Regulation &amp; Governance</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page" from="3" to="32" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Shneiderman</surname></persName>
		</author>
		<idno type="DOI">10.1093/oso/9780192845290.001.0001</idno>
		<ptr target="https://academic.oup.com/book/41126" />
		<title level="m">Human-Centered AI</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
		<respStmt>
			<orgName>Oxford University Press, Oxford</orgName>
		</respStmt>
	</monogr>
	<note>1st ed.</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Procedures for Performing Systematic Reviews</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kitchenham</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
		<respStmt>
			<orgName>Keele University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m">General Data Protection Regulation (GDPR): Regulation (EU) 2016/679</title>
		<imprint>
			<publisher>Gazzetta Ufficiale dell'Unione Europea</publisher>
			<date type="published" when="2016">2016. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Sarker</surname></persName>
		</author>
		<idno type="DOI">10.20944/preprints202202.0001.v1</idno>
		<title level="m">AI-based modeling: Techniques, applications and research issues towards automation, intelligent and smart systems</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A systematic literature review for agile development processes and user centred design integration</title>
		<author>
			<persName><forename type="first">D</forename><surname>Salah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">F</forename><surname>Paige</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cairns</surname></persName>
		</author>
		<idno type="DOI">10.1145/2601248.2601276</idno>
		<ptr target="https://dl.acm.org/doi/10.1145/2601248.2601276" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering</title>
				<meeting>the 18th International Conference on Evaluation and Assessment in Software Engineering<address><addrLine>London England United Kingdom</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1" to="10" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
