<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Acceptability of Symbiotic Artificial Intelligence: Highlights from the FAIR project</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Francesca</forename><forename type="middle">Alessandra</forename><surname>Lisi</surname></persName>
							<email>francesca.lisi@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">DiB Dept</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>via E. Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Antonio</forename><surname>Carnevale</surname></persName>
							<email>antonio.carnevale@uniba.it</email>
							<affiliation key="aff1">
								<orgName type="department">DIRIUM Dept</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>Piazza Umberto I</addrLine>
									<postCode>70121</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Abeer</forename><surname>Dyoub</surname></persName>
							<email>abeer.dyoub@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">DiB Dept</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>via E. Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Antonio</forename><surname>Lombardi</surname></persName>
							<email>antonio.lombardi@uniba.it</email>
							<affiliation key="aff1">
								<orgName type="department">DIRIUM Dept</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>Piazza Umberto I</addrLine>
									<postCode>70121</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Piero</forename><surname>Marra</surname></persName>
							<email>piero.marra@uniba.it</email>
							<affiliation key="aff2">
								<orgName type="department">LAW Dept</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>Piazza C. Battisti 1</addrLine>
									<postCode>70121</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lorenzo</forename><surname>Pulito</surname></persName>
							<email>lorenzo.pulito@uniba.it</email>
							<affiliation key="aff3">
								<orgName type="department">DJSGE Dept</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>Via Duomo 259</addrLine>
									<postCode>74123</postCode>
									<settlement>Taranto</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Acceptability of Symbiotic Artificial Intelligence: Highlights from the FAIR project</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">FFF298A70EB410C6B272302A31381AE5</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:55+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper we report the highlights of the research carried out at the University of Bari within the FAIR project concerning the acceptability of Symbiotic Artificial Intelligence.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The notion of symbiosis originated in the 19th century to indicate a relationship between two taxonomically separate life forms that nevertheless give rise to a single organism. Life forms in a symbiotic relationship are not isolated but coexist in ways that are more or less essential to their survival and development. The first to advocate a symbiosis between humans and machines was J. C. R. Licklider in 1960 <ref type="bibr" target="#b0">[1]</ref>. In his view, this kind of symbiosis would allow the computer to become an active part of the thinking process that leads to resolving technical problems, not just an executor of solutions thought up beforehand. Licklider was mainly thinking of human-computer interfaces that would allow greater real-time collaboration and shorten the distance between human and machine language. He was pointing to a road that has since been successfully travelled, bringing us to so-called Symbiotic Artificial Intelligence (SAI). Human-AI symbiosis promises to boost human-machine collaboration and socio-technical teaming, with mutually beneficial relationships, by augmenting (and valuing) human cognitive abilities rather than replacing them <ref type="bibr" target="#b1">[2]</ref>. In particular, socio-technical teaming refers to the collaborative partnership between humans and machines within a broader social and technological context, where the focus is not on a substantial peer-to-peer relationship but on integrating technology into human-centric processes and systems. In this context, symbiosis involves humans and machines working together as a cohesive unit, each playing a specific role and contributing to the team's overall performance. On the one hand, humans provide the cognitive and emotional capabilities necessary for creativity, empathy, ethical decision-making, and adaptability.
On the other hand, machines offer computational power, data processing, and automation capabilities that can handle repetitive and data-intensive tasks efficiently.</p><p>When applied to AI, the concept of symbiosis becomes more complex, posing a whole series of foundational questions. Addressing these questions is one of the goals of the research done by the University of Bari (together with INFN) within the project Future AI Research (FAIR). In particular, the acceptability of SAI is the subject of our investigation within a dedicated work package (WP 6.5) of FAIR. Acceptability involves value alignment between AI and humans. It relates, e.g., to the understandability of AI decisions, algorithmic bias, compliance with privacy policies for data collected by AI systems, the tension between the security ensured by AI systems and fundamental freedoms, and the mitigation of possible safety and health risks. In FAIR, studies on the acceptability of SAI adopt an interdisciplinary approach involving researchers in AI, Law, and Philosophy.</p><p>In this paper, we briefly report the main achievements of our research on the ethical and legal acceptability of SAI in the 1st year of the project (Sections 2-3) and outline the steps needed to go from general principles to operational definitions of ethical acceptability (Section 4). Section 5 concludes the paper with final remarks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Ethical acceptability of SAI</head><p>The philosophical approach to AI is contributing to the debate on the identification and analysis of the ethical implications of algorithms. We have continued the investigation aimed at proposing a methodological framework grounded in process-oriented evaluations to assess the human-centricity and acceptability of SAIs together with their societal benefit.</p><p>The research carried out concerned two different scientific lines:</p><p>Questioning the notion of "symbiosis" in SAI systems. The research focused mainly on the meaning of "symbiosis" and its applicability to AI <ref type="bibr" target="#b2">[3]</ref>. To this end, preliminary research has been carried out on the transformation of the concept of intelligence in the history of ideas <ref type="bibr" target="#b3">[4]</ref>. In several internal meetings, the notion of symbiosis was explored from both a biological and a phenomenological point of view, with reference to key recent AI-driven technological developments (AI and drones, AI and robotics, LLMs, ML, etc.).</p><p>Assessing the ethical impact of SAI in terms of acceptability and human-centricity. Defining the fundamental conceptual stages of a methodology for evaluating AI systems involves comparing and studying a series of international regulatory frameworks (inter alia, the AI HLEG Ethics Guidelines for Trustworthy AI). We have outlined a model with four fundamental steps: (a) onto-epistemic foundation of the method; (b) screening; (c) risk evaluation; (d) impact assessment.
Now, we need to work within each step to refine procedures and metrics further.</p><p>The efforts in this direction have led to a joint paper presented at the BEWARE workshop organized in Rome within the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023) <ref type="bibr" target="#b4">[5]</ref>, an article accepted for publication in the journal Intelligenza Artificiale <ref type="bibr" target="#b5">[6]</ref>, and several book chapters in the final stages of publication <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b7">[8]</ref>.</p></div>
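As a purely illustrative sketch, the four-step model above (onto-epistemic foundation, screening, risk evaluation, impact assessment) could take the shape of a small assessment pipeline. Every class name, criterion, and threshold below is our own assumption, not a procedure or metric actually defined in the project:

```python
# Hypothetical sketch of the four-step assessment model: (a) onto-epistemic
# foundation, (b) screening, (c) risk evaluation, (d) impact assessment.
# All names, criteria, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class SAIAssessment:
    system_name: str
    findings: dict = field(default_factory=dict)

    def foundation(self, declared_purpose: str):
        # (a) record the onto-epistemic framing under which the system is judged
        self.findings["foundation"] = declared_purpose
        return self

    def screening(self, involves_personal_data: bool, affects_rights: bool):
        # (b) quick triage: does the system warrant a full evaluation?
        self.findings["needs_full_review"] = involves_personal_data or affects_rights
        return self

    def risk_evaluation(self, risks: dict):
        # (c) score each identified risk (0 = none, 1 = severe) and keep the worst
        self.findings["max_risk"] = max(risks.values(), default=0.0)
        return self

    def impact_assessment(self, threshold: float = 0.7):
        # (d) overall verdict combining the screening and risk steps
        acceptable = (not self.findings["needs_full_review"]
                      or self.findings["max_risk"] < threshold)
        self.findings["acceptable"] = acceptable
        return self.findings
```

The chaining merely reflects that each step presupposes the previous one; refining the procedures and metrics inside each step is, as noted above, the open part of the work.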
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Legal acceptability of SAI</head><p>In line with the ethical and philosophical considerations on symbiosis, moving from the perspective of human-machine interaction to a procedural model of construction and assessment of SAI decisions, within a theory of legal methodology we have identified the first legal pragmatic conditions of algorithmic decision-making, such as significant human control, a notion borrowed from the international debate within the UN on autonomous weapons. In this way, symbiosis also translates into a techno-procedural legal principle capable of formalizing a human-centric value whereby persons do not remain behind technological development and society but are an integral part of the same evolutionary process and are responsible for it. We think that this approach is in keeping with the provisions, ex multis, of memorandum no. 38 of the Proposal for an EU Regulation on artificial intelligence. A procedural condition ensures the fairness and transparency of decision-making and allows recipients to understand and respect the decision itself. Indeed, in law, the content of a decision is not sufficient; its enforcement also matters. Thus, effectiveness remains a constitutive element of legality <ref type="bibr" target="#b8">[9]</ref>.</p><p>Furthermore, some legal issues raised by the interaction between humans and AI were addressed in specific areas of law (such as those that most require judgments of a predictive type, like the assessment of dangerousness aimed, for example, at determining a commensurate punishment and/or granting alternative measures).
It has thus been possible to observe and identify some essential conditions that should be taken into account in designing AI systems in this field, necessary to promote the symbiosis between humans and AI as well as to improve the trustworthiness, fairness and efficiency of the interaction (for example, enriching the methods of responding to crime in compliance with the fundamental principles of proportionality and the dignity of the person, and fulfilling the demand for individualization of punishment) <ref type="bibr" target="#b9">[10]</ref>.</p><p>Finally, we would like to mention that the European legal framework for AI gives minimal consideration to regulating AI-based technologies where there is a reciprocal relationship between human and machine (symbiosis). The research field of symbiotic AI is technologically challenging. In <ref type="bibr" target="#b10">[11]</ref>, we have undertaken a foundational study with the aim of conceptualizing and designing a comprehensive symbiotic approach to AI, with the goal of producing fair, legitimate, and effective outcomes while ensuring their ethical and legal acceptability. This theoretical research is expected to influence the development of Symbiotic AI systems and technological governance through model assessment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Towards Operational Definitions of Ethical Acceptability of SAI</head><p>The ethical implications of Human-AI symbiosis are multifaceted and complex. Thus, it has become increasingly important to take into consideration the ethical issues surrounding SAI development, deployment, and impact. The concept of 'SAI Ethics' offers a nuanced perspective that emphasizes the harmonious coexistence and collaboration between humans and AI systems. Operationalizing SAI Ethics involves translating abstract ethical principles and values into concrete guidelines and practices that govern every stage of the AI lifecycle, including data collection, algorithm design, model training, evaluation, and deployment <ref type="bibr" target="#b11">[12]</ref>. It requires a multidisciplinary approach, involving collaboration between computer scientists, ethicists, policymakers, and other stakeholders, to ensure alignment with societal values and human well-being, and to foster harmony and mutual benefit between humans and machines.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Operationalizing SAI Ethics</head><p>From a practical perspective, operationalizing SAI Ethics requires the establishment of governance frameworks, standards, and regulations to govern the responsible development, deployment, and use of AI technologies. This includes the development of ethical guidelines, codes of conduct, and best practices to guide AI practitioners and organizations in navigating ethical dilemmas and decision-making processes <ref type="bibr" target="#b12">[13]</ref>. These tools should be domain-specific. Moreover, fostering interdisciplinary collaboration and stakeholder engagement is essential to ensure that ethical considerations are adequately addressed and that AI technologies serve the broader societal interest.</p><p>One key aspect of operationalizing SAI Ethics is the development of robust frameworks and methodologies for ethical risk assessment and mitigation. This involves identifying potential ethical risks associated with AI systems, such as bias, discrimination, privacy violations, and unintended consequences, and implementing strategies to address these risks proactively <ref type="bibr" target="#b13">[14]</ref>. Thus, it is important to design algorithms and systems that are transparent, interpretable, and accountable, enabling stakeholders to understand how AI decisions are made and to detect and rectify ethical issues when they arise. Here we would like to highlight the role of logic programming for designing such models <ref type="bibr" target="#b14">[15]</ref>. Additionally, operationalizing SAI Ethics requires ongoing monitoring and evaluation of AI systems in real-world contexts to ensure that they continue to operate ethically and responsibly throughout their lifecycle. From a technical perspective, operationalization should focus on human-centricity through the development of AI systems that are transparent, interpretable, and accountable.
This entails implementing mechanisms for explainability and interpretability, allowing users to understand how AI algorithms make decisions and providing insights into their underlying processes. Techniques such as model interpretability, transparency tools, and algorithmic audits enable stakeholders to scrutinize AI systems and identify potential biases, errors, or unintended consequences. Additionally, ensuring the robustness and reliability of AI systems through rigorous testing, validation, and verification processes is essential to minimize the risk of harmful outcomes and instil confidence in their use.</p><p>Furthermore, operationalizing SAI Ethics necessitates the integration of ethical principles into the design and development of AI algorithms and models. This means translating ethical principles, values, and guidelines into actionable and measurable practices or procedures. We need to define specific rules, standards, or protocols that guide the behavior and decision-making in ethical dilemmas or concrete situations <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17]</ref>. Moreover, SAI Ethics emphasizes the importance of continuous learning and adaptation. As AI technologies evolve and their societal impact unfolds, ethical standards and norms must evolve in tandem <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b18">19]</ref>. This requires interdisciplinary research, ethical reflection, and stakeholder engagement to address emerging challenges and dilemmas.</p></div>
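As an illustration of the kind of algorithmic audit mentioned in this section, a minimal bias check could compare positive-decision rates across groups. The metric (demographic parity), the tolerance, and the data below are illustrative assumptions, not tools prescribed by the project:

```python
# Minimal sketch of an algorithmic audit: demographic parity difference,
# i.e. the gap in positive-decision rates between groups.
# Metric choice, tolerance, and data are purely illustrative.

def positive_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_gap(decisions_by_group):
    """Largest pairwise difference in positive-decision rates across groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

def audit(decisions_by_group, tolerance=0.1):
    """Flag the system for review if the parity gap exceeds the tolerance."""
    gap = demographic_parity_gap(decisions_by_group)
    return {"gap": gap, "flagged": gap > tolerance}
```

A real audit would of course use richer fairness metrics and domain-specific tolerances; the point is only that "scrutinizing AI systems for potential biases" can be made operational and measurable.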
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Building a Computational Model of SAI Ethics</head><p>Ethical principles are abstract rules intended to guide ethical decision making and judgement. There are a variety of techniques used for the technical implementation of ethical principles. In the previous literature on machine ethics, ethical principles are integrated into machines in top-down, bottom-up, or hybrid architectures (see <ref type="bibr" target="#b19">[20]</ref> for a survey). However, so far, no model seems to satisfy the ethical judgement and decision-making needs of an acceptable and responsible AI system. Approaches to encoding principles into a format that computers can understand include logical reasoning, probabilistic reasoning, learning, optimisation, and case-based reasoning <ref type="bibr" target="#b20">[21]</ref>.</p><p>We argue that it is impossible to build a 'general ethical AI', i.e., a machine that is generally ethical, a machine that can reason and take ethical decisions in any domain and in every context. We believe that we need to concentrate on building domain-based ethical machines, i.e., machines that are capable of ethical reasoning and decision making in any context and situation within a specific domain, which is, anyway, still a very challenging task. Considering the purpose and the specific domain for which the AI system is developed, developers should consider the domain's codes of ethics and conduct (domain ethics, e.g., medical ethics) as a guiding framework.
Furthermore, the key aspects of SAI, such as the collaborative and cooperative nature of the relationship between human and machine, the human-centric approach, the mutual benefit, the adaptability and responsiveness of SAI, and the interdisciplinary perspective, should be taken into consideration in the developers' design decisions.</p><p>To build a computational model of domain ethics to be integrated into the AI system, the ethical principles of the domain should be operationalized. The operationalization task should be carried out involving all stakeholders and domain ethics experts. Developers should also decide on the architecture to adopt for integrating the ethical principles. Being clear about which principle is being used will help designers to further specify what inputs are necessary for its application, which in turn will improve the ethical reasoning capabilities and the explainability of how decisions have been made <ref type="bibr" target="#b21">[22]</ref>.</p><p>However, defining principles intensionally, so that they may be applied in a deductive manner, is often challenging and, in many cases, appears to be an impossible task. The issue lies in the gap between abstract, open-textured principles and tangible, concrete facts. The abstract principles should be operationalized by linking them to the facts. When ethical experts justify their conclusions in particular cases, they frequently connect ethical principles directly to the specific facts of those cases. Essentially, these established connections between ethical principles and relevant facts serve as operational (concrete) definitions of the principles.
The experts operationalize the abstract principles by tying them directly to the factual context.</p><p>We are going to investigate, computationally, the possibility of operationalizing abstract ethical principles by inducing practical rules for ethical judgement and decision making in SAI systems from real-life interactions between human and machine in different domains <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b22">23]</ref>. These rules evolve over time through the interaction between human and machine, which is an important aspect of SAI ethics. SAI recognizes the dynamic nature of human-AI interactions and the need for AI systems to adapt and respond to human preferences, values, and feedback over time. To achieve this, we are going to consider different domains as case studies, collect and analyze a large set of domain ethics cases, and build a computational model employing different operationalization techniques. Then, we plan to carry out experiments to test our hypothesis that the computational model will accurately classify actions as ethical or unethical. The model will be developed using a foundational set of cases that will be collected for this purpose. System performance will be evaluated using quantitative measures like precision and recall.</p><p>An important aspect, mentioned above, is the model's adaptability over time. In the context of SAI systems, human and machine (as agents) work as a team, collaborate, learn from each other, and evolve together. The machine (as well as the human) will learn concrete ethical rules from interaction with humans; the machine will apply the previously learned ethical rules to concrete cases, and will also revise and update the previously learned rules if needed. Here, it is important to emphasize the collaborative aspect of SAI in revising and correcting ethical behavior over time by both the human and the machine.
This task is, in reality, a collaborative one: the machine will extract the case facts (the facts of the real-life case at hand) and present them to the human, and the human will provide an ethical judgment of the case. Then the machine will learn a new rule and/or revise a previously learned rule and present it to the human. Through a collaborative dialogue, the human can correct the ethical behavior of the machine, but the machine can also automatically demonstrate to the human their errors in reasoning. In this way both will learn and improve their reasoning capabilities (mutual benefit). This adaptability aspect will be tested and evaluated in our experiments.</p></div>
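The collaborative loop described in this section (the machine extracts case facts, the human supplies an ethical judgement, the machine learns or revises a rule) and the proposed precision/recall evaluation might be sketched as follows. The rule representation, the fact names, and the cautious default judgement are purely our illustrative assumptions:

```python
# Hypothetical sketch of the human-machine loop: learn simple fact->judgement
# rules from human-labelled cases, apply them to new cases, and score the
# learned model with precision and recall. Facts and rules are illustrative.

def learn_rules(labelled_cases):
    """Map each observed set of case facts to the human's judgement.
    A later label for the same facts revises (overwrites) the earlier rule."""
    rules = {}
    for facts, judgement in labelled_cases:  # judgement: "ethical"/"unethical"
        rules[frozenset(facts)] = judgement
    return rules

def judge(rules, facts, default="unethical"):
    """Apply a learned rule to a new case; fall back to a cautious default."""
    return rules.get(frozenset(facts), default)

def precision_recall(predicted, actual, positive="unethical"):
    """Standard precision/recall, treating 'unethical' as the positive class."""
    tp = sum(p == a == positive for p, a in zip(predicted, actual))
    fp = sum(p == positive != a for p, a in zip(predicted, actual))
    fn = sum(a == positive != p for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

The overwrite in `learn_rules` stands in for the revision step of the dialogue; an actual implementation would induce generalizing rules (e.g. via inductive logic programming) rather than memorize individual cases.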
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and Future Work</head><p>In this work, we reported on ongoing work in Work Package 6.5 of the FAIR project. A model of the ethical acceptability of SAI was outlined. Many legal issues raised by SAI systems were addressed. Currently, we are concentrating on the operationalization of SAI ethics. Next, we will work on the operationalization of legal aspects in SAI by developing a framework for embedding considerations of legal issues in SAI, and then on realizing a computational model of legal reasoning to be ultimately integrated into the SAI system together with the ethical model.</p><p>By operationalizing SAI Ethics and legal issues, we can foster a collaborative and mutually beneficial relationship between humans and AI systems, promoting responsible and trustworthy AI development for the benefit of society. This requires a multifaceted approach that integrates technical, organizational, regulatory, and societal perspectives.</p><p>A socio-technical approach to SAI systems development will be adopted, which leads to increased acceptability of these systems <ref type="bibr" target="#b23">[24]</ref>. To capture the socio-technical complexity, we are planning to adopt Multi-Agent Systems (MAS) for modelling the SAI system at hand <ref type="bibr" target="#b24">[25]</ref>. The ethical and legal components of the system will be implemented as a MAS, which will act as an ethical and legal over-layer in the overall decision-making process. A starting point might be the MAS prototype presented in <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b26">27]</ref> for the ethical evaluation and monitoring of dialogue systems.</p><p>Finally, since a human-centric approach is central to SAI, transparency and explainability are key requirements for establishing trust in SAI systems, which in turn leads to acceptability.
We would like to emphasize the prominent role of computational logic in the development of the computational model of the ethical and legal acceptability of SAI. Logic Programming (LP) has great potential for developing such prospective ethical and legal SAI systems, since logic rules are easily comprehensible by humans. Furthermore, LP is able to model causality, which is crucial for ethical and legal decision-making <ref type="bibr" target="#b14">[15]</ref>.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was partially supported by the project FAIR -Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Man-computer symbiosis</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C R</forename><surname>Licklider</surname></persName>
		</author>
		<idno type="DOI">10.1109/THFE2.1960.4503259</idno>
	</analytic>
	<monogr>
		<title level="j">IRE Transactions on Human Factors in Electronics</title>
		<imprint>
			<biblScope unit="volume">HFE-1</biblScope>
			<biblScope unit="page" from="4" to="11" />
			<date type="published" when="1960">1960</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Artificial intelligence for advanced human-machine symbiosis</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Grigsby</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-91470-1_22</idno>
	</analytic>
	<monogr>
		<title level="m">Augmented Cognition: Intelligent Technologies</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">D</forename><surname>Schmorrow</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Fidopiastis</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">10915</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Condizione e struttura del nostro rapporto con le macchine. dieci proposizioni per una filosofia critica dell&apos;intelligenza artificiale antropocentrica</title>
		<author>
			<persName><forename type="first">A</forename><surname>Carnevale</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">L&apos;uomo animale tecnologico</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Barone</surname></persName>
		</editor>
		<meeting><address><addrLine>Caltanissetta-Rome</addrLine></address></meeting>
		<imprint>
			<publisher>Sciascia Editore</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>Invited chapter, accepted; in publication</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">L&apos;origine dell&apos;io. il &quot;mistero&quot; dell&apos;intelligenza da Darwin al riduzionismo contemporaneo</title>
		<author>
			<persName><forename type="first">A</forename><surname>Lombardi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Studium/Ricerca</title>
		<imprint>
			<biblScope unit="volume">119</biblScope>
			<biblScope unit="page" from="651" to="688" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Exploring ethical and conceptual foundations of human-centred symbiosis with artificial intelligence</title>
		<author>
			<persName><forename type="first">A</forename><surname>Carnevale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lombardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-3615/paper3.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Boella</surname></persName>
		</editor>
		<meeting>the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3615</biblScope>
			<biblScope unit="page" from="30" to="43" />
		</imprint>
	</monogr>
	<note>CEUR Workshop Proceedings</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A humancentred approach to symbiotic AI: Questioning the ethical and conceptual foundation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Carnevale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lombardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligenza Artificiale</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>Invited paper; in publication</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Assessing the impacts of symbiotic AI (SAI) on individual and societal well-being</title>
		<author>
			<persName><forename type="first">A</forename><surname>Carnevale</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AI Impact Assessment: methods and practices</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Webb</surname></persName>
		</editor>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>Invited chapter, accepted; in press</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Beyond one-size-fits-all: Precision medicine and novel technologies for sex and gender-inclusive covid-19 pandemic management</title>
		<author>
			<persName><forename type="first">C</forename><surname>Falchi Delgado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ferretti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Carnevale</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Innovating Health against Future Pandemics</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Cirillo</surname></persName>
		</editor>
		<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>Invited chapter, accepted; in press</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Effectiveness as Threat to Constitutional Systems</title>
		<author>
			<persName><forename type="first">P</forename><surname>Marra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Galatola</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-31739-7_142-1</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="1" to="19" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Pulito</surname></persName>
		</author>
		<title level="m">Algoritmi predittivi e valutazione della pericolosità</title>
		<title level="j">L&apos;Ircocervo</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>Invited essay, submitted</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A procedural idea of decision-making in the context of symbiotic AI</title>
		<author>
			<persName><forename type="first">P</forename><surname>Marra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Pulito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Carnevale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lisi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lombardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<ptr target="https://synergy.trx.li/ceur-ws/paper9.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st International Workshop on Designing and Building Hybrid Human-AI Systems, co-located with 17th International Conference on Advanced Visual Interfaces (AVI 2024)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<meeting>the 1st International Workshop on Designing and Building Hybrid Human-AI Systems, co-located with 17th International Conference on Advanced Visual Interfaces (AVI 2024)<address><addrLine>Arenzano (Genoa), Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024-06-03">June 3, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Operationalising AI ethics: barriers, enablers and next steps</title>
		<author>
			<persName><forename type="first">J</forename><surname>Morley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kinsey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Elhalal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ziosi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<idno type="DOI">10.1007/S00146-021-01308-8</idno>
		<ptr target="https://doi.org/10.1007/s00146-021-01308-8" />
	</analytic>
	<monogr>
		<title level="j">AI Soc</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="page" from="411" to="423" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Operationalising AI governance through ethics-based auditing: an industry case study</title>
		<author>
			<persName><forename type="first">J</forename><surname>Mökander</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<idno type="DOI">10.1007/S43681-022-00171-7</idno>
		<ptr target="https://doi.org/10.1007/s43681-022-00171-7" />
	</analytic>
	<monogr>
		<title level="j">AI Ethics</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="451" to="468" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">AI risk assessment: A scenario-based, proportional methodology for the AI act</title>
		<author>
			<persName><forename type="first">C</forename><surname>Novelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Casolari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rotolo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Taddeo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<idno type="DOI">10.1007/S44206-024-00095-1</idno>
		<ptr target="https://doi.org/10.1007/s44206-024-00095-1" />
	</analytic>
	<monogr>
		<title level="j">Digit. Soc</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page">13</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Logic programming and machine ethics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<idno type="DOI">10.4204/EPTCS.325.6</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings 36th International Conference on Logic Programming (Technical Communications), ICLP Technical Communications 2020</title>
				<meeting>36th International Conference on Logic Programming (Technical Communications), ICLP Technical Communications 2020<address><addrLine>Rende (CS), Italy</addrLine></address></meeting>
		<imprint>
			<publisher>EPTCS</publisher>
			<date type="published" when="2020-09">September 2020</date>
			<biblScope unit="volume">325</biblScope>
			<biblScope unit="page" from="6" to="17" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Learning answer set programming rules for ethical machines</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-2396/paper14.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 34th Italian Conference on Computational Logic</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">A</forename><surname>Casagrande</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><forename type="middle">G</forename><surname>Omodeo</surname></persName>
		</editor>
		<meeting>the 34th Italian Conference on Computational Logic<address><addrLine>Trieste, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">June 19-21, 2019</date>
			<biblScope unit="volume">2396</biblScope>
			<biblScope unit="page" from="300" to="315" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Towards an ILP application in machine ethics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-49210-6</idno>
	</analytic>
	<monogr>
		<title level="m">Inductive Logic Programming -29th International Conference, ILP 2019</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting><address><addrLine>Plovdiv, Bulgaria</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">September 3-5, 2019</date>
			<biblScope unit="volume">11770</biblScope>
			<biblScope unit="page" from="26" to="35" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Care robots learning rules of ethical behavior under the supervision of an ethical teacher (short paper)</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Letteri</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-3281/paper1.pdf" />
	</analytic>
	<monogr>
		<title level="m">Joint Proceedings of the 1st International Workshop on HYbrid Models for Coupling Deductive and Inductive ReAsoning (HY-DRA 2022) and the 29th RCRA Workshop on Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion (RCRA 2022) co-located with the 16th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR 2022)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">P</forename><surname>Bruno</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Calimeri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Cauteruccio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Maratea</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Terracina</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Vallati</surname></persName>
		</editor>
		<meeting><address><addrLine>Genova Nervi, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-09-05">September 5, 2022</date>
			<biblScope unit="volume">3281</biblScope>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Learning domain ethical principles from interactions with users</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<idno type="DOI">10.1007/s44206-022-00026-y</idno>
	</analytic>
	<monogr>
		<title level="j">Digital Society</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">28</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Implementations in machine ethics: A survey</title>
		<author>
			<persName><forename type="first">S</forename><surname>Tolmeijer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kneer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sarasua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Christen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bernstein</surname></persName>
		</author>
		<idno type="DOI">10.1145/3419633</idno>
		<ptr target="https://doi.org/10.1145/3419633" />
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="page" from="1" to="38" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Norvig</surname></persName>
		</author>
		<title level="m">Artificial Intelligence: A Modern Approach</title>
				<imprint>
			<publisher>Pearson Education Limited</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Normative principles for evaluating fairness in machine learning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Leben</surname></persName>
		</author>
		<idno type="DOI">10.1145/3375627.3375808</idno>
		<ptr target="https://doi.org/10.1145/3375627.3375808" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES &apos;20</title>
				<meeting>the AAAI/ACM Conference on AI, Ethics, and Society, AIES &apos;20<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="86" to="92" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Logic-based machine learning for transparent ethical agents</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Letteri</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-2710/paper11.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 35th Italian Conference on Computational Logic -CILC 2020</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">F</forename><surname>Calimeri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Perri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Zumpano</surname></persName>
		</editor>
		<meeting>the 35th Italian Conference on Computational Logic -CILC 2020<address><addrLine>Rende, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">October 13-15, 2020</date>
			<biblScope unit="volume">2710</biblScope>
			<biblScope unit="page" from="169" to="183" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Socio-technical systems: From design methods to systems engineering</title>
		<author>
			<persName><forename type="first">G</forename><surname>Baxter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sommerville</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.intcom.2010.07.003</idno>
	</analytic>
	<monogr>
		<title level="j">Interacting with Computers</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="4" to="17" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Agent-Based Modelling of Socio-Technical Systems</title>
		<author>
			<persName><forename type="first">K</forename><surname>Van Dam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Nikolic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lukszo</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-94-007-4933-7</idno>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">A logic-based multi-agent system for ethical monitoring and evaluation of dialogues</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Letteri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<idno type="DOI">10.4204/EPTCS.345.32</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings 37th International Conference on Logic Programming (Technical Communications), ICLP Technical Communications 2021, Porto (virtual event)</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Formisano</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><forename type="middle">A</forename><surname>Liu</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Bogaerts</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Brik</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Dahl</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Dodaro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Fodor</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><forename type="middle">L</forename><surname>Pozzato</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Vennekens</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Zhou</surname></persName>
		</editor>
		<meeting>37th International Conference on Logic Programming (Technical Communications), ICLP Technical Communications 2021, Porto (virtual event)</meeting>
		<imprint>
			<publisher>EPTCS</publisher>
			<date type="published" when="2021-09">September 2021</date>
			<biblScope unit="volume">345</biblScope>
			<biblScope unit="page" from="182" to="188" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Demo paper: Monitoring and evaluation of ethical behavior in dialog systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dyoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">De</forename><surname>Gasperis</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-49778-1_35</idno>
	</analytic>
	<monogr>
		<title level="m">Advances in Practical Applications of Agents, Multi-Agent Systems, and Trustworthiness. The PAAMS Collection -18th International Conference, PAAMS 2020</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">Y</forename><surname>Demazeau</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Holvoet</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Corchado</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Costantini</surname></persName>
		</editor>
		<meeting><address><addrLine>L&apos;Aquila, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">October 7-9, 2020</date>
			<biblScope unit="volume">12092</biblScope>
			<biblScope unit="page" from="403" to="407" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
