<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.4204/EPTCS</article-id>
      <title-group>
        <article-title>Acceptability of Symbiotic Artificial Intelligence: Highlights from the FAIR project</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Francesca Alessandra Lisi</string-name>
          <email>francesca.lisi@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Carnevale</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abeer Dyoub</string-name>
          <email>abeer.dyoub@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Lombardi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Piero Marra</string-name>
          <email>piero.marra@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Pulito</string-name>
          <email>lorenzo.pulito@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Symbiotic AI, AI Ethics, Trustworthy AI, Philosophical foundations of AI</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Bari Aldo Moro, DIRIUM Dept.</institution>
          ,
          <addr-line>Piazza Umberto I, Bari, 70121</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Bari Aldo Moro, DJSGE Dept.</institution>
          ,
          <addr-line>Via Duomo 259, Taranto, 74123</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Bari Aldo Moro, DiB Dept.</institution>
          ,
          <addr-line>via E. Orabona 4, Bari, 70125</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Bari Aldo Moro, LAW Dept.</institution>
          ,
          <addr-line>Piazza C. Battisti 1, Bari, 70121</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>Workshop Proceedings</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>345</volume>
      <fpage>29</fpage>
      <lpage>30</lpage>
      <abstract>
        <p>In this work we report the highlights of the work done at the University of Bari within the FAIR project and concerning the acceptability of Symbiotic Artificial Intelligence (SAI).</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>The notion of symbiosis originated in the 19th century to indicate a relationship between two taxonomically separate life forms that nevertheless give rise to a single organism. Life forms in a symbiotic relationship are not isolated but coexist in ways that are more or less essential to their survival and development. The first to advocate a symbiosis between humans and machines was J. C. R. Licklider in 1960 [1]. In his view, this kind of symbiosis would allow the computer to become an active part of the thinking process that leads to resolving technical problems, and not just an executor of solutions thought up beforehand. Licklider was mainly thinking of human-computer interfaces that would allow greater real-time collaboration and shorten the distance between human and machine language. He was pointing to a road that has since been successfully travelled, bringing us to the so-called Symbiotic Artificial Intelligence (SAI).</p>
      <p>Human-AI symbiosis promises to boost human-machine collaboration and socio-technical teaming, with mutually beneficial relationships, by augmenting (and valuing) human cognitive abilities rather than replacing them [2]. In particular, socio-technical teaming refers to the collaborative partnership between humans and machines within a broader social and technological context, where the focus is not on a substantial peer-to-peer relationship but on integrating technology into human-centric processes and systems. In this context, symbiosis involves humans and machines working together as a cohesive unit, each playing a specific role and contributing to the team’s overall performance. On one hand, humans provide the cognitive and emotional capabilities necessary for creativity, empathy, ethical decision-making, and adaptability. On the other hand, machines offer computational power, data processing, and automation capabilities that can handle repetitive and data-intensive tasks efficiently.</p>
      <p>When applied to AI, the concept of symbiosis becomes more complex, posing a whole series of foundational questions. Addressing these questions is one of the goals of the research done by the University of Bari (together with INFN) within the project Future AI Research (FAIR). In particular, the acceptability of SAI is the subject of research for our investigation within a dedicated work package (WP 6.5) of FAIR. Acceptability involves value alignment between AI and humans. It is related, e.g., to understanding AI decisions, algorithmic bias, the respect of privacy policies for data collected by AI systems, the struggle between security ensured by AI systems and fundamental freedoms, and the mitigation of possible safety and health risks. In FAIR, studies on the acceptability of SAI adopt an interdisciplinary approach involving researchers in AI, Law, and Philosophy.</p>
      <p>In this paper, we briefly report the main achievements of our research on the ethical and legal acceptability of SAI in the 1st year of the project (Sections 2-3) and outline the steps needed to go from general principles to operational definitions of ethical acceptability (Section 4). Section 5 concludes the paper with final remarks.</p>
      <p>© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
    </sec>
    <sec id="sec-3">
      <title>2. Ethical acceptability of SAI</title>
      <p>The philosophical approach to AI is contributing to the debate on the identification and analysis of the ethical implications of algorithms. We have continued the investigation aiming at building the proposal of a methodological framework grounded in process-oriented evaluations to assess the human-centricity and acceptability of SAIs together with their societal benefit.</p>
      <p>The research carried out concerned two different scientific lines.</p>
      <p>Questioning the notion of “symbiosis” in SAI systems. The research focused mainly on the meaning of “symbiosis” and its applicability to AI [3]. To this end, preliminary research has been carried out on the transformation of the concept of intelligence in the history of ideas [4]. In several internal meetings, the notion of symbiosis was explored both from a biological and a phenomenological point of view, with reference to key recent AI-driven technological developments (AI and drones, AI and robotics, LLMs, ML, etc.).</p>
      <p>Assessing the ethical impact of SAI in terms of acceptability and human-centricity. Defining the fundamental conceptual stages of a methodology for evaluating AI systems involves comparing and studying a series of international regulatory frameworks, inter alia the AI HLEG Ethics Guidelines for Trustworthy AI (2018-19). We have outlined a model with different fundamental steps: (a) onto-epistemic foundation of the method; (b) screening; (c) risk evaluation; (d) impact assessment. Now, we need to work within each step to refine procedures and metrics further.</p>
      <p>The efforts in this direction have led to a joint paper presented at the BEWARE workshop organized in Rome within the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023) [5], an article accepted for publication in the journal Intelligenza Artificiale [6], and different book chapters in the final stages of publication [7, 8].</p>
    </sec>
    <sec id="sec-7">
      <title>3. Legal acceptability of SAI</title>
      <p>In line with the ethical and philosophical considerations on symbiosis, moving from the perspective of human-machine interaction to a procedural model of construction and assessment of SAI decisions, within a legal methodology theory we have identified the first legal pragmatic conditions of algorithmic decision-making, such as that of significant human control, a notion borrowed from the international debate within the UN on autonomous weapons. In this way, symbiosis also translates into a techno-procedural legal principle capable of formalizing a human-centric value where persons do not remain behind technological development and society but are an integral part of the same evolutionary process and are responsible for it. We think that this approach is in keeping with the provisions, ex multis, of memorandum no. 38 of the Proposal for an EU Regulation on artificial intelligence. A procedural condition ensures the fairness and transparency of decision-making and allows recipients to understand and respect the decision itself. Indeed, in law, the content of the decision is not sufficient; its enforcement matters as well. Thus, effectiveness remains a constitutive element of legality [9].</p>
      <p>Furthermore, some legal issues raised by the interaction between humans and AI were addressed in some areas of law (such as those that most require judgments of a predictive type, like the assessment of dangerousness aimed, for example, at commensurating punishment and/or granting alternative measures). It has thus been possible to observe and identify some essential conditions that should be taken into account in designing AI systems in this field, necessary to promote the symbiosis between humans and AI as well as to improve the trustworthiness, fairness and efficiency of the interaction (for example, enriching the methods of responding to crime in compliance with the fundamental principles of proportionality and dignity of the person, realizing the requests for individualization of the punishment) [10].</p>
      <p>Finally, we would like to mention that the European legal framework for AI gives minimal consideration to regulating AI-based technologies where there is a reciprocal relationship between human and machine (symbiosis). The research field of symbiotic AI is technologically challenging. In [11], we have undertaken a foundational study with the aim of conceptualizing and designing a comprehensive symbiotic approach to AI, with the goal of producing fair, legitimate, and effective outcomes while ensuring their ethical and legal acceptability. This theoretical research is expected to influence the development of Symbiotic AI systems and technological governance through model assessment.</p>
    </sec>
    <sec id="sec-8">
      <title>4. Towards Operational Definitions of Ethical Acceptability of SAI</title>
      <p>The ethical implications of Human-AI symbiosis are multifaceted and complex. Thus, it has become increasingly paramount to take into consideration the ethical issues surrounding SAI development, deployment, and impact. The concept of ‘SAI Ethics’ offers a nuanced perspective that emphasizes the harmonious coexistence and collaboration between humans and AI systems. Operationalizing SAI Ethics involves translating abstract ethical principles and values into concrete guidelines and practices that govern every stage of the AI lifecycle, including data collection, algorithm design, model training, evaluation, and deployment [12]. It requires a multidisciplinary approach, involving collaboration between computer scientists, ethicists, policymakers, and other stakeholders to ensure alignment with societal values and human well-being, and to foster harmony and mutual benefit between humans and machines.</p>
    </sec>
    <sec id="sec-9">
      <title>4.1. Operationalizing SAI Ethics</title>
      <p>From a practical perspective, operationalizing SAI Ethics requires the establishment of governance frameworks, standards, and regulations to govern the responsible development, deployment, and use of AI technologies. This includes the development of ethical guidelines, codes of conduct, and best practices to guide AI practitioners and organizations in navigating ethical dilemmas and decision-making processes [13]. These tools should be domain-specific. Moreover, fostering interdisciplinary collaboration and stakeholder engagement is essential to ensure that ethical considerations are adequately addressed and that AI technologies serve the broader societal interest.</p>
      <p>One key aspect of operationalizing SAI Ethics is the development of robust frameworks and methodologies for ethical risk assessment and mitigation. This involves identifying potential ethical risks associated with AI systems, such as bias, discrimination, privacy violations, and unintended consequences, and implementing strategies to address these risks proactively [14]. Thus, it is important to design algorithms and systems that are transparent, interpretable, and accountable, enabling stakeholders to understand how AI decisions are made and to detect and rectify ethical issues when they arise. Here we would like to highlight the role of logic programming for designing such models [15]. Additionally, operationalizing SAI Ethics requires ongoing monitoring and evaluation of AI systems in real-world contexts to ensure that they continue to operate ethically and responsibly throughout their lifecycle. From a technical perspective, operationalization should focus on human-centricity through the development of AI systems that are transparent, interpretable, and accountable. This entails implementing mechanisms for explainability and interpretability, allowing users to understand how AI algorithms make decisions and providing insights into their underlying processes. Techniques such as model interpretability, transparency tools, and algorithmic audits enable stakeholders to scrutinize AI systems and identify potential biases, errors, or unintended consequences. Additionally, ensuring the robustness and reliability of AI systems through rigorous testing, validation, and verification processes is essential to minimize the risk of harmful outcomes and instil confidence in their use.</p>
      <p>Furthermore, operationalizing SAI Ethics necessitates the integration of ethical principles into the design and development of AI algorithms and models. This means translating ethical principles, values, and guidelines into actionable and measurable practices or procedures. We need to define specific rules, standards, or protocols that guide behavior and decision-making in ethical dilemmas or concrete situations [16, 17]. Moreover, SAI Ethics emphasizes the importance of continuous learning and adaptation. As AI technologies evolve and their societal impact unfolds, ethical standards and norms must evolve in tandem [18, 19]. This requires interdisciplinary research, ethical reflection, and stakeholder engagement to address emerging challenges and dilemmas.</p>
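      <p>As a purely illustrative sketch of what such concrete, rule-level guidance could look like, one might encode an operationalized principle as a condition over case facts; the principle, the fact names, and the verdicts below are invented examples, not part of the project's actual model.</p>

```python
# Illustrative sketch: an operationalized ethical principle expressed as a
# concrete rule over case facts. All names here are hypothetical examples.

def confidentiality_rule(facts):
    """Operational definition of a (hypothetical) confidentiality principle:
    sharing personal data without consent is judged unethical."""
    if facts.get("shares_personal_data") and not facts.get("has_consent"):
        return "unethical"
    return None  # the rule does not apply to this case

def judge(facts, rules):
    """Apply the domain rules in order; default to 'ethical' if none fires."""
    for rule in rules:
        verdict = rule(facts)
        if verdict is not None:
            return verdict
    return "ethical"

case = {"shares_personal_data": True, "has_consent": False}
print(judge(case, [confidentiality_rule]))  # unethical
```

      <p>In a real system, such rules would be stated in a logic programming language rather than plain Python, but the structure is the same: an abstract principle becomes a checkable condition over the facts of a concrete case.</p>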
    </sec>
    <sec id="sec-10">
      <title>4.2. Building a Computational Model of SAI Ethics</title>
      <p>Ethical principles are abstract rules intended for guiding ethical decision making and judgement. There is a variety of techniques used for the technical implementation of ethical principles. In the previous literature on machine ethics, ethical principles are integrated into machines in top-down, bottom-up, or hybrid architectures (see [20] for a survey). However, so far, no model seems to satisfy the ethical judgement and decision making needs of an acceptable and responsible AI system. Approaches to encode principles into a format that computers can understand include logical reasoning, probabilistic reasoning, learning, optimisation, and case-based reasoning [21].</p>
      <p>We argue that it is impossible to build a ‘general ethical AI’, i.e., a machine that is generally ethical: a machine that can reason and take ethical decisions in any domain and in every context. We believe that we need to concentrate on building domain-based ethical machines, i.e., machines that are capable of ethical reasoning and decision making in any context and situation within a specific domain, which is, anyway, still a very challenging task. Considering the purpose and the specific domain for which the AI system is developed, developers should consider the codes of ethics and conduct of the domain (domain ethics, e.g. medical ethics) as a guiding framework. Furthermore, the key aspects of SAI, such as the collaborative and cooperative nature between human and machine, the human-centric approach, the mutual benefit, the adaptability and responsiveness of SAI, and the interdisciplinary perspective, should be taken into consideration in the design decisions to be taken by the developers.</p>
      <p>To build a computational model of domain ethics to be integrated into the AI system, the ethical principles of the domain should be operationalized. The operationalization task should be carried out involving all stakeholders and domain ethical experts. Developers should also decide on the architecture to adopt for integrating the ethical principles. Being clear about which principle is being used will help designers to further specify what inputs are necessary for its application, which in turn will improve the ethical reasoning capabilities and the explainability of how decisions have been made [22].</p>
      <p>However, defining principles in an intentional manner, so that they may be applied in a deductive manner, is often challenging and, in many cases, appears to be an impossible task. The issue lies in the gap between abstract, open-textured principles and tangible, concrete facts. The abstract principles should be operationalized by linking them to the facts. When ethical experts justify their conclusions in particular cases, they frequently connect ethical principles directly to the specific facts of those cases. Essentially, these established connections between ethical principles and relevant facts serve as operational (concrete) definitions of the principles. The experts operationalize the abstract principles by tying them directly to the factual context.</p>
      <p>We are going to investigate, computationally, the possibility of operationalizing abstract ethical principles by inducing practical rules for ethical judgement and decision making in SAI systems from real-life interactions between human and machine in different domains [19, 23]. These rules evolve over time through the interaction between human and machine, which is an important aspect of SAI ethics. SAI recognizes the dynamic nature of human-AI interactions and the need for AI systems to adapt and respond to human preferences, values, and feedback over time. To achieve this, we are going to consider different domains as case studies, collect and analyze a large set of domain ethics cases, and build a computational model employing different operationalization techniques. Then, we are planning to carry out experiments to test our hypothesis that the computational model will accurately classify actions as ethical or unethical. The model will be developed using a foundational set of cases that will be collected for this purpose. The system performance will be evaluated using quantitative measures like precision and recall.</p>
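      <p>The planned precision/recall evaluation can be sketched as follows; the induced rule, the fact names, and the labeled cases are toy stand-ins for the real domain-ethics cases still to be collected.</p>

```python
# Minimal sketch of the envisaged evaluation: a rule-based classifier labels
# cases as ethical/unethical and is scored against human judgments.
# The "induced" rule and the cases are invented for illustration.

def classify(case):
    # Hypothetical induced rule: deception without a protective purpose
    # is classified as unethical.
    if case["deceives"] and not case["protects_user"]:
        return "unethical"
    return "ethical"

labeled_cases = [
    ({"deceives": True,  "protects_user": False}, "unethical"),
    ({"deceives": True,  "protects_user": True},  "ethical"),
    ({"deceives": False, "protects_user": False}, "ethical"),
    ({"deceives": True,  "protects_user": False}, "unethical"),
]

tp = fp = fn = 0
for case, truth in labeled_cases:
    pred = classify(case)
    if pred == "unethical" and truth == "unethical":
        tp += 1          # correctly flagged
    elif pred == "unethical" and truth == "ethical":
        fp += 1          # over-restrictive rule
    elif pred == "ethical" and truth == "unethical":
        fn += 1          # missed violation

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=1.00 recall=1.00
```

      <p>On this toy set the rule is perfect; on real cases, precision measures how often a flagged action is truly unethical, while recall measures how many unethical actions the model actually catches.</p>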
      <p>An important aspect, mentioned above, is the model adaptability over time. In the context of SAI systems, human and machine (as agents) work as a team: they collaborate, learn from each other, and evolve together. The machine (as well as the human) will learn concrete ethical rules from interaction with humans; the machine will apply the previously learned ethical rules to concrete cases, and will also revise and update the previously learned rules if needed. Here, it is important to emphasize the collaborative aspect of SAI in revising and correcting the ethical behavior over time by both the human and the machine. In fact, this task is, in reality, a collaborative task: the machine will extract the case facts (the facts of the real-life case at hand) and present them to the human, and the human will provide an ethical judgment of the case at hand. Then the machine will learn a new rule and/or revise a previously learned rule and present it to the human. Through a collaborative dialogue, the human can correct the ethical behavior of the machine, but the machine can also automatically demonstrate to humans their errors in reasoning. In this way both will learn and improve their reasoning capabilities (mutual benefit). This adaptability aspect will be tested and evaluated in our experiments.</p>
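      <p>The collaborative revision loop described above can be sketched as follows; the rule, the exception mechanism, and all fact names are hypothetical simplifications of the machine-human dialogue we plan to implement.</p>

```python
# Hedged sketch of the collaborative revision loop: the machine applies its
# learned rules, the human corrects the verdict, and the machine stores the
# corrected case as an exception that refines its behavior over time.

rules = {("lies", True): "unethical"}   # a previously learned rule
exceptions = []                          # human-corrected cases

def machine_verdict(facts):
    for (attr, val), verdict in rules.items():
        if facts.get(attr) == val:
            # an exception learned from human feedback overrides the rule
            for exc_facts, exc_verdict in exceptions:
                if exc_facts == facts:
                    return exc_verdict
            return verdict
    return "ethical"

def human_feedback(facts, human_verdict):
    """If the human disagrees with the machine, record the case."""
    if machine_verdict(facts) != human_verdict:
        exceptions.append((facts, human_verdict))

case = {"lies": True, "white_lie_to_protect": True}
print(machine_verdict(case))        # unethical (initial rule)
human_feedback(case, "ethical")     # human corrects the machine
print(machine_verdict(case))        # ethical (revised behavior)
```

      <p>A real implementation would generalize the exception into a revised rule (e.g., via inductive logic programming) rather than memorizing single cases, but the interaction pattern is the one described above.</p>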
    </sec>
    <sec id="sec-11">
      <title>5. Conclusions and Future Work</title>
      <p>In this work, we reported on ongoing work in Work Package 6.5 of the project FAIR. A model of ethical acceptability of SAI was outlined. Many legal issues raised by SAI systems were addressed. Currently, we are concentrating on SAI ethics operationalization. Next, we will work on the operationalization of legal aspects in SAI through the development of a framework for embedding the consideration of legal issues in SAI, and then on realizing a computational model of legal reasoning for our SAI system, to be ultimately integrated in the SAI system together with the ethical model.</p>
      <p>By operationalizing SAI Ethics and legal issues, we can foster a collaborative and mutually beneficial relationship between humans and AI systems, promoting responsible and trustworthy AI development for the benefit of society. This requires a multifaceted approach that integrates technical, organizational, regulatory, and societal perspectives.</p>
      <p>A socio-technical approach to SAI systems development will be adopted, which leads to an increased acceptability of these systems [24]. To capture the socio-technical complexity, we are planning to adopt Multi-Agent Systems (MAS) for modelling the SAI system at hand [25]. The ethical and legal components in the system will be implemented as a MAS, which will act as an ethical and legal over-layer in the overall decision making process. A starting point might be the MAS prototype presented in [26, 27] for the ethical evaluation and monitoring of dialogue systems.</p>
      <p>Finally, since a human-centric approach is central to SAI, transparency and explainability are key requirements for establishing trust in SAI systems, which leads to acceptability. We would like to emphasize the prominent role of computational logic in the development of the computational model of ethical and legal acceptability of SAI. Logic Programming (LP) has a great potential for developing such prospective ethical and legal SAI systems, as logic rules are easily comprehensible by humans. Furthermore, LP is able to model causality, which is crucial for ethical and legal decision making [15].</p>
    </sec>
    <sec id="sec-12">
      <title>Acknowledgments</title>
      <p>This work was partially supported by the project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU.</p>
    </sec>
    <sec id="sec-13">
      <title>References</title>
      <p>[1] J. C. R. Licklider, Man-computer symbiosis, IRE Transactions on Human Factors in Electronics HFE-1 (1960) 4–11. doi:10.1109/THFE2.1960.4503259.</p>
      <p>[2] S. S. Grigsby, Artificial intelligence for advanced human-machine symbiosis, in: D. Schmorrow, C. Fidopiastis (Eds.), Augmented Cognition: Intelligent Technologies, volume 10915 of Lecture Notes in Computer Science, Springer, Cham, 2018. doi:10.1007/978-3-319-91470-1_22.</p>
      <p>[3] A. Carnevale, Condizione e struttura del nostro rapporto con le macchine. Dieci proposizioni per una filosofia critica dell’intelligenza artificiale antropocentrica, in: S. Barone, et al. (Eds.), L’uomo animale tecnologico, Sciascia Editore, Caltanissetta-Rome, 2024. Invited chapter, accepted, in publication.</p>
      <p>[4] A. Lombardi, L’origine dell’io. Il “mistero” dell’intelligenza da Darwin al riduzionismo contemporaneo, Studium/Ricerca 119 (2023) 651–688.</p>
      <p>[5] A. Carnevale, A. Lombardi, F. A. Lisi, Exploring ethical and conceptual foundations of human-centred symbiosis with artificial intelligence, in: G. Boella, et al. (Eds.), Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023), volume 3615 of CEUR Workshop Proceedings, 2023, pp. 30–43. URL: https://ceur-ws.org/Vol-3615/paper3.pdf.</p>
      <p>[6] A. Carnevale, A. Lombardi, F. A. Lisi, A human-centred approach to symbiotic AI: Questioning the ethical and conceptual foundation, Intelligenza Artificiale (2024). Invited paper, in publication.</p>
      <p>[7] A. Carnevale, Assessing the impacts of symbiotic AI (SAI) on individual and societal well-being, in: H. Webb, et al. (Eds.), AI Impact Assessment: methods and practices, Oxford University Press, 2024. Invited chapter, accepted, in publication.</p>
      <p>[8] C. Falchi Delgado, M. T. Ferretti, A. Carnevale, Beyond one-size-fits-all: Precision medicine and novel technologies for sex and gender-inclusive covid-19 pandemic management, in: D. Cirillo, et al. (Eds.), Innovating Health against Future Pandemics, Elsevier, 2024. Invited chapter, accepted, in publication.</p>
      <p>[9] P. Marra, I. Galatola, Effectiveness as Threat to Constitutional Systems, Springer International Publishing, Cham, 2022, pp. 1–19. doi:10.1007/978-3-319-31739-7_142-1.</p>
      <p>[10] L. Pulito, Algoritmi predittivi e valutazione della pericolosità, L’Ircocervo (2024). Invited essay, submitted.</p>
      <p>[11] P. Marra, L. Pulito, A. Carnevale, F. Lisi, A. Lombardi, A. Dyoub, A procedural idea of decision-making in the context of symbiotic AI, in: Proceedings of the 1st International Workshop on Designing and Building Hybrid Human-AI Systems, co-located with the 17th International Conference on Advanced Visual Interfaces (AVI 2024), Arenzano (Genoa), Italy, June 3rd, 2024, CEUR Workshop Proceedings, 2024. URL: https://synergy.trx.li/ceur-ws/paper9.pdf.</p>
      <p>[12] J. Morley, L. Kinsey, A. Elhalal, F. Garcia, M. Ziosi, L. Floridi, Operationalising AI ethics: barriers, enablers and next steps, AI Soc. 38 (2023) 411–423. URL: https://doi.org/10.1007/s00146-021-01308-8. doi:10.1007/S00146-021-01308-8.</p>
      <p>[13] J. Mökander, L. Floridi, Operationalising AI governance through ethics-based auditing: an industry case study, AI Ethics 3 (2023) 451–468. URL: https://doi.org/10.1007/s43681-022-00171-7. doi:10.1007/S43681-022-00171-7.</p>
      <p>[14] C. Novelli, F. Casolari, A. Rotolo, M. Taddeo, L. Floridi, AI risk assessment: A scenario-based, proportional methodology for the AI act, Digit. Soc. 3 (2024) 13. URL: https://doi.org/10.1007/s44206-024-00095-1. doi:10.1007/S44206-024-00095-1.</p>
      <p>[15] A. Dyoub, S. Costantini, F. A. Lisi, Logic programming and machine ethics, in: Proceedings of the 36th International Conference on Logic Programming (Technical Communications), ICLP 2020, UNICAL, Rende (CS), Italy, 18-24th September 2020, volume 325 of EPTCS, 2020, pp. 6–17. doi:10.4204/EPTCS.325.6.</p>
      <p>[16] A. Dyoub, S. Costantini, F. A. Lisi, Learning answer set programming rules for ethical machines, in: A. Casagrande, E. G. Omodeo (Eds.), Proceedings of the 34th Italian Conference on Computational Logic, Trieste, Italy, June 19-21, 2019, volume 2396 of CEUR Workshop Proceedings, CEUR-WS.org, 2019, pp. 300–315. URL: http://ceur-ws.org/Vol-2396/paper14.pdf.</p>
      <p>[17] A. Dyoub, S. Costantini, F. A. Lisi, Towards an ILP application in machine ethics, in: Inductive Logic Programming - 29th International Conference, ILP 2019, Plovdiv, Bulgaria, September 3-5, 2019, Proceedings, volume 11770 of Lecture Notes in Computer Science, Springer, Netherlands, 2019, pp. 26–35. doi:10.1007/978-3-030-49210-6.</p>
      <p>[18] A. Dyoub, S. Costantini, I. Letteri, Care robots learning rules of ethical behavior under the supervision</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>