=Paper= {{Paper |id=Vol-3762/522 |storemode=property |title=Acceptability of Symbiotic Artificial Intelligence: Highlights from the FAIR project |pdfUrl=https://ceur-ws.org/Vol-3762/522.pdf |volume=Vol-3762 |authors=Francesca Alessandra Lisi,Antonio Carnevale,Abeer Dyoub,Antonio Lombardi,Piero Marra,Lorenzo Pulito |dblpUrl=https://dblp.org/rec/conf/ital-ia/LisiCDLMP24 }} ==Acceptability of Symbiotic Artificial Intelligence: Highlights from the FAIR project== https://ceur-ws.org/Vol-3762/522.pdf
                                Acceptability of Symbiotic Artificial Intelligence:
                                Highlights from the FAIR project
Francesca Alessandra Lisi1,∗, Antonio Carnevale2, Abeer Dyoub1, Antonio Lombardi2, Piero Marra3 and Lorenzo Pulito4

1 University of Bari Aldo Moro, DiB Dept., via E. Orabona 4, Bari, 70125, Italy
2 University of Bari Aldo Moro, DIRIUM Dept., Piazza Umberto I, Bari, 70121, Italy
3 University of Bari Aldo Moro, LAW Dept., Piazza C. Battisti 1, Bari, 70121, Italy
4 University of Bari Aldo Moro, DJSGE Dept., Via Duomo 259, Taranto, 74123, Italy


                                                                          Abstract
In this work, we report the highlights of the work done at the University of Bari within the FAIR project concerning the acceptability of Symbiotic Artificial Intelligence.

                                                                          Keywords
                                                                          Symbiotic AI, AI Ethics, Trustworthy AI, Philosophical foundations of AI



1. Introduction

The notion of symbiosis originated in the 19th century to indicate a relationship between two taxonomically separate life forms that nevertheless give rise to a single organism. Life forms in a symbiotic relationship are not isolated but coexist in ways that are more or less essential to their survival and development. The first to advocate a symbiosis between humans and machines was J.C.R. Licklider in 1960 [1]. In his view, this kind of symbiosis would allow the computer to become an active part of the thinking process that leads to resolving technical problems, and not just an executor of solutions thought up beforehand. Licklider was mainly thinking of human-computer interfaces that would allow greater real-time collaboration and shorten the distance between human and machine language. He was pointing to a road that has since been successfully travelled, bringing us to so-called Symbiotic Artificial Intelligence (SAI).

Human-AI symbiosis promises to boost human-machine collaboration and socio-technical teaming, with mutually beneficial relationships, by augmenting (and valuing) human cognitive abilities rather than replacing them [2]. In particular, socio-technical teaming refers to the collaborative partnership between humans and machines within a broader social and technological context, where the focus is not on a substantial peer-to-peer relationship but on integrating technology into human-centric processes and systems. In this context, symbiosis involves humans and machines working together as a cohesive unit, each playing a specific role and contributing to the team's overall performance. On the one hand, humans provide the cognitive and emotional capabilities necessary for creativity, empathy, ethical decision-making, and adaptability. On the other hand, machines offer computational power, data processing, and automation capabilities that can handle repetitive and data-intensive tasks efficiently.

When applied to AI, the concept of symbiosis becomes more complex, posing a whole series of foundational questions. Addressing these questions is one of the goals of the research done by the University of Bari (together with INFN) within the project Future AI Research (FAIR). In particular, the acceptability of SAI is the subject of our investigation within a dedicated work package (WP 6.5) of FAIR. Acceptability involves value alignment between AI and humans. It is related, e.g., to understanding AI decisions, algorithmic bias, respect of privacy policies for data collected by AI systems, the tension between the security ensured by AI systems and fundamental freedoms, and the mitigation of possible safety and health risks. In FAIR, studies on the acceptability of SAI adopt an interdisciplinary approach involving researchers in AI, Law, and Philosophy.

In this paper, we briefly report the main achievements of our research on the ethical and legal acceptability of SAI in the 1st year of the project (Sections 2-3) and outline the steps needed to go from general principles to operational definitions of ethical acceptability (Section 4). Section 5 concludes the paper with final remarks.

Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29-30, 2024, Naples, Italy
∗ Corresponding author.
francesca.lisi@uniba.it (F. A. Lisi); antonio.carnevale@uniba.it (A. Carnevale); abeer.dyoub@uniba.it (A. Dyoub); antonio.lombardi@uniba.it (A. Lombardi); piero.marra@uniba.it (P. Marra); lorenzo.pulito@uniba.it (L. Pulito)
ORCID: 0000-0001-5414-5844 (F. A. Lisi); 0000-0003-2538-5579 (A. Carnevale); 0000-0003-0329-2419 (A. Dyoub); 0000-0003-1803-5423 (A. Lombardi); 0009-0003-6365-2129 (P. Marra); 0009-0000-3979-8716 (L. Pulito)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073
2. Ethical acceptability of SAI

The philosophical approach to AI is contributing to the debate on the identification and analysis of the ethical implications of algorithms. We have continued the investigation aiming to build the proposal of a methodological framework grounded in process-oriented evaluations to assess the human-centricity and acceptability of SAIs together with their societal benefit.

The research carried out concerned two different scientific lines:

Questioning the notion of "symbiosis" in SAI systems. The research focused mainly on the meaning of "symbiosis" and its applicability to AI [3]. To this end, preliminary research has been carried out on the transformation of the concept of intelligence in the history of ideas [4]. In several internal meetings, the notion of symbiosis was explored from both a biological and a phenomenological point of view, with reference to key recent AI-driven technological developments (AI and drones, AI and robotics, LLMs, ML, etc.).

Assessing the ethical impact of SAI in terms of acceptability and human-centricity. Defining the fundamental conceptual stages of a methodology for evaluating AI systems involves comparing and studying a series of international regulatory frameworks (inter alia, the AI HLEG Ethics Guidelines for Trustworthy AI, 2018-19). We have outlined a model with different fundamental steps: (a) onto-epistemic foundation of the method; (b) screening; (c) risk evaluation; (d) impact assessment. Now, we need to work within each step to refine procedures and metrics further.

The efforts in this direction have led to a joint paper presented at the BEWARE workshop organized in Rome within the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023) [5], an article accepted for publication in the journal Intelligenza Artificiale [6], and different book chapters in the final stages of publication [7, 8].

3. Legal acceptability of SAI

In line with the ethical and philosophical considerations on symbiosis, moving from the perspective of human-machine interaction to a procedural model of construction and assessment of SAI decisions, within a theory of legal methodology we have identified the first legal pragmatic conditions of algorithmic decision-making, such as that of significant human control, a notion borrowed from the international debate within the UN on autonomous weapons. In this way, symbiosis also translates into a techno-procedural legal principle capable of formalizing a human-centric value whereby persons do not lag behind technological development and society but are an integral part of the same evolutionary process and are responsible for it. We think that this approach is in keeping with the provisions, ex multis, of memorandum no. 38 of the Proposal for an EU Regulation on artificial intelligence. A procedural condition ensures the fairness and transparency of decision-making and allows recipients to understand and respect the decision itself. Indeed, in law, the content of a decision is not sufficient on its own; its enforcement also matters. Thus, effectiveness remains a constitutive element of legality [9].

Furthermore, some legal issues raised by the interaction between humans and AI were addressed in certain areas of law (such as those that most require judgments of a predictive type, like the assessment of dangerousness aimed, for example, at determining a commensurate punishment and/or granting alternative measures). It has thus been possible to observe and identify some essential conditions that should be taken into account in designing AI systems in this field, necessary to promote the symbiosis between humans and AI as well as to improve the trustworthiness, fairness and efficiency of the interaction (for example, enriching the methods of responding to crime in compliance with the fundamental principles of proportionality and dignity of the person, realizing the requests for individualization of punishment) [10].

Finally, we would like to mention that the European legal framework for AI gives minimal consideration to regulating AI-based technologies where there is a reciprocal relationship between human and machine (symbiosis). The research field of symbiotic AI is technologically challenging. In [11], we have undertaken a foundational study with the aim of conceptualizing and designing a comprehensive symbiotic approach to AI, with the goal of producing fair, legitimate, and effective outcomes while ensuring their ethical and legal acceptability. This theoretical research is expected to influence the development of Symbiotic AI systems and technological governance through model assessment.

4. Towards Operational Definitions of Ethical Acceptability of SAI

The ethical implications of Human-AI symbiosis are multifaceted and complex. Thus, it has become increasingly paramount to take into consideration the ethical issues surrounding SAI development, deployment, and impact. The concept of 'SAI Ethics' offers a nuanced perspective that emphasizes the harmonious coexistence and collaboration between humans and AI systems. Operationalizing SAI Ethics involves translating abstract ethical principles and values into concrete guidelines and practices that govern every stage of the AI lifecycle, including data collection, algorithm design, model training, evaluation, and deployment [12]. It requires a multidisciplinary approach, involving collaboration between computer scientists, ethicists, policymakers, and other stakeholders, to ensure alignment with societal values and human well-being, and to foster harmony and mutual benefit between humans and machines.

4.1. Operationalizing SAI Ethics

From a practical perspective, operationalizing SAI Ethics requires the establishment of governance frameworks, standards, and regulations to govern the responsible development, deployment, and use of AI technologies. This includes the development of ethical guidelines, codes of conduct, and best practices to guide AI practitioners and organizations in navigating ethical dilemmas and decision-making processes [13]. These tools should be domain-specific. Moreover, fostering interdisciplinary collaboration and stakeholder engagement is essential to ensure that ethical considerations are adequately addressed and that AI technologies serve the broader societal interest.

One key aspect of operationalizing SAI Ethics is the development of robust frameworks and methodologies for ethical risk assessment and mitigation. This involves identifying potential ethical risks associated with AI systems, such as bias, discrimination, privacy violations, and unintended consequences, and implementing strategies to address these risks proactively [14]. Thus, it is important to design algorithms and systems that are transparent, interpretable, and accountable, enabling stakeholders to understand how AI decisions are made and to detect and rectify ethical issues when they arise. Here we would like to highlight the role of logic programming in designing such models [15]. Additionally, operationalizing SAI Ethics requires ongoing monitoring and evaluation of AI systems in real-world contexts to ensure that they continue to operate ethically and responsibly throughout their lifecycle. From a technical perspective, operationalization should focus on human-centricity through the development of AI systems that are transparent, interpretable, and accountable. This entails implementing mechanisms for explainability and interpretability, allowing users to understand how AI algorithms make decisions and providing insights into their underlying processes. Techniques such as model interpretability, transparency tools, and algorithmic audits enable stakeholders to scrutinize AI systems and identify potential biases, errors, or unintended consequences. Additionally, ensuring the robustness and reliability of AI systems through rigorous testing, validation, and verification processes is essential to minimize the risk of harmful outcomes and instil confidence in their use.

Furthermore, operationalizing SAI Ethics necessitates the integration of ethical principles into the design and development of AI algorithms and models. This means translating ethical principles, values, and guidelines into actionable and measurable practices or procedures. We need to define specific rules, standards, or protocols that guide behavior and decision-making in ethical dilemmas or concrete situations [16, 17]. Moreover, SAI Ethics emphasizes the importance of continuous learning and adaptation. As AI technologies evolve and their societal impact unfolds, ethical standards and norms must evolve in tandem [18, 19]. This requires interdisciplinary research, ethical reflection, and stakeholder engagement to address emerging challenges and dilemmas.

4.2. Building a Computational Model of SAI Ethics

Ethical principles are abstract rules intended to guide ethical decision making and judgement. There is a variety of techniques used for the technical implementation of ethical principles. In the machine ethics literature, ethical principles are integrated into machines through top-down, bottom-up, or hybrid architectures (see [20] for a survey). However, so far, no model seems to satisfy the ethical judgement and decision-making needs of an acceptable and responsible AI system. Approaches to encoding principles into a format that computers can understand include logical reasoning, probabilistic reasoning, learning, optimisation, and case-based reasoning [21].

We argue that it is impossible to build a 'general ethical AI', i.e., a machine that is generally ethical: a machine that can reason and take ethical decisions in any domain and in every context. We believe that we need to concentrate on building domain-based ethical machines, i.e., machines that are capable of ethical reasoning and decision making in any context and situation within a specific domain, which is, in any case, still a very challenging task. Considering the purpose and the specific domain for which the AI system is developed, developers should consider the codes of ethics and conduct of the domain (domain ethics, e.g. medical ethics) as a guiding framework. Furthermore, the key aspects of SAI, such as the collaborative and cooperative nature of the human-machine relationship, the human-centric approach, the mutual benefit, the adaptability and responsiveness of SAI, and the interdisciplinary perspective, should be taken into consideration in the design decisions made by the developers.

To build a computational model of domain ethics to be integrated into the AI system, the ethical principles of the domain should be operationalized. The operationalization task should be carried out involving all stakeholders and domain ethics experts. Developers should also decide on the architecture to adopt for integrating the ethical principles.
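As a toy illustration of the top-down style surveyed in [20], a fragment of operationalized domain ethics could be encoded as machine-checkable rules over case facts. This is only a hedged sketch: the rules, fact names, and verdicts below are invented for the example and are not part of the FAIR project or of any real code of ethics.

```python
# Illustrative sketch: domain ethical rules as (condition, verdict)
# pairs evaluated over the facts of a concrete case. All rule and
# fact names are hypothetical examples.

def judge(case, rules):
    """Return the verdicts of all rules whose conditions hold for the case."""
    return [verdict for condition, verdict in rules if condition(case)]

# A hypothetical fragment of medical ethics, operationalized:
rules = [
    (lambda c: c["shares_data"] and not c["consent"],
     "unethical: personal data shared without informed consent"),
    (lambda c: c["withholds_diagnosis"],
     "unethical: patient autonomy requires disclosing the diagnosis"),
]

# Facts extracted from a concrete (hypothetical) case:
case = {"shares_data": True, "consent": False, "withholds_diagnosis": False}

print(judge(case, rules))
# ['unethical: personal data shared without informed consent']
```

An empty result means no encoded rule was violated; in a logic programming setting the same rules would naturally be written as clauses, with the added benefit of explainable derivations, as argued in [15].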
Being clear about which principle is being used will help designers to further specify what inputs are necessary for its application, which in turn will improve the ethical reasoning capabilities and the explainability of how decisions have been made [22].

However, defining principles in an intentional manner, so that they may be applied deductively, is often challenging and, in many cases, appears to be an impossible task. The issue lies in the gap between abstract, open-textured principles and tangible, concrete facts. The abstract principles should be operationalized by linking them to the facts. When ethical experts justify their conclusions in particular cases, they frequently connect ethical principles directly to the specific facts of those cases. Essentially, these established connections between ethical principles and relevant facts serve as operational (concrete) definitions of the principles. The experts operationalize the abstract principles by tying them directly to the factual context.

We are going to investigate, computationally, the possibility of operationalizing abstract ethical principles by inducing practical rules for ethical judgement and decision making in SAI systems from real-life interactions between human and machine in different domains [19, 23]. These rules evolve over time through the interaction between human and machine, which is an important aspect of SAI ethics. SAI recognizes the dynamic nature of human-AI interactions and the need for AI systems to adapt and respond to human preferences, values, and feedback over time. To achieve this, we are going to consider different domains as case studies, collect and analyze a large set of domain ethics cases, and build a computational model employing different operationalization techniques. Then, we are planning to carry out experiments to test our hypothesis that the computational model will accurately classify actions as ethical or unethical. The model will be developed using a foundational set of cases that will be collected for this purpose. The system performance will be evaluated using quantitative measures like precision and recall.

An important aspect, mentioned above, is the model's adaptability over time. In the context of SAI systems, human and machine (as agents) work as a team, collaborate, learn from each other, and evolve together. The machine (as well as the human) will learn concrete ethical rules from interaction with humans; the machine will apply the previously learned ethical rules to concrete cases, and will also revise and update the previously learned rules if needed. Here, it is important to emphasize the collaborative aspect of SAI in revising and correcting ethical behavior over time by both the human and the machine. In fact, this task is, in reality, a collaborative one: the machine will extract the case facts (the facts of the real-life case at hand) and present them to the human, and the human will provide an ethical judgment of the case at hand. Then the machine will learn a new rule and/or revise a previously learned rule and present it to the human. Through a collaborative dialogue, the human can correct the ethical behavior of the machine, but the machine can also automatically demonstrate to humans their errors in reasoning. In this way, both will learn and improve their reasoning capabilities (mutual benefit). This adaptability aspect will be tested and evaluated in our experiments.

5. Conclusions and Future Work

In this work, we reported on ongoing work in Work Package 6.5 of the project FAIR. A model of ethical acceptability of SAI was outlined, and several legal issues raised by SAI systems were addressed. Currently, we are concentrating on the operationalization of SAI ethics. Next, we will work on the operationalization of legal aspects in SAI by developing a framework for embedding considerations of legal issues in SAI, and then on realizing a computational model of legal reasoning to be ultimately integrated in the SAI system together with the ethical model.

By operationalizing SAI ethics and legal issues, we can foster a collaborative and mutually beneficial relationship between humans and AI systems, promoting responsible and trustworthy AI development for the benefit of society. This requires a multifaceted approach that integrates technical, organizational, regulatory, and societal perspectives.

A socio-technical approach to the development of SAI systems will be adopted, which leads to an increased acceptability of these systems [24]. To capture the socio-technical complexity, we are planning to adopt Multi-Agent Systems (MAS) for modelling the SAI system at hand [25]. The ethical and legal components of the system will be implemented as a MAS, which will act as an ethical and legal over-layer in the overall decision-making process. A starting point might be the MAS prototype presented in [26, 27] for the ethical evaluation and monitoring of dialogue systems.

Finally, since a human-centric approach is central to SAI, transparency and explainability are key requirements for establishing trust in SAI systems, which in turn leads to acceptability. We would like to emphasize the prominent role of computational logic in the development of the computational model of ethical and legal acceptability of SAI. Logic Programming (LP) has great potential for developing such prospective ethical and legal SAI systems, as logic rules are easily comprehensible by humans. Furthermore, LP is able to model causality, which is crucial for ethical and legal decision making [15].
Acknowledgments

This work was partially supported by the project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU.

References

[1] J. C. R. Licklider, Man-computer symbiosis, IRE Transactions on Human Factors in Electronics HFE-1 (1960) 4–11. doi:10.1109/THFE2.1960.4503259.
[2] S. S. Grigsby, Artificial intelligence for advanced human-machine symbiosis, in: D. Schmorrow, C. Fidopiastis (Eds.), Augmented Cognition: Intelligent Technologies, volume 10915 of Lecture Notes in Computer Science, Springer, Cham, 2018. doi:10.1007/978-3-319-91470-1_22.
[3] A. Carnevale, Condizione e struttura del nostro rapporto con le macchine. Dieci proposizioni per una filosofia critica dell'intelligenza artificiale antropocentrica, in: S. Barone, et al. (Eds.), L'uomo animale tecnologico, Sciascia Editore, Caltanissetta-Rome, 2024. Invited chapter, accepted, in publication.
[4] A. Lombardi, L'origine dell'io. Il "mistero" dell'intelligenza da Darwin al riduzionismo contemporaneo, Studium/Ricerca 119 (2023) 651–688.
[5] A. Carnevale, A. Lombardi, F. A. Lisi, Exploring ethical and conceptual foundations of human-centred symbiosis with artificial intelligence, in: G. Boella, et al. (Eds.), Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023), volume 3615 of CEUR Workshop Proceedings, 2023, pp. 30–43. URL: https://ceur-ws.org/Vol-3615/paper3.pdf.
[6] A. Carnevale, A. Lombardi, F. A. Lisi, A human-centred approach to symbiotic AI: Questioning the ethical and conceptual foundation, Intelligenza Artificiale (2024). Invited paper, in publication.
[7] A. Carnevale, Assessing the impacts of symbiotic AI (SAI) on individual and societal well-being, in:
    ing, Cham, 2022, pp. 1–19. doi:10.1007/978-3-319-31739-7_142-1.
[10] L. Pulito, Algoritmi predittivi e valutazione della pericolosità, L'Ircocervo (2024). Invited essay, submitted.
[11] P. Marra, L. Pulito, A. Carnevale, F. Lisi, A. Lombardi, A. Dyoub, A procedural idea of decision-making in the context of symbiotic AI, in: Proceedings of the 1st International Workshop on Designing and Building Hybrid Human-AI Systems, co-located with the 17th International Conference on Advanced Visual Interfaces (AVI 2024), Arenzano (Genoa), Italy, June 3rd, 2024, CEUR Workshop Proceedings, 2024. URL: https://synergy.trx.li/ceur-ws/paper9.pdf.
[12] J. Morley, L. Kinsey, A. Elhalal, F. Garcia, M. Ziosi, L. Floridi, Operationalising AI ethics: barriers, enablers and next steps, AI Soc. 38 (2023) 411–423. URL: https://doi.org/10.1007/s00146-021-01308-8. doi:10.1007/S00146-021-01308-8.
[13] J. Mökander, L. Floridi, Operationalising AI governance through ethics-based auditing: an industry case study, AI Ethics 3 (2023) 451–468. URL: https://doi.org/10.1007/s43681-022-00171-7. doi:10.1007/S43681-022-00171-7.
[14] C. Novelli, F. Casolari, A. Rotolo, M. Taddeo, L. Floridi, AI risk assessment: A scenario-based, proportional methodology for the AI act, Digit. Soc. 3 (2024) 13. URL: https://doi.org/10.1007/s44206-024-00095-1. doi:10.1007/S44206-024-00095-1.
[15] A. Dyoub, S. Costantini, F. A. Lisi, Logic programming and machine ethics, in: Proceedings of the 36th International Conference on Logic Programming (Technical Communications), ICLP 2020, UNICAL, Rende (CS), Italy, 18-24th September 2020, volume 325 of EPTCS, 2020, pp. 6–17. doi:10.4204/EPTCS.325.6.
[16] A. Dyoub, S. Costantini, F. A. Lisi, Learning answer set programming rules for ethical machines, in: A. Casagrande, E. G. Omodeo (Eds.), Proceedings of the 34th Italian Conference on Computational Logic, Trieste, Italy, June 19-21, 2019, volume 2396
     H. Webb, et al. (Eds.), AI Impact Assessment: meth-           of CEUR Workshop Proceedings, CEUR-WS.org, 2019,
     ods and practices, Oxford University Press, 2024.             pp. 300–315. URL: http://ceur-ws.org/Vol-2396/pa
     Invited chapter, accepted, in publication.                    per14.pdf.
 [8] C. Falchi Delgado, M. T. Ferretti, A. Carnevale, Be-     [17] A. Dyoub, S. Costantini, F. A. Lisi, Towards an
     yond one-size-fits-all: Precision medicine and novel          ILP application in machine ethics, in: Inductive
     technologies for sex and gender-inclusive covid-19            Logic Programming - 29th International Confer-
     pandemic management, in: D. Cirillo, et al. (Eds.),           ence, ILP 2019, Plovdiv, Bulgaria, September 3-5,
     Innovating Health against Future Pandemics, Else-             2019, Proceedings, volume 11770 of Lecture Notes
     vier, 2024. Invited chapter, accepted, in publication.        in Computer Science, Springer, Netherlands, 2019,
 [9] P. Marra, I. Galatola, Effectiveness as Threat to Con-        pp. 26–35. doi:10.1007/978- 3- 030- 49210- 6 .
     stitutional Systems, Springer International Publish-     [18] A. Dyoub, S. Costantini, I. Letteri, Care robots learn-
                                                                   ing rules of ethical behavior under the supervision
      of an ethical teacher (short paper), in: P. Bruno,     tions), ICLP Technical Communications 2021, Porto
      F. Calimeri, F. Cauteruccio, M. Maratea, G. Ter-       (virtual event), 20-27th September 2021, volume 345
      racina, M. Vallati (Eds.), Joint Proceedings of the    of EPTCS, 2021, pp. 182–188. doi:10.4204/EPTCS.
      1st International Workshop on HYbrid Models for        345.32 .
      Coupling Deductive and Inductive ReAsoning (HY- [27] A. Dyoub, S. Costantini, F. A. Lisi, G. De Gasperis,
      DRA 2022) and the 29th RCRA Workshop on Exper-         Demo paper: Monitoring and evaluation of ethi-
      imental Evaluation of Algorithms for Solving Prob-     cal behavior in dialog systems, in: Y. Demazeau,
      lems with Combinatorial Explosion (RCRA 2022)          T. Holvoet, J. M. Corchado, S. Costantini (Eds.), Ad-
      co-located with the 16th International Conference      vances in Practical Applications of Agents, Multi-
      on Logic Programming and Non-monotonic Reason-         Agent Systems, and Trustworthiness. The PAAMS
      ing (LPNMR 2022), Genova Nervi, Italy, September       Collection - 18th International Conference, PAAMS
      5, 2022, volume 3281 of CEUR Workshop Proceed-         2020, L’Aquila, Italy, October 7-9, 2020, Proceed-
      ings, CEUR-WS.org, Germany, 2022, pp. 1–8. URL:        ings, volume 12092 of Lecture Notes in Computer
      http://ceur-ws.org/Vol-3281/paper1.pdf.                Science, Springer, UK, 2020, pp. 403–407. doi:10.1
[19] A. Dyoub, S. Costantini, F. A. Lisi, Learning domain    007/978- 3- 030- 49778- 1\_35 .
      ethical principles from interactions with users, Dig-
      ital Society 1 (2022) 28. doi:10.1007/s44206- 022
     - 00026- y .
[20] S. Tolmeijer, M. Kneer, C. Sarasua, M. Christen,
     A. Bernstein, Implementations in machine ethics:
     A survey, ACM Computing Surveys 53 (2020) 1–38.
      URL: http://dx.doi.org/10.1145/3419633. doi:10.114
      5/3419633 .
[21] S. J. Russell, P. Norvig, Artificial Intelligence: A
      Modern Approach, Pearson Education Limited,
      2016.
[22] D. Leben, Normative principles for evaluating
      fairness in machine learning, in: Proceedings of
      the AAAI/ACM Conference on AI, Ethics, and So-
      ciety, AIES ’20, Association for Computing Ma-
      chinery, New York, NY, USA, 2020, p. 86–92. URL:
      htt ps: //d oi. org /10 .11 45/ 337 562 7.3 375 808.
      doi:10.1145/3375627.3375808 .
[23] A. Dyoub, S. Costantini, F. A. Lisi, I. Letteri, Logic-
      based machine learning for transparent ethical
      agents, in: F. Calimeri, S. Perri, E. Zumpano (Eds.),
      Proceedings of the 35th Italian Conference on Com-
      putational Logic - CILC 2020, Rende, Italy, October
      13-15, 2020, volume 2710 of CEUR Workshop Pro-
      ceedings, CEUR-WS.org, 2020, pp. 169–183. URL:
      https://ceur-ws.org/Vol-2710/paper11.pdf.
[24] G. Baxter, I. Sommerville, Socio-technical sys-
      tems: From design methods to systems engineer-
      ing, Interacting with Computers 23 (2011) 4–17.
      doi:10.1016/j.intcom.2010.07.003 .
[25] K. Van Dam, I. Nikolic, Z. Lukszo, Agent-Based
      Modelling of Socio-Technical Systems, 2013. doi:10
      .1007/978- 94- 007- 4933- 7 .
[26] A. Dyoub, S. Costantini, I. Letteri, F. A. Lisi, A
      logic-based multi-agent system for ethical monitor-
      ing and evaluation of dialogues, in: A. Formisano,
     Y. A. Liu, B. Bogaerts, A. Brik, V. Dahl, C. Do-
      daro, P. Fodor, G. L. Pozzato, J. Vennekens, N. Zhou
     (Eds.), Proceedings 37th International Conference
      on Logic Programming (Technical Communica-