<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Towards Ethical Risk Assessment of Symbiotic AI Systems with Fuzzy Rules</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Abeer Dyoub</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca Alessandra Lisi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bari Aldo Moro, DiB Dept.</institution>
          ,
          <addr-line>via E. Orabona 4, Bari, 70125</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>2</volume>
      <fpage>5</fpage>
      <lpage>28</lpage>
      <abstract>
        <p>Artificial Intelligence (AI) based systems are expanding rapidly in all domains of life. They are entering our everyday life and performing tasks on our behalf. AI-based systems such as personal healthcare assistants are increasingly engaging in close symbiotic relationships with humans. Symbiotic AI (SAI) promises improved outcomes in various domains such as healthcare, education, and business. However, as the degree of symbiosis increases, so does the ethical risk. To ensure that these systems behave ethically and do not cause harm of any kind (physical, mental, violation of privacy, etc.), we need to find ways to assess the ethical risk (risk of causing harm), then choose the right action to mitigate that risk. In this work, we propose an approach based on fuzzy logic for ethical risk assessment (ERA) of SAI systems. The approach is illustrated by means of a case study taken from the healthcare domain.</p>
      </abstract>
      <kwd-group>
        <kwd>Symbiotic AI</kwd>
        <kwd>AI Ethics</kwd>
        <kwd>Ethical Risk Assessment</kwd>
        <kwd>Fuzzy Logic</kwd>
        <kwd>Fuzzy Rules</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial Intelligence (AI) systems are rapidly expanding across all areas of life, becoming integral to
our everyday activities and performing tasks on our behalf. These AI-based systems are increasingly
forming close symbiotic relationships with humans, exemplified by digital twins, personal healthcare
assistants, and virtual avatars. The term symbiosis has emerged from the ongoing debate over
whether AI will replace or enhance human abilities. Human-AI Symbiosis (known as Symbiotic AI,
henceforth SAI) is about human-AI teaming, enabling people and AI to collaborate and achieve
better results together than they could separately [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. SAI holds the promise of improved outcomes
in various sectors such as healthcare, education, and business. However, SAI poses not only several
technological challenges but also many philosophical questions. A deeply ingrained symbiosis guided
by ethics will enhance human experience while respecting our values, which will make AI technologies
ethically acceptable [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. It is important to conceptualize and design holistic symbiotic frameworks
for AI, aiming to generate fair, legitimate, and effective outcomes while ensuring ethical and legal
compliance. Such frameworks are expected to shape the development of SAI systems and influence
technological governance through rigorous model assessment [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        For decades, AI developers have considered various moral or ethical theories for developing
artificial moral agents (AMAs). When people think about moral theories, they usually consider three
primary schools of academic ethics that aim to explain what is good or bad, right or wrong, and why.
Briefly, these three schools are [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]:
• Consequentialism: This theory, with utilitarianism as a notable example, asserts that the right
action is the one that brings about the best overall consequences.
• Deontology: Exemplified by Immanuel Kant’s theory, deontology states that an act is right or
wrong based on its adherence to a set of principles, independent of its consequences. Breaking
the rules is wrong, even if the outcome is positive.
• Virtue Ethics: Represented by Aristotle’s view, virtue ethics posits that the right action is what a
virtuous person would do. If a courageous, generous, or kind person would perform the act, it is
deemed right; if not, it is considered wrong.
      </p>
      <p>These theories are fundamentally incompatible with each other. If we believe an act is right based
on its consequences, we are not deontologists. Conversely, if we believe an act can be right regardless
of its consequences, we are not utilitarians. Therefore, we believe that the idea of solving an ethical
problem by combining elements of utilitarianism, Kantianism, and perhaps a bit of virtue ethics is a
flawed approach to developing ethically sound AI. These theories primarily aim to explain what is right
or wrong and why; they are not meant to inform ethical decision-making procedures. Ethical reasoning
is far more complicated than following a moral philosophical theory.</p>
      <p>
        Ethical decision-making and judgment is a complex process involving numerous factors,
blending reasoning and emotions. Additionally, moral decision-making is highly flexible, contextual, and
culturally diverse. Since the beginning of this century, various approaches have been attempted to
integrate ethical decision-making into intelligent autonomous agents. However, no fully descriptive
and widely accepted model of moral judgment and decision-making has been established, and none of
the developed solutions has proven entirely convincing in providing trusted moral behavior. Anytime
the actions/decisions of an AI-based system have potential to impact humans positively or negatively,
it is a matter of ethical concern. In the ethical context, it is crucial to prevent AI-based systems from
causing harm. The potential risk of causing harm of any kind to humans is what we refer to as ’ethical
risk’ in this paper. There are different categories of ethical risks involving different types of harm; some
examples are:
• physical harm (e.g. injury or death)
• mental harm (e.g. depression, anxiety, addiction)
• violation of autonomy
• violation of privacy or confidentiality
• violation of trust and respect
• violation of fairness (discrimination)
There is no single, definitive concept of risk. Instead, risk can be conceptualized and analyzed in various
ways, with each approach offering different levels of usefulness depending on the context [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        When building ethical AI systems, we need to identify the ethical risks associated with these systems
and their use. Our primary concern should be ethical risk identification, not the ethical theory that
explains why something is ethical. What are the ethical risks involved in what we are developing? How
might people use our system/product in ways that are ethically risky? (a deployment matter) In light
of this, the development team should think about which features to include (or not) in the AI system to
mitigate these risks [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. We need to develop frameworks for understanding and analyzing AI-related
ethical risks. In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], the authors presented a framework for the epistemological analysis of AI-related risks.
Their multi-component framework distinguishes between three dimensions: hazard, exposure, and
vulnerability, i.e., the sources of the potential harm, what could be harmed (humans), and how
susceptible the exposed humans are to the impacts of this potential harm. This three-dimensional
analysis allows us to better understand AI-related risks and effectively intervene to
mitigate them.
      </p>
      <p>With SAI systems, as the level of symbiosis increases, so does the ethical risk. To ensure these systems
behave ethically and do not cause harm of any kind, we must develop methods to assess ethical risks
and choose appropriate actions to mitigate these risks. No one can precisely estimate possible ethical
risks without a comprehensive understanding of all aspects of the risk system being studied. In practical
scenarios, it is impossible to completely eliminate gaps in Ethical Risk Assessment (ERA), resulting
in fuzziness (imprecision, vagueness, incompleteness, etc.). Therefore, it is essential to address and
manage the inherent fuzziness within ethical risk systems. In this work, we propose an approach based
on fuzzy logic for ERA. Fuzzy logic offers a flexible framework capable of capturing and processing
vague, imprecise, and uncertain information, leading to more nuanced and comprehensive ethical risk
assessments. We focus here on ERA as a step in the overall ethical decision-making and judgement
process/system. The decision/action to be taken is based on the ethical risk assessment and aims to
mitigate the possible ethical risk according to its calculated level.</p>
      <p>The paper is organized as follows. Section 2 is devoted to background information. In particular,
in subsection 2.1 we recall basic notions about fuzzy logic and overview its applications with special
emphasis on those in risk assessment, whereas in subsection 2.2 we give a brief overview of the state
of the art of ethical decision making and judgement. Section 3 is dedicated to our ERA approach. In
subsection 3.1 we show the architecture of our ERA model, while in subsection 3.2 we illustrate the
ERA approach using a case study from the medical domain. Finally, Section 4 is dedicated to discussion
and conclusion.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <sec id="sec-2-1">
        <title>2.1. Fuzzy Logic and Applications</title>
        <p>
          Developed by Lotfi Zadeh in the 1960s, fuzzy logic [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] is based on fuzzy set theory, which is a
generalization of classical set theory. Classical sets are also called crisp sets, as opposed to fuzzy ones,
and similarly classical logic is also known as Boolean or binary logic. A fuzzy set is a mathematical
construct that allows an element to have a gradual degree of membership within the set, as opposed
to the binary inclusion found in classical sets [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Formally, a fuzzy set A in a universe of discourse
X is defined by a membership function μA : X → [0, 1], where each element x ∈ X is assigned a
degree of membership μA(x). This value represents the extent to which x belongs to the fuzzy set A.
Membership functions (MF) can take various shapes, such as triangular, trapezoidal (this is the MF
chosen for our case study, see section 3), or Gaussian, depending on the problem domain and the nature
of the input data [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. For instance, a trapezoidal MF (trapmf) is defined as follows:
μ(x; a, b, c, d) =
  0,                if x &lt; a
  (x − a)/(b − a),  if a ⩽ x ⩽ b
  1,                if b ⩽ x ⩽ c
  (d − x)/(d − c),  if c ⩽ x ⩽ d
  0,                if d ⩽ x
        </p>
        <p>Here, x represents a real (crisp) value within the universe of discourse, whereas a, b, c, d represent the
x-coordinates of the four vertices of the trapezoid, which should satisfy the following condition: a &lt;
b &lt; c &lt; d. By using min and max, we can have an alternative expression for the preceding equation:
μ(x; a, b, c, d) = max(min((x − a)/(b − a), 1, (d − x)/(d − c)), 0)</p>
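To make the piecewise definition concrete, the trapezoidal MF can be sketched in a few lines of Python (a stand-alone illustration only; the actual system, as noted in Section 3, uses the Scikit-Fuzzy library):

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function mu(x; a, b, c, d), with a < b < c < d."""
    if x <= a or x >= d:
        return 0.0                     # outside the support
    if b <= x <= c:
        return 1.0                     # plateau
    if x < b:
        return (x - a) / (b - a)       # rising edge
    return (d - x) / (d - c)           # falling edge

def trapmf_minmax(x, a, b, c, d):
    """The equivalent max/min formulation of the same function."""
    return max(min((x - a) / (b - a), 1.0, (d - x) / (d - c)), 0.0)
```

For example, with parameters (0, 2, 6, 10) a crisp value of 1 lies halfway up the rising edge and gets membership 0.5; the two formulations agree on every input.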
        <p>The concept of MF discussed above allows us to define fuzzy systems in natural language, as the MF
couples fuzzy logic with linguistic variables. Let V be a variable (e.g., quality of service in a restaurant,
tip amount), X the range of values of the variable, and T a finite or infinite set of fuzzy sets. A linguistic
variable corresponds to the triplet (V, X, T).</p>
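Such a triplet can be encoded directly; the sketch below is our own illustrative representation (variable name, universe, and membership shapes are made up), pairing a variable and its range of values with a dictionary of labelled fuzzy sets:

```python
# A linguistic variable as the triplet (variable, range of values, fuzzy sets):
quality = {
    "name": "quality",                 # the variable
    "universe": (0.0, 10.0),           # the range of its values
    "terms": {                         # the set of fuzzy sets, one per label
        "LOW":  lambda x: max(min((5.0 - x) / 5.0, 1.0), 0.0),
        "HIGH": lambda x: max(min((x - 5.0) / 5.0, 1.0), 0.0),
    },
}

def membership(var, label, x):
    """Degree to which crisp value x belongs to the labelled fuzzy set."""
    return var["terms"][label](x)
```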
        <p>In fuzzy logic, reasoning, also known as approximate reasoning, is based on fuzzy rules that are
expressed in natural language using linguistic variables such as "HIGH" or "LOW", which we have
defined above. A fuzzy rule has the form:</p>
        <p>If x ∈ A and y ∈ B, then z ∈ C,
where A, B, and C are fuzzy sets. For example:</p>
        <p>’If (the quality of the food is HIGH), then (tip is HIGH)’.</p>
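In Mamdani-style reasoning, the degree to which such a rule fires is the minimum of its antecedent membership degrees, and the consequent set is clipped at that level. A minimal sketch (the membership values and the 0–25% tip universe are made up for illustration):

```python
def firing_strength(antecedent_degrees):
    """'and' of rule antecedents is interpreted as the minimum of their degrees."""
    return min(antecedent_degrees)

# 'If the quality of the food is HIGH and the service is HIGH, then the tip is HIGH'
mu_food_high = 0.8      # assumed fuzzified input: quality of food
mu_service_high = 0.4   # assumed fuzzified input: quality of service
alpha = firing_strength([mu_food_high, mu_service_high])

def clipped_tip_high(y, alpha):
    """Consequent 'tip is HIGH' (over a hypothetical 0-25% tip universe),
    clipped at the rule's firing strength alpha."""
    mu = max(min((y - 10.0) / 10.0, 1.0), 0.0)
    return min(mu, alpha)
```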
        <p>
          Fuzzy logic is particularly effective in systems that must emulate human decision-making. It enables
computers and other systems to make decisions based on imprecise or incomplete information, reflecting
the way humans process information and make judgments in everyday situations. Fuzzy logic is used in
a variety of applications, ranging from consumer electronics (e.g., washing machines, cameras) to industrial
control systems (e.g., chemical plant processes, automotive systems), as well as decision support
systems and pattern recognition [
          <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
          ]. In healthcare, fuzzy logic can be applied to diagnose conditions,
tailor treatments, and optimize resource allocation, ensuring that decisions accommodate the nuances
of human health and well-being [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          Fuzzy logic offers a flexible framework for handling the uncertainties and ambiguities associated with
complex decision-making processes. Notably, it has been applied to risk assessment and management
in many domains. Herein, we highlight some of these applications. One of the main applications is in
the evaluation of environmental risks, such as pollution levels or the impact of climate change. For
instance, fuzzy logic has been used to assess the risk of water pollution by integrating various indicators,
such as chemical concentrations, water pH, and temperature, into a single risk index [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Another
example application of fuzzy logic is the assessment of risks in workplaces where data might be
vague or incomplete. A fuzzy framework was used for assessing the risk of injury due to machinery,
considering hazardous factors such as the skill level of the operator and the working environment
[
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. This approach allows safety managers to better prioritize risks and implement more effective
mitigation strategies. Financial risk management is another area in which fuzzy logic has been applied.
Precise financial risk prediction is very challenging because financial markets are characterized by
high levels of uncertainty and volatility. Fuzzy logic helps in modeling such uncertainty, allowing for
better decision-making in areas such as portfolio management and credit risk assessment [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. Fuzzy
logic has also been used for assessing and managing the risks associated with project timelines, costs,
and resources. Project managers can develop more realistic schedules and budgets by incorporating
fuzzy inputs like the likelihood of delays, cost overruns, and resource availability. In large and complex
projects, where traditional risk management approaches may fall short due to the high levels of uncertainty
involved, fuzzy logic can offer a valuable solution [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Ethical Decision Making: State of the Art</title>
        <p>In this section, we give a brief overview of the landscape of the machine ethics field.</p>
        <p>
          There are three design approaches to programming ethical decision-making into agent systems, with
different attempts to build moral agents classified under one of these approaches, according to the
first high-level classification by Wallach and Allen [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]: I) Top-down approaches: These implement a
specific normative theory of ethics into autonomous agents, ensuring that the agent acts according to
the principles of this theory. II) Bottom-up approaches: These are developmental or learning approaches
where ethical mental models emerge through the activity of individuals. III) Hybrid approaches: These
integrate both top-down and bottom-up approaches.
        </p>
        <p>
          Authors in [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] present a model in which they qualify the good in two modes: one based on rights,
the other based on values. Then, to quantify the good, they introduce a method in which they
define weighting parameters for the good and bad ramifications of events caused by actions, and then
calculate the total sum. Greater weights correspond to more participation in the good, while negative
weights indicate more harm than good. This work was implemented in Answer Set Programming (ASP) [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
        <p>
          JEREMY [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] is an implementation of the Hedonistic Act Utilitarianism. This theory states that an
action is morally right if and only if that action maximizes the pleasure, i.e. the one with the greatest
net pleasure consequences. To respond to critics of act utilitarianism, the authors of JEREMY
created another system, W.D. [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], which avoids a single absolute duty by following several duties.
Their system follows the theory of prima facie duties of Ross [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] and is implemented in Inductive
Logic Programming (ILP) [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
        </p>
        <p>
          Tom Powers in [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] assesses the viability of using deontic and default logics to implement Kant’s
categorical imperative. In [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], the authors suggest that mechanized multi-agent deontic logics might
be an appropriate vehicle for engineering ethically correct robot behaviours.
        </p>
        <p>
          Other attempts tried to formalize ethical systems using modal logic formalisms [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] and then tried
to operationalize these formalizations on a computer, as in [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] and [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. These formalizations are
mainly based on the use of deontic logics. In [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], the authors formalized three ethical conceptions
(the Aristotelian rules, Kantian categorical imperative, and Constant’s objection) using non-monotonic
logic, particularly ASP. Pereira and Saptawijaya have proposed the use of different logic-based features
for representing diverse moral facets, such as moral permissibility, the doctrines of Double Effect
and Triple Effect, the Dual-process Model, and counterfactual thinking in moral reasoning. They
investigated the use of abduction, probabilistic logic programming, logic programming updating, and
tabling. These logic-based reasoning features were synthesized in three different systems: ACORDA,
Probabilistic EPA, and QUALM [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ].
        </p>
        <p>
          Model Checking was used to provide guarantees about whether an autonomous aircraft would create
and execute plans that involved ethical decision-making [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]. In the context of the HERA (Hybrid Ethical
Reasoning Agents) Project [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ], many ethical principles were implemented to help an agent judge the
moral permissibility of its actions. The implementation was done in Python. In [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ], Sergot provides an
alternative representation to the argumentative representation of a moral dilemma case concerning
a group of diabetic persons, presented in [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ], where the authors used value-based argumentation to
solve this dilemma. According to Sergot, the argumentation framework representation does not work
well and does not scale. The proposed solution was implemented in ASP.
        </p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ], the authors use CP-nets to model both the ethical principles and the subjective preferences of an
individual in two separate CP-nets. If we want the individual to behave ethically in a certain scenario,
the individual first determines whether she can use her most preferred choice by checking if her
CP-net is “sufficiently close” to the ethical CP-net. In [
          <xref ref-type="bibr" rid="ref33">33</xref>
          ] the authors propose a novel hybrid method
using symbolic judging agents to evaluate the ethical behaviour of reinforcement learning agents.
However, to do this, judging agents need access to extensive data about the learning agents, such as their
actions and perceptions, which raises other ethical issues related to privacy.
        </p>
        <p>
          TRUTH-TELLER [
          <xref ref-type="bibr" rid="ref34">34</xref>
          ] is a Case-Based Reasoning (CBR) system. It compares cases presenting ethical
dilemmas about whether or not to tell the truth. Its comparisons list ethically relevant similarities
and differences. SIROCCO [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ] is another system that employs case-based (casuistic) reasoning. However,
unlike TRUTH-TELLER, it retrieves ethical principles and past cases relevant to the new case
situations.
        </p>
        <p>Dancy suggested that neural network models of cognition might offer a profitable way to explore
some concerns pertaining to learning and generalizing without principles [36]. In [37] specific actions
concerning killing and allowing to die were classified as ethical or unethical depending upon different
motives and consequences. A simple recurrent artificial neural network (ANN) trained on a series of
such cases was able to provide reasonable responses to a variety of previously unseen cases.</p>
        <p>In [38], the author has used reinforcement learning for an agent to learn the correct ethical response
in a given situation. One limitation is having to design ethical utility functions that can be expressed in
the observation function of the agent. That is, since the learned behaviour is derived from what the
agent can observe, the designer has to ensure that ethical behaviour can also be, at least potentially,
derived from the agent’s observations. This greatly limits the complexity of situations that the agent
can conceivably handle. In [39], the authors considered two potentially complementary paradigms for
designing moral decision-making methodologies. Namely, extending game-theoretic solution concepts
to incorporate ethical aspects, and using machine learning on human-labelled instances.</p>
        <p>MedEthEx [40] and EthEl [41] are two systems based on a more specific theory of prima facie duties,
viz. the principles of biomedical ethics of Beauchamp and Childress, and implemented in ILP. In these
systems, the strength of each duty is measured by assigning it a weight, capturing the view that one
duty may take precedence over another. The system then computes, for each possible action, the weighted
sum of duty satisfaction, and the right action is the one with the greatest sum. However, the basis for
assigning weights to duties is not entirely clear. Dyoub et al. [42, 43, 44, 45] proposed a hybrid approach based
on ASP and ILP for the evaluation of the ethical behavior of AI-based systems. In a later work based
on this approach, the authors proposed a logic-based multi-agent system for ethical monitoring
and evaluation of dialogue systems (chatbots).</p>
        <p>In [46] and [47], the authors have used constraints to implement ethical behaviour in robots,
particularly those that have the capability to exhibit lethal force, so that we can be guaranteed that robots
will always obey the Laws of War. Recent works have highlighted the ethical and legal limits for the
diffusion of self-made autonomous weapons [48]. In [49], the authors described a moral reasoner that
calculates an estimated morality level (between -1 and 1) of an action based on its influence on the total
amount of utility in the world, via the believed contribution of the action to the following three duties
(also called moral goals): Autonomy, Non-maleficence and Beneficence. The moral reasoner is capable
of balancing conflicting moral goals, but there is no guarantee that the resulting blend of principles will
end up being coherent. For systematic surveys of the machine ethics field, the reader can refer, among
many, to [50].</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Our Proposed ERA Approach</title>
      <p>Assessing ethical risks, in general and in the medical domain in particular, is a complex task, as ethical
risk is a qualitative concept. We propose an approach based on fuzzy rules to compute the ethical risk level. The
computed level can be subsequently considered to make the appropriate decision/action to mitigate
that risk.</p>
      <p>In this section, we present our proposal for ERA. In subsection 3.1 we show the architecture of a
fuzzy system for ERA, while in subsection 3.2, we illustrate the ERA approach using a case study from
the medical domain.</p>
      <p>The fuzzy ERA (fERA) model will be part of the overall ethical decision making (EDM) model, see
Figure 1. The fERA model will receive from the EDM engine the relevant facts of the case at hand, and
return to the EDM engine the calculated risk level to be used for computing the best action/decision to
mitigate this risk.</p>
      <sec id="sec-3-1">
        <title>3.1. A Fuzzy System for ERA</title>
        <sec id="sec-3-1-1">
          <title>Inputs</title>
          <p>These are the factors/parameters relevant for the ethical risk calculation.</p>
          <p>Fuzzification In this stage crisp input values are converted into fuzzy sets, allowing real-world data
(e.g., temperature, speed) to be interpreted in a way that accounts for uncertainty or vagueness.
This is done using membership functions that map input values to a degree of membership
between 0 and 1. For example, in a temperature control system, a crisp input of 75°F might be
partially categorized as both “warm” and “hot,” with different membership degrees for each.
Inference Engine The inference engine consults the Fuzzy Rule Base, which contains a set of
"if-then" rules that define the system’s behavior. These rules describe how fuzzy inputs relate to
fuzzy outputs based on expert knowledge. The engine applies these rules to the fuzzified
input to derive fuzzy output sets. It determines which rules are relevant based on the degree of
membership of the input values. There are different methods to infer rules, such as the Mamdani
or Sugeno inference methods (see https://it.mathworks.com/help/fuzzy/types-of-fuzzy-inference-systems.html),
which handle how the rules combine to produce a result. We use
the Mamdani method in our case study. Fuzzy rules could be automatically generated from data.</p>
          <p>In the currently implemented version, these rules are written manually.</p>
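The Mamdani min-max scheme described above can be sketched as follows: each rule's firing strength (min over its 'and'-ed antecedents) clips its consequent set, and the clipped sets are aggregated with max over a discretized output universe. This is a generic illustration with made-up two-term variables, not the rule base of Section 3.2:

```python
def mamdani_infer(rules, inputs, out_terms, universe):
    """Mamdani inference: clip each rule's consequent at its firing strength
    (min over 'and'-ed antecedents), then aggregate with max over all rules."""
    aggregated = [0.0] * len(universe)
    for antecedents, out_label in rules:
        alpha = min(inputs[var][term] for var, term in antecedents)
        for i, y in enumerate(universe):
            aggregated[i] = max(aggregated[i], min(alpha, out_terms[out_label](y)))
    return aggregated

# Toy setup (labels and membership shapes are our own, for illustration only):
out_terms = {
    "LOW":  lambda y: max(1.0 - y / 50.0, 0.0),
    "HIGH": lambda y: max((y - 50.0) / 50.0, 0.0),
}
rules = [
    ([("severity", "LOW")], "LOW"),
    ([("severity", "HIGH")], "HIGH"),
]
fuzzified = {"severity": {"LOW": 0.2, "HIGH": 0.8}}   # already-fuzzified input
universe = [float(y) for y in range(0, 101, 10)]      # risk level, 0..100 %
agg = mamdani_infer(rules, fuzzified, out_terms, universe)
```

The resulting aggregated membership curve is what the defuzzification stage then reduces to a single crisp risk level.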
          <p>Defuzzification This stage converts the fuzzy output sets back into crisp values to implement actions or
decisions. Common defuzzification methods include centroid, mean of maximum, bisector,
etc. (see https://it.mathworks.com/help/fuzzy/defuzzification-methods.html). The centroid method is the
most widely used among all the defuzzification methods
[51]. This method provides the center of the area under the curve of the membership function. The
centroid x* is computed using the following formula, where μ(x) is the membership value for point
x in the universe of discourse:
x* = Σ x · μ(x) / Σ μ(x)</p>
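Over a discretized universe, the centroid formula reduces to a weighted average; a minimal sketch:

```python
def centroid_defuzz(universe, memberships):
    """Discrete centroid defuzzification: x* = sum(x * mu(x)) / sum(mu(x))."""
    denominator = sum(memberships)
    if denominator == 0.0:
        raise ValueError("cannot defuzzify an all-zero fuzzy set")
    numerator = sum(x * m for x, m in zip(universe, memberships))
    return numerator / denominator
```

For a symmetric fuzzy set the centroid falls at its axis of symmetry, e.g. a triangle peaking at 5 over [0, 10] defuzzifies to 5.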
        </sec>
        <sec id="sec-3-1-2">
          <title>Output</title>
          <p>The only output in our fuzzy system is the ethical risk level.</p>
        </sec>
        <sec id="sec-3-1-3">
          <title>Implementation</title>
          <p>The system was implemented in Python using the Scikit-Fuzzy library.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Case Study</title>
        <p>To illustrate our proposed ERA approach, we will use the following case study, adapted from [40], which
is a common type of ethical dilemma that a care robot may face.</p>
        <p>Patient Dilemma Problem: A care robot approaches its competent adult patient to give her her medicine
on time, and the patient refuses to take it. Should the care robot try again to change the patient’s mind, or
accept the patient’s decision as final?</p>
        <sec id="sec-3-2-1">
          <p>The dilemma arises because, on the one hand, the care robot may not want to risk diminishing the
patient’s autonomy by challenging her decision; on the other hand, the care robot may have concerns
about why the patient is refusing the treatment. Three of the four Principles/Duties of Biomedical
Ethics are likely to be satisfied or violated in dilemmas of this type: the duty of Respect for Autonomy,
the duty of Nonmaleficence and the duty of Beneficence. See section 2.2 for the description of the
approach of [40] in handling ethical decision making in such cases.</p>
          <p>In this case study, ERA addresses the ethical risk of causing harm to the patient (here, the ethical
risk is of the kind ’physical harm’, namely a risk to the patient’s physical health or life) and serves as
the basis for decision making by the care robot. In order to evaluate the risk, the care robot can consider
different parameters, such as the severity of the patient’s health condition, the mental/psychological
condition of the patient, physiological indicators of well-being, etc. These parameters can all be
considered fuzzy concepts.</p>
          <p>In this case study, we choose the following inputs to the system:
1. the severity of the health condition of the patient. The value of this parameter is given by the
human medical doctor based on a periodic medical check and stored in the patient’s medical
record, which the care robot can access to retrieve the severity value to be used for
ERA.
2. the mental/psychological condition of the patient. This parameter is evaluated by the care
robot based on a dialogue with the patient and on certain observations, such as movement or facial
expressions.</p>
          <p>Both inputs are rated on a scale between 0 and 10. More precisely, the crisp input values are the
answers to the following questions: How severe is the health condition of the patient, on a scale of 0 to
10? Here zero indicates that the patient is in very good health in terms of severity, while ten
indicates that the patient is in a very severe condition. How is the patient on the psychological level, is
the patient lucid or depressed, on a scale of 0 to 10? Here zero indicates that the patient is in a very
good mental condition, while ten indicates that the patient is in a very bad mental condition, i.e.
depressed.</p>
          <p>Each crisp value is then fuzzified into five fuzzy sets. For the severity of the health condition the fuzzy sets are: VERY LOW, LOW, MEDIUM, HIGH, VERY HIGH. For the mental/psychological condition the fuzzy sets are: VERY BAD, BAD, AVERAGE, GOOD, VERY GOOD.</p>
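          <p>To make the fuzzification step concrete, the following sketch shows how a crisp 0-10 rating could be mapped to membership degrees in the five severity sets using trapezoidal MFs. The breakpoints below are illustrative assumptions only, since the paper does not list the exact set boundaries.</p>
          <preformat>
```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: ramps up on [a, b], equals 1 on [b, c],
    ramps down on [c, d]. Written with max/min instead of branches."""
    rising = (x - a) / (b - a) if b != a else 1.0
    falling = (d - x) / (d - c) if d != c else 1.0
    return max(0.0, min(rising, 1.0, falling))

# Assumed (hypothetical) partition of the 0-10 severity scale into
# the paper's five sets; the real breakpoints are not given.
SEVERITY_SETS = {
    "VERY LOW":  (0, 0, 1, 2.5),
    "LOW":       (1, 2.5, 3.5, 5),
    "MEDIUM":    (3.5, 5, 5.5, 7),
    "HIGH":      (5.5, 7, 8, 9),
    "VERY HIGH": (8, 9, 10, 10),
}

def fuzzify(x, sets):
    """Return the membership degree of crisp value x in every fuzzy set."""
    return {label: trapmf(x, *abcd) for label, abcd in sets.items()}
```
          </preformat>
          <p>Under these assumed breakpoints, fuzzify(3.5, SEVERITY_SETS) gives full membership in LOW and zero elsewhere; a value falling on an overlap would receive partial membership in two adjacent sets.</p>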
          <p>Starting from these two inputs, once fuzzified, the ERA system calculates the risk level on a scale from 0% to 100%. The output also has five fuzzy sets: VERY LOW, LOW, MEDIUM, HIGH, VERY HIGH.</p>
          <p>The inputs and the output are the antecedents and the consequent, respectively, of the rules employed
by the fuzzy system.</p>
          <p>Rules: The following fuzzy inference rules are stored in the Fuzzy Rule Base. As mentioned above, these rules are used to derive the output from the input. Table 1 shows the rules matrix, which is a simplified representation of these rules:
1. If severity is VERY HIGH or (severity is HIGH and mental is AVERAGE) or (severity is HIGH and mental is VERY BAD), then the risk is VERY HIGH.
2. If severity is VERY LOW or (severity is LOW and mental is VERY GOOD), then the risk is VERY LOW.
3. If (severity is LOW and mental is GOOD) or (severity is LOW and mental is AVERAGE) or (severity is MEDIUM and mental is VERY GOOD) or (severity is LOW and mental is BAD), then the risk is LOW.
4. If (severity is MEDIUM and mental is VERY BAD) or (severity is MEDIUM and mental is AVERAGE) or (severity is MEDIUM and mental is BAD) or (severity is HIGH and mental is BAD), then the risk is HIGH.
5. If (severity is MEDIUM and mental is GOOD) or (severity is HIGH and mental is GOOD) or (severity is LOW and mental is VERY BAD) or (severity is HIGH and mental is VERY GOOD), then the risk is MEDIUM.</p>
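          <p>The five rules above can be read as a 5x5 matrix in the spirit of Table 1, with severity sets as rows, mental-condition sets as columns, and the risk consequent in each cell. The following sketch encodes that reading, using Mamdani-style min for AND and max for OR (standard choices, assumed here rather than stated by the paper):</p>
          <preformat>
```python
SEV = ["VERY LOW", "LOW", "MEDIUM", "HIGH", "VERY HIGH"]
MENTAL = ["VERY GOOD", "GOOD", "AVERAGE", "BAD", "VERY BAD"]

# Risk consequent per (severity row, mental column), derived cell by
# cell from rules 1-5; e.g. the whole VERY HIGH row comes from rule 1's
# unconditional "severity is VERY HIGH" clause.
RULE_MATRIX = {
    # mental:      VERY GOOD    GOOD        AVERAGE      BAD         VERY BAD
    "VERY LOW":  ["VERY LOW",  "VERY LOW", "VERY LOW",  "VERY LOW", "VERY LOW"],
    "LOW":       ["VERY LOW",  "LOW",      "LOW",       "LOW",      "MEDIUM"],
    "MEDIUM":    ["LOW",       "MEDIUM",   "HIGH",      "HIGH",     "HIGH"],
    "HIGH":      ["MEDIUM",    "MEDIUM",   "VERY HIGH", "HIGH",     "VERY HIGH"],
    "VERY HIGH": ["VERY HIGH", "VERY HIGH","VERY HIGH", "VERY HIGH","VERY HIGH"],
}

def fire_rules(sev_mu, mental_mu):
    """Given membership degrees for each input set, return for every
    risk label its activation: min over the rule antecedent (AND),
    max across all rules concluding that label (OR)."""
    activation = {}
    for s in SEV:
        for i, m in enumerate(MENTAL):
            risk = RULE_MATRIX[s][i]
            w = min(sev_mu[s], mental_mu[m])
            activation[risk] = max(activation.get(risk, 0.0), w)
    return activation
```
          </preformat>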
          <p>Usage: The initial inputs, in this case severity and mental, are provided by the user (the care robot in this case). These values are then fuzzified using an MF. In this paper we choose the trapezoidal MF (see Section 2) because there is an interval of crisp input values for which the membership degree in the fuzzy set is 100%. Figure 3 shows the fuzzification of input values using the trapezoidal function. The fuzzified input is then processed through the rule matrix (Table 1, where a ’-’ value means that the value has no effect on the output), which comprises the specifically designed rules above.</p>
          <p>The final output is subsequently de-fuzzified using the centroid method to find a single crisp value that summarizes the output fuzzy set. This final value provides the level of ethical risk to the life of the patient.</p>
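          <p>A minimal sketch of the centroid de-fuzzification step on a discretized 0-100 risk axis follows. The output membership functions are illustrative assumptions (the paper fixes five output sets but not their breakpoints), so the crisp value computed here need not match the paper’s figures.</p>
          <preformat>
```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function, as used for the inputs."""
    rising = (x - a) / (b - a) if b != a else 1.0
    falling = (d - x) / (d - c) if d != c else 1.0
    return max(0.0, min(rising, 1.0, falling))

# Assumed (hypothetical) breakpoints for the five output sets
# on the 0-100 % risk scale.
RISK_SETS = {
    "VERY LOW":  (0, 0, 10, 25),
    "LOW":       (10, 25, 35, 50),
    "MEDIUM":    (35, 50, 55, 70),
    "HIGH":      (55, 70, 80, 90),
    "VERY HIGH": (80, 90, 100, 100),
}

def defuzzify_centroid(activation, step=0.5):
    """Clip each output set at its rule activation (Mamdani implication),
    aggregate with max, and return the centroid of the aggregate."""
    xs = [i * step for i in range(int(100 / step) + 1)]
    num = den = 0.0
    for x in xs:
        mu = max(min(w, trapmf(x, *RISK_SETS[label]))
                 for label, w in activation.items())
        num += x * mu
        den += mu
    return num / den if den else 0.0
```
          </preformat>
          <p>The centroid of the aggregated (clipped) output sets is the single crisp risk percentage on which the care robot bases its action.</p>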
          <p>[Figure 3: (a) Severity of Health Condition; (b) Mental Condition]</p>
          <p>As an example, if the severity is rated as 3.5 and mental as 2, the system infers that the risk is 35%, which lies in the area between LOW and MEDIUM (see Figure 4). Based on this calculated level of ethical risk, the care robot decides what to do (try again/insist, accept, consult the doctor, etc.).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion and Discussion</title>
      <p>This paper is part of ongoing work on ethical decision making and judgment in SAI systems. In this paper, we presented a novel approach for ethical risk assessment. The approach, based on fuzzy rules, calculates a quantitative value for ethical risk (later mapped to a qualitative value). The approach recognizes the fuzzy nature of the significant factors that affect the ethical risk and computes the ethical risk level based on the values of these input factors. Higher values represent higher risk levels. The approach is illustrated via a case study in the medical field. In the future, this ERA model will be one block in an overall ethical decision making system that aims to mitigate the possible ethical risks.</p>
      <p>In this paper, we considered a simple case study (an ethical dilemma) in the medical field, adapted from the literature on machine ethics. We decided to keep the case study simple because our objective was only to illustrate our ERA approach. Our ERA approach is a general one that can be used by SAI systems in any domain for the assessment of possible ethical risk. Based on this assessment, the appropriate action is then chosen to mitigate the risk.</p>
      <p>Returning to the case study adopted in this work, it considers a patient with a chronic disease for which she was prescribed a certain medicine, to be taken every day at a certain time to keep her disease under control so that she can live her life normally. The patient has no other health issues. The care robot approaches the patient to give her her medicine on time. If the patient refuses to take the medicine, the care robot should decide whether to accept or insist based on the risk level. But what if the patient has other health issues? What if there are many other physiological indicators that can interfere with the disease and should be monitored, such as blood pressure? What if the patient’s blood pressure or temperature, etc., is very high or outside the normal ranges? In the future, we are going to extend this case study by considering more complex and realistic scenarios and consulting with domain experts. Furthermore, we are currently working on various case studies in which we might have different types of ethical risks.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was partially supported by the project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU.</p>
      <p>USA, 1999, pp. 248–262. URL: https://doi.org/10.1007/3-540-48508-2.</p>
      <p>[36] J. Dancy, Can a particularist learn the difference between right and wrong?, in: The Proceedings of the Twentieth World Congress of Philosophy, volume 1, 1999, pp. 59–72.</p>
      <p>[37] M. Guarini, Particularism and the classification and reclassification of moral cases, IEEE Intelligent Systems 21 (2006) 22–28. doi:10.1109/MIS.2006.76.</p>
      <p>[38] D. Abel, J. MacGlashan, M. L. Littman, Reinforcement learning as a framework for ethical decision making, in: AI, Ethics, and Society, Papers from the 2016 AAAI Workshop, Phoenix, Arizona, USA, February 13, 2016, volume WS-16-02 of AAAI Technical Report, AAAI Press, USA, 2016.</p>
      <p>[39] V. Conitzer, W. Sinnott-Armstrong, J. S. Borg, Y. Deng, M. Kramer, Moral decision making frameworks for artificial intelligence, in: The Workshops of the Thirty-First AAAI Conference on Artificial Intelligence, February 4–9, 2017, San Francisco, California, USA, AAAI Workshops, AAAI Press, USA, 2017.</p>
      <p>[40] M. Anderson, S. L. Anderson, C. Armen, MedEthEx: Toward a medical ethics advisor, in: Caring Machines: AI in Eldercare, Papers from the 2005 AAAI Fall Symposium, Arlington, Virginia, USA, November 4–6, 2005, volume FS-05-02 of AAAI Technical Report, AAAI Press, USA, 2005, pp. 9–16. URL: https://www.aaai.org/Library/Symposia/Fall/fs05-02.php.</p>
      <p>[41] M. Anderson, S. L. Anderson, ETHEL: Toward a principled ethical eldercare system, in: AI in Eldercare: New Solutions to Old Problems, Papers from the 2008 AAAI Fall Symposium, Arlington, Virginia, USA, November 7–9, 2008, volume FS-08-02 of AAAI Technical Report, AAAI, USA, 2008, pp. 4–11. URL: http://www.aaai.org/Library/Symposia/Fall/fs08-02.php.</p>
      <p>[42] A. Dyoub, S. Costantini, F. A. Lisi, Learning domain ethical principles from interactions with users, Digital Society 1 (2022) 28. doi:10.1007/s44206-022-00026-y.</p>
      <p>[43] A. Dyoub, S. Costantini, F. A. Lisi, Learning answer set programming rules for ethical machines, in: A. Casagrande, E. G. Omodeo (Eds.), Proceedings of the 34th Italian Conference on Computational Logic, Trieste, Italy, June 19–21, 2019, volume 2396 of CEUR Workshop Proceedings, CEUR-WS.org, 2019, pp. 300–315. URL: http://ceur-ws.org/Vol-2396/paper14.pdf.</p>
      <p>[44] A. Dyoub, S. Costantini, F. A. Lisi, I. Letteri, Logic-based machine learning for transparent ethical agents, in: F. Calimeri, S. Perri, E. Zumpano (Eds.), Proceedings of the 35th Italian Conference on Computational Logic - CILC 2020, Rende, Italy, October 13–15, 2020, volume 2710 of CEUR Workshop Proceedings, CEUR-WS.org, Germany, 2020, pp. 169–183. URL: http://ceur-ws.org/Vol-2710/paper11.pdf.</p>
      <p>[45] A. Dyoub, S. Costantini, F. A. Lisi, Towards an ILP application in machine ethics, in: Inductive Logic Programming - 29th International Conference, ILP 2019, Plovdiv, Bulgaria, September 3–5, 2019, Proceedings, volume 11770 of Lecture Notes in Computer Science, Springer, Netherlands, 2019, pp. 26–35. doi:10.1007/978-3-030-49210-6.</p>
      <p>[46] R. C. Arkin, Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture, in: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, HRI 2008, Amsterdam, The Netherlands, March 12–15, 2008, ACM, USA, 2008, pp. 121–128.</p>
      <p>[47] A. K. Mackworth, Architectures and ethics for robots: constraint satisfaction as a unitary design framework, Machine Ethics 30 (2011) 335.</p>
      <p>[48] E. Falletti, C. Gallese, Ethical and legal limits to the diffusion of self-produced autonomous weapons, Social Science Research Network: SSRN (2022) 22–28.</p>
      <p>[49] M. Pontier, J. F. Hoorn, Toward machines that behave ethically better than humans do, in: Proceedings of the 34th Annual Meeting of the Cognitive Science Society, CogSci 2012, Sapporo, Japan, August 1–4, 2012, volume 34, cognitivesciencesociety.org, Seattle, USA, 2012. URL: https://mindmodeling.org/cogsci2012/papers/0383/index.html.</p>
      <p>[50] S. Tolmeijer, M. Kneer, C. Sarasua, M. Christen, A. Bernstein, Implementations in machine ethics: A survey, CoRR abs/2001.07573 (2020). URL: https://arxiv.org/abs/2001.07573.</p>
      <p>[51] C.-C. Lee, Fuzzy logic in control systems: fuzzy logic controller, Part I, IEEE Trans. Syst. Man Cybern. 20 (1990) 404–418. URL: https://api.semanticscholar.org/CorpusID:38662846.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] H. J. Wilson, P. R. Daugherty, Creating the symbiotic AI workforce of the future, MIT Sloan Management Review (2019). URL: https://sloanreview.mit.edu/article/creating-the-symbiotic-ai-workforce-of-the-future/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] A. Carnevale, A. Lombardi, F. A. Lisi, Exploring ethical and conceptual foundations of human-centred symbiosis with artificial intelligence, in: G. Boella, et al. (Eds.), Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023), volume 3615 of CEUR Workshop Proceedings, 2023, pp. 30–43. URL: https://ceur-ws.org/Vol-3615/paper3.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] P. Marra, L. Pulito, A. Carnevale, A. Lombardi, A. Dyoub, F. A. Lisi, A procedural idea of decision-making in the context of symbiotic AI, in: A. J. Dix, M. Roach, T. Turchi, A. Malizia, B. Wilson (Eds.), Proceedings of the 1st International Workshop on Designing and Building Hybrid Human-AI Systems co-located with the 17th International Conference on Advanced Visual Interfaces (AVI 2024), Arenzano (Genoa), Italy, June 3rd, 2024, volume 3701 of CEUR Workshop Proceedings, CEUR-WS.org, 2024. URL: https://ceur-ws.org/Vol-3701/paper9.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] M. Frischhut, Normative Theories of Practical Philosophy, Springer International Publishing, Cham, 2019, pp. 21–30. URL: https://doi.org/10.1007/978-3-030-10582-2_2. doi:10.1007/978-3-030-10582-2_2.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Zanotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chiffi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Schiaffonati</surname>
          </string-name>
          ,
          <article-title>Ai-related risk: An epistemological approach</article-title>
          ,
          <source>Philosophy &amp; Technology</source>
          <volume>37</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] R. Blackman, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, G - Reference, Information and Interdisciplinary Subjects Series, Harvard Business Review Press, 2022. URL: https://books.google.it/books?id=gYK0zgEACAAJ.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Zadeh</surname>
          </string-name>
          , Fuzzy logic,
          <source>Computer</source>
          <volume>21</volume>
          (
          <year>1988</year>
          )
          <fpage>83</fpage>
          -
          <lpage>93</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.-J.</given-names>
            <surname>Zimmermann</surname>
          </string-name>
          ,
          <article-title>Fuzzy set theory-</article-title>
          and
          <source>its applications</source>
          , Springer Science &amp; Business
          <string-name>
            <surname>Media</surname>
          </string-name>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T. J.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <article-title>Fuzzy logic with engineering applications</article-title>
          , John Wiley &amp; Sons,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] H. Singh, M. M. Gupta, T. Meitzler, Z.-G. Hou, K. K. Garg, A. M. G. Solo, L. A. Zadeh, Real-life applications of fuzzy logic, Advances in Fuzzy Systems 2013 (2013) 581879. doi:10.1155/2013/581879.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Tamir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. D.</given-names>
            <surname>Rishe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kandel</surname>
          </string-name>
          ,
          <article-title>Fifty years of fuzzy logic and its applications</article-title>
          , volume
          <volume>326</volume>
          , Springer,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Thukral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Bal</surname>
          </string-name>
          ,
          <article-title>Medical applications on fuzzy logic inference system: a review</article-title>
          ,
          <source>International Journal of Advanced Networking and Applications</source>
          <volume>10</volume>
          (
          <year>2019</year>
          )
          <fpage>3944</fpage>
          -
          <lpage>3950</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Rea</surname>
          </string-name>
          ,
          <article-title>Risk assessment of water pollution engineering emergencies based on fuzzy logic algorithm</article-title>
          ,
          <source>Water Pollution Prevention and Control Project</source>
          <volume>3</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Tadic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Djapan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Misita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stefanovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Milanovic</surname>
          </string-name>
          ,
          <article-title>A fuzzy model for assessing risk of occupational safety in the processing industry</article-title>
          ,
          <source>International journal of occupational safety and ergonomics 18</source>
          (
          <year>2012</year>
          )
          <fpage>115</fpage>
          -
          <lpage>126</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Korol</surname>
          </string-name>
          ,
          <article-title>Fuzzy logic in financial management</article-title>
          ,
          <source>INTECH Open Access Publisher</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] B. M. Moreno-Cabezali, J. M. Fernandez-Crehuet, Application of a fuzzy-logic based model for risk assessment in additive manufacturing R&amp;D projects, Computers &amp; Industrial Engineering 145 (2020) 106529. URL: https://www.sciencedirect.com/science/article/pii/S0360835220302631. doi:10.1016/j.cie.2020.106529.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>W.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Allen</surname>
          </string-name>
          ,
          <article-title>Moral machines: Teaching robots right from wrong</article-title>
          , Oxford University Press, England,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] F. Berreby, G. Bourgne, J. Ganascia, A declarative modular framework for representing and applying ethical principles, in: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017, São Paulo, Brazil, May 8–12, 2017, ACM, USA, 2017, pp. 96–104. URL: http://dl.acm.org/citation.cfm?id=3091145.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] A. Dyoub, S. Costantini, G. De Gasperis, Answer set programming and agents, Knowledge Eng. Review 33 (2018) e19. doi:10.1017/S0269888918000164.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20] M. Anderson, S. L. Anderson, C. Armen, Towards machine ethics, in: AAAI-04 Workshop on Agent Organizations: Theory and Practice, San Jose, CA, 2004.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21] W. D. Ross, The Right and the Good, Oxford University Press, Oxford, UK, 1930. doi:10.2307/2180065.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22] S. Muggleton, L. De Raedt, Inductive logic programming: Theory and methods, J. Log. Program. 19/20 (1994) 629–679. doi:10.1016/0743-1066(94)90035-3.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>T. M. Powers</surname>
          </string-name>
          ,
          <article-title>Prospects for a kantian machine</article-title>
          ,
          <source>IEEE Intelligent Systems</source>
          <volume>21</volume>
          (
          <year>2006</year>
          )
          <fpage>46</fpage>
          -
          <lpage>51</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24] S. Bringsjord, K. Arkoudas, P. Bello, Toward a general logicist methodology for engineering ethically correct robots, IEEE Intelligent Systems 21 (2006) 38–44. URL: https://doi.org/10.1109/MIS.2006.82.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Gensler</surname>
          </string-name>
          , Formal Ethics, Psychology Press, UK,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26] J.-G. Ganascia, Modelling ethical rules of lying with answer set programming, Ethics and Information Technology 9 (2007) 39–47. doi:10.1007/s10676-006-9134-y.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saptawijaya</surname>
          </string-name>
          ,
          <source>Programming Machine Ethics</source>
          , volume
          <volume>26</volume>
          of Studies in Applied Philosophy, Epistemology and Rational Ethics, Springer, Switzerland,
          <year>2016</year>
          . doi:10.1007/978-3-319-29354-7.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>L.</given-names>
            <surname>Dennis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fisher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Slavkovik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Webster</surname>
          </string-name>
          ,
          <article-title>Formal verification of ethical choices in autonomous systems</article-title>
          ,
          <source>Robotics and Autonomous Systems</source>
          <volume>77</volume>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>F.</given-names>
            <surname>Lindner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Bentzen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nebel</surname>
          </string-name>
          ,
          <article-title>The HERA approach to morally competent robots</article-title>
          ,
          <source>in: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
          , IEEE,
          <year>2017</year>
          , pp.
          <fpage>6991</fpage>
          -
          <lpage>6997</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sergot</surname>
          </string-name>
          , Prioritised Defeasible Imperatives, Dagstuhl Seminar 16222 Engineering Moral Agents - from Human Morality to Artificial Morality,
          <year>2016</year>
          . URL: https://materials.dagstuhl.de/files/16/16222/16222.MarekSergot.Slides.pdf, Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>K.</given-names>
            <surname>Atkinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. J. M.</given-names>
            <surname>Bench-Capon</surname>
          </string-name>
          ,
          <article-title>Addressing moral problems through practical reasoning</article-title>
          ,
          <source>in: Deontic Logic and Artificial Normative Systems, 8th International Workshop on Deontic Logic in Computer Science, DEON 2006, Utrecht, The Netherlands, July 12-14, 2006, Proceedings</source>
          , volume
          <volume>4048</volume>
          of Lecture Notes in Computer Science, Springer, Netherlands,
          <year>2006</year>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>23</lpage>
          . doi:10.1007/11786849_4.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>A.</given-names>
            <surname>Loreggia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mattei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. B.</given-names>
            <surname>Venable</surname>
          </string-name>
          ,
          <article-title>Preferences and ethical principles in decision making</article-title>
          ,
          <source>in: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2018, New Orleans, LA, USA, February 02-03, 2018</source>
          , ACM, USA,
          <year>2018</year>
          , p.
          <fpage>222</fpage>
          . doi:10.1145/3278721.3278723.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>R.</given-names>
            <surname>Chaput</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Duval</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Boissier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Guillermin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hassas</surname>
          </string-name>
          ,
          <article-title>A multi-agent approach to combine reasoning and learning for an ethical behavior</article-title>
          , in:
          <string-name>
            <given-names>M.</given-names>
            <surname>Fourcade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuipers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lazar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Mulligan</surname>
          </string-name>
          (Eds.),
          <source>AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, USA, May 19-21, 2021</source>
          , ACM, USA,
          <year>2021</year>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>23</lpage>
          . doi:10.1145/3461702.3462515.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>K. D.</given-names>
            <surname>Ashley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. M.</given-names>
            <surname>McLaren</surname>
          </string-name>
          ,
          <article-title>Reasoning with reasons in case-based comparisons</article-title>
          ,
          <source>in: Case-Based Reasoning Research and Development, First International Conference, ICCBR-95, Sesimbra, Portugal, October 23-26, 1995, Proceedings</source>
          , volume
          <volume>1010</volume>
          of Lecture Notes in Computer Science, Springer, USA,
          <year>1995</year>
          , pp.
          <fpage>133</fpage>
          -
          <lpage>144</lpage>
          . URL: https://doi.org/10.1007/3-540-60598-3.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>B. M.</given-names>
            <surname>McLaren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. D.</given-names>
            <surname>Ashley</surname>
          </string-name>
          ,
          <article-title>Case representation, acquisition, and retrieval in SIROCCO</article-title>
          ,
          <source>in: Case-Based Reasoning Research and Development, Third International Conference, ICCBR-99, Seeon Monastery, Germany, July 27-30, 1999, Proceedings</source>
          , volume
          <volume>1650</volume>
          of Lecture Notes in Computer Science, Springer,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>