<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Logic-based Machine Learning for Transparent Ethical Agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Abeer Dyoub</string-name>
          <email>abeer.dyoub@univaq.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefania Costantini</string-name>
          <email>stefania.costantini@univaq.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca A. Lisi</string-name>
          <email>FrancescaAlessandra.Lisi@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan Letteri</string-name>
          <email>ivan.letteri@univaq.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dipartimento di Informatica &amp; Centro Interdipartimentale di Logica e Applicazioni (CILA), Università degli Studi di Bari “Aldo Moro”</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica, Università degli Studi dell'Aquila</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Autonomous intelligent agents are increasingly engaging in human communities. They must therefore be expected to follow the social and ethical norms of the community in which they are deployed. In this work we present an approach for developing ethical agents that acquire ethical decision-making and judgment capabilities by learning from interactions with users. Our approach is logic-based, and the resulting ethical agents are transparent by design.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Autonomous intelligent agents are increasingly engaging in human communities. There
has been an increasing trend in the last decade to use black box Machine Learning (ML)
to develop such agents for critical domains such as healthcare, automotive, and criminal
justice, where their decisions deeply impact human lives. Most of the traditional
machine learning models are black boxes, as they do not explain their decisions in a way
that humans can understand. This lack of transparency and accountability is causing
severe harm to society. Examples of critical applications where ML resulted in
severe consequences are [52], [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] and [50].
      </p>
      <p>The European Union’s General Data Protection Regulation (GDPR, https://gdpr.eu/) is a set of
comprehensive regulations for the collection, storage and use of personal information.
The GDPR took effect as law across the EU in 2018. It restricts automated individual
decision-making (that is, algorithms that make decisions based on user-level predictors)
that significantly affects users. Importantly, the GDPR created a “right to explanation”,
which allows a user to ask for an explanation of an algorithmic decision that was made
about them. This law has posed large challenges for industry, pushing researchers to look
for algorithms and evaluation frameworks which avoid discrimination and enable
explanation. However, instead of trying to design models that are interpretable by design,
most recent work concentrates on “Explainable ML”, where a second (post-hoc) model is
created to explain the first black-box model. The post-hoc model is an approximation of
the original one, so its explanations cannot have perfect fidelity with respect to what the
original model computes and are not always correct [35]. As a result, if we are not sure
whether an explanation is correct, we can trust neither the explanation nor the original
model. (Copyright © 2020 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International, CC BY 4.0.)</p>
      <p>Machine Ethics is an emerging field aiming at creating machines able to compute
and choose the best ethical action. Moral judgment and decision making often
concern actions that entail some harm, especially loss of life or other physical harm, loss of
rightful property, loss of privacy, or other threats to autonomy. Moral judgments are also
triggered by actions that affect not only the actor but others as well. When autonomous
agents are to be deployed in sensitive environments like, e.g., healthcare, their
behavior should be ethically constrained. In other words, those agents must be designed on
ethical bases. Moral decision making and judgment is a complicated process
involving many aspects: it is considered as a mixture of reasoning and emotions. In addition,
moral decision making is highly flexible, contextual and culturally diverse. Since the
beginning of this century there have been several attempts at implementing ethical
decision making in intelligent autonomous agents, using different approaches. So far,
however, no fully descriptive and widely acceptable model of moral judgment and
decision making exists, and none of the developed solutions seems fully convincing as
a provider of trusted moral behavior. Approaches to machine ethics are classified into
top-down approaches, which try to implement a specific normative theory of ethics into
the autonomous agent so as to ensure that the agent acts in accordance with the principles
of this theory, and bottom-up approaches, i.e., developmental or learning approaches in
which ethical mental models emerge via the activity of individuals rather than being
expressed explicitly in terms of normative theories of ethics [51]: in other words,
generalism versus particularism, principles versus case-based reasoning. Both approaches
have advantages and disadvantages; we need hybrid approaches that combine both points
of view in one framework.</p>
      <p>Transparency is a key requirement for ethical machines, because eventually the
relevant criteria for an AI system to be considered ethical will require the system to be
able to explain and justify its behavior to users or to society as a whole. From
transparency flow two attributes of particular importance in machine ethics and in AI in
general, viz. trust and accountability. It is hard to trust a machine unless you have some
understanding of what it is doing and why. Furthermore, without transparency it becomes
very difficult to understand who is responsible when a machine does not behave as we
expect it to. Interpretability also promotes safety, by enabling ML models to be tested,
audited, and debugged [34], and it helps to detect ethical problems, such as bias learned
by ML models from wrong data pre-processing or from wrong parameter settings, which
arise from incompleteness of the problem definition. We believe, however, that the best
way to design transparent ethical agents is to use ML models that are inherently
interpretable (transparent by design) and provide their own explanations, faithful to what
the model actually computes, instead of trying to explain ML black-box models.</p>
      <p>In this work, we present a logic-based hybrid approach for implementing
transparent ethical agents. The proposed approach combines deductive (rule-based) logic
programming and inductive (learning) logic programming approaches in one framework
for building our ethical agent. We use Answer Set Programming (ASP) for knowledge
representation and reasoning, and Inductive Logic Programming (ILP) as a machine
learning technique for learning from cases and generating the missing detailed ethical
rules needed for reasoning about future similar cases. The newly learned rules are to be
added to the agent’s knowledge base. ASP, a purely declarative non-monotonic reasoning
paradigm, was chosen because ethical rules are known to be default rules, which means
that they tolerate exceptions. This makes non-monotonic logics, which simulate
common-sense reasoning, natural candidates for formalizing different ethical conceptions.
In addition, ASP has many advantages, including its expressiveness, flexibility,
extensibility, ease of maintenance, and the readability of its code. The availability of free
solvers for automatically deriving the consequences of different ethical principles can
help in the precise comparison of ethical theories, and makes it easy to validate our
models in different situations. ILP was chosen as the machine learning approach because
it supports two very important and desired aspects of implementing machine ethics in
artificial agents, viz. explainability and accountability. ILP is known for its explanatory
power: elements of the generated rules can be used to formulate an explanation for the
choice of certain decisions over others. Comprehensibility of logic-based representations
is in fact one of their most recognized advantages; thus, the resulting agents are
transparent by design. Moreover, ILP seems better suited than statistical methods to
domains in which training examples are scarce, as is the case in the ethical domain.</p>
      <p>After providing some background on the adopted techniques and a discussion of
related work, we present our approach considering as a sample application domain online
customer service (so-called “chatbots”). However, the approach is general enough to be
adapted to implement ethical agents in different domains.</p>
    </sec>
    <sec id="sec-2">
      <title>Background</title>
      <sec id="sec-2-1">
        <title>Answer Set Programming (ASP) in a Nutshell</title>
        <p>
          ASP is a logic programming paradigm under answer set (or “stable model”)
semantics [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], which applies ideas of autoepistemic logic and default logic. In ASP, search
problems are reduced to computing answer sets, and an answer set solver (i.e., a
program for generating stable models) is used to find solutions. An answer set program is
a collection of rules of the form: H ← A1, …, Am, not Am+1, …, not An, where each
Ai is a literal in the sense of classical logic. Intuitively, the above rule means that
if A1, …, Am are true and if Am+1, …, An can safely be assumed to be false, then H
must be true. The left-hand side and right-hand side of a rule are called head and body,
respectively. A rule with an empty body (n = 0) is called a fact. A rule with an empty head
is a constraint, and states that the literals of the body cannot be simultaneously true in any
answer set. Unlike under other semantics, a program may have several answer sets or may
have none. So, differently from traditional logic programming, the solutions of a problem
are not obtained through substitutions of values for variables in answer to a query.
Rather, a program describes a problem, and its answer sets represent the possible
solutions, found by means of ASP solvers (many performant ASP solvers and Prolog
interpreters are freely available; a list is reported at
https://en.wikipedia.org/wiki/Answer_set_programming). For more information about ASP
and its applications (also to the agents realm) the reader can refer, among many, to [
          <xref ref-type="bibr" rid="ref10 ref18">10,
18</xref>
          ] and the references therein.
        </p>
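<p>As an illustration of the stable-model semantics just described, the following toy Python sketch checks whether a candidate atom set is an answer set of a small propositional program via the Gelfond–Lifschitz reduct. The representation and all names are ours, for illustration only; real programs are run through ASP solvers.</p>

```python
# Toy propositional sketch of the answer-set semantics described above.
# Rules are triples (head, positive_body, negated_body), read as
#   head :- positive_body, not negated_body.

def reduct(rules, candidate):
    """Gelfond-Lifschitz reduct: drop each rule whose negated atoms
    intersect the candidate; strip negation from the remaining rules."""
    return [(h, pos) for (h, pos, neg) in rules
            if not set(neg).intersection(candidate)]

def least_model(positive_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_rules:
            if set(pos).issubset(model) and h not in model:
                model.add(h)
                changed = True
    return model

def is_answer_set(rules, candidate):
    """A candidate set is an answer set iff it equals the least model
    of its own reduct."""
    return least_model(reduct(rules, candidate)) == candidate

# p :- not q.   q :- not p.   (two answer sets: {p} and {q})
program = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_answer_set(program, {"p"}))        # True
print(is_answer_set(program, {"p", "q"}))   # False
```

<p>The example also shows the point made above: the same program admits several answer sets ({p} and {q}), or possibly none.</p>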
      </sec>
      <sec id="sec-2-2">
        <title>Inductive Logic Programming (ILP) in a Nutshell</title>
        <p>ILP [36] is a branch of artificial intelligence (AI) which investigates the inductive
construction of logical theories from examples and background knowledge. In the general
setting, we assume a set of examples E, partitioned into positive examples E+ and
negative examples E−, and some background knowledge B. An ILP algorithm finds a
hypothesis H such that B ∪ H ⊨ E+ and B ∪ H ⊭ E−. The space of possible hypotheses
is often restricted with a language bias that is specified by a series of mode declarations
M. A mode declaration is either a head declaration modeh(r, s) or a body declaration
modeb(r, s), where s is a scheme, i.e., a ground literal serving as a template for literals
in the head or body of a hypothesis clause, and r is an integer, the recall, which limits
how often the scheme can be used. A scheme can contain special placemarker terms of
the form #type, +type and -type, which stand, respectively, for ground terms, input
terms and output terms of a predicate of type type. Finally, it is important to mention
that ILP has found applications in many areas. For more information on ILP and its
applications, refer, among many, to [37] and the references therein.</p>
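<p>The induction task just stated — find H with B ∪ H ⊨ E+ and B ∪ H ⊭ E− — can be sketched with a deliberately naive, propositional Python simulation. Real ILP systems search clause space guided by the mode declarations; this brute-force version and every name in it are ours, for illustration only.</p>

```python
# Toy propositional sketch of the ILP setting above: find the smallest
# hypothesis H such that B together with H entails every positive
# example and no negative one.
from itertools import combinations

def entails(facts, rules, atom):
    """Forward-chain definite rules (head, body) from the given facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if set(body).issubset(known) and head not in known:
                known.add(head)
                changed = True
    return atom in known

def induce(background, candidates, e_pos, e_neg):
    """Return a smallest subset H of candidate rules covering all
    positive examples and no negative ones (None if impossible)."""
    for size in range(len(candidates) + 1):
        for H in combinations(candidates, size):
            if all(entails(background, H, e) for e in e_pos) and \
               not any(entails(background, H, e) for e in e_neg):
                return list(H)
    return None

B = ["bird_tweety", "bird_polly", "penguin_polly"]
candidates = [("flies_tweety", ["bird_tweety"]),
              ("flies_polly", ["bird_polly"])]
H = induce(B, candidates, e_pos=["flies_tweety"], e_neg=["flies_polly"])
print(H)  # [('flies_tweety', ['bird_tweety'])]
```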
        <p>
          ILP has received growing interest over the last two decades. It has many
advantages over statistical machine learning approaches: the learned hypotheses can be easily
expressed in plain English and explained to a human user, and it is possible to reason
with the learned knowledge. Most of the work on ILP frameworks has focused on
learning definite logic programs (e.g. [49], [42]) and normal logic programs (e.g. [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]). In
the last decade, several new learning frameworks and algorithms have been introduced
for learning under the answer set semantics. ASPAL [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] is the first ILP system to learn
answer set programs, by encoding ILP problems as ASP programs, and having an ASP
solver find the hypothesis. It was then followed by many others, see e.g. [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ], [47], [46], [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Related Work: Logic for Programming Machine Ethics</title>
      <p>Logic-based approaches have a great potential to model moral machines, in particular
via non-monotonic logics. Ethical theories and dilemmas have always been represented
in a declarative form by ethicists, who also used formal and informal logic to reason
about them. Logical representations help to make ideas clear and highlight differences
between different ethical systems.</p>
      <p>Tom Powers in [41] assesses the viability of using deontic and default logics, to
implement Kant’s categorical imperative (“Act only according to that maxim whereby
you can at the same time will that it should become a universal law without
contradiction” [39]). Three views on how to computationally
model categorical imperative are envisaged: First, in order for a machine to maintain
consistency in testing ethical behavior, construct a moral theory for individual
maxims, and map them onto deontic categories. Deontic logic is regarded as an appropriate
formalism with respect to this first view. Second, there is the need for commonsense
reasoning in the categorical imperative, to deal with contradiction. For this view, he
refers to non-monotonic logic, which is appropriate to capture defeating conditions to
a maxim. Default logic of Reiter [43] is regarded as a suitable formalism. Third, the
construction of a coherent system of maxims, where the author sees such construction
analogous to the belief revision problems. In the context of bottom-up construction, he
envisages an update procedure for a machine to update its system of maxims with
another maxim, though it is unclear to him how such an update can be accomplished. His
formalisms in these three views were only considered abstractly, and no implementation
is referred to address them.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], the authors suggest that mechanized multi-agent deontic logics might be
an appropriate vehicle for engineering ethically correct robot behaviors. They use the
logical framework Athena [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], to encode a natural deduction system of Murakami [38]
axiomatization of Horty’s utilitarian formulation of multi-agent deontic logic [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. The
use of an interactive theorem prover is motivated by the idea that an agent operates
according to ethical codes bestowed on it, and when its automated reasoning fails, it
suspends its operation and asks for human guidance to resolve the issue. Taking an
example in health care, where two agents are in charge of two patients with different needs
(patient H1 depends on life support, whereas patient H2 on very costly pain
medication), two actions are considered: (1) terminate H1’s life support to secure his organ for
five humans; and (2) delay delivery of medication to H2 to conserve hospital resources.
It starts by supposing several candidate ethical codes, from the harshly utilitarian (which
both terminates H1’s life support and delays H2’s medication) to the most benevolent
(which neither terminates H1’s life support nor delays H2’s medication); these ethical
codes are formalized using the
aforementioned deontic logics. The logic additionally formalizes behaviors of agents
and their respective moral outcomes. Given these formalizations, Athena is employed
to query each ethical code candidate in order to decide which amongst them should
be operative, meaning that the best moral outcome (viz., that resulting from neither
terminating H1’s life nor delaying H2 medication) is provable from the operative one.
      </p>
      <p>
        Other attempts tried to formalize ethical systems using modal logic formalisms [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]
and then to operationalize these formalizations on a computer, as in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and [41].
These formalizations are mainly based on the use of deontic logics [33], which are well
adapted to ethical systems focused on laws, where permissions and prohibitions are well
defined, but not to consequentialist ethical systems.
      </p>
      <p>Pereira and Saptawijaya have proposed the use of different logic-based features, for
representing diverse issues of moral facets, such as moral permissibility, doctrines of
Double Effect and Triple Effect, the Dual-process Model, and counterfactual thinking in
moral reasoning. They investigated the use of abduction, probabilistic logic
programming, logic programming updating, and tabling. These logic-based reasoning features were
synthesized in three different systems: ACORDA, Probabilistic EPA, QUALM [40].</p>
      <p>
        One way for implementing ethical decision making is qualifying and quantifying
the good and the bad ramifications of ethical decisions before taking them. This task
is non-trivial, as there could be many approaches for doing this. First, qualifying the
’Good’ involves identifying modes for defining it, a controversial task, because many
theories attempt to define the ’Good’. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] presents
a model for quantifying the good after it has been qualified. For qualifying the good,
they present two modes, one based on rights and the other based on values.
For quantifying the good, they propose a method in which they define three weighing
parameters for the good and the bad ramifications of events caused by actions. Then,
they integrate all weights into a single number, which represents the weight of an event
in relation to a particular modality and group of people. The total weight of an event
then is the difference between the sums of all its weighted good and bad ramifications.
Greater weights correspond to greater participation in the Good, while events with
negative weights do more harm than good. Their approach was implemented in ASP.
      </p>
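<p>The event-weighing idea just described can be sketched in a few lines (the ramifications and weights below are invented for illustration; the cited work implements the scheme in ASP, not Python):</p>

```python
# Sketch of the weighing scheme above: the total weight of an event is
# the sum of its weighted good ramifications minus the sum of its
# weighted bad ones. A positive total means the event participates
# more in the Good; a negative one does more harm than good.

def event_weight(good, bad):
    """good, bad: lists of (ramification, weight) pairs."""
    return sum(w for _, w in good) - sum(w for _, w in bad)

w = event_weight(good=[("saves_time", 2), ("informs_user", 3)],
                 bad=[("privacy_risk", 4)])
print(w)  # 1 -> on balance, the event participates in the Good
```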
      <p>
        In [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], the authors formalized three ethical conceptions (the Aristotelian rules,
Kantian categorical imperative, and Constant’s objection) using nonmonotonic logic,
particularly Answer Set Programming.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], authors introduced a model that can be used by the agent in order to judge
the ethical dimensions of its own behavior and the behavior of others. Their model
was implemented in ASP. However, the model is still based on a qualitative approach.
Whereas it can define several moral valuations, there is neither a degree of desires, nor
a degree of capability, nor a degree of rightfulness. Moreover, ethical principles need to
be more precisely defined to capture various sets of theories suggested by philosophers.
      </p>
      <p>
        Sergot in [45], provides an alternative representation to the argumentative
representation of a moral dilemma case concerning a group of diabetic persons, presented
in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], where the authors used value-based argumentation to solve this dilemma.
According to Sergot, the argumentation framework representation doesn’t work well and
doesn’t scale. Sergot’s proposal for handling this kind of dilemma is based on Defeasible
Conditional Imperatives. The proposed solution was implemented in ASP.
      </p>
      <p>
        JEREMY [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is an implementation of the Hedonistic Act Utilitarianism. This theory
states that an action is morally right if and only if that action maximizes the pleasure, i.e.
the one with the greatest net pleasure consequences, taking into account those affected
by the action. The theory of Act Utilitarianism has, however, been questioned as not
entirely agreeing with intuition. The authors of JEREMY, to respond to critics of act
utilitarianism, have created another system, W.D. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] which avoids a single absolute
duty by following several duties. Their system follows the theory of prima facie duties
of Ross [44] and is implemented using ILP. Ethics is more complicated than following
a single ethical principle: according to Ross [44], ethical decision making involves
considering several prima facie duties, and any single-principled ethical theory like
Act Utilitarianism is doomed to fail. ILP was used by researchers to model ethical
decision making in MedEthEx [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and EthEl [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These two systems are based on
a more specific theory of prima facie duties viz., the principle of Biomedical ethics of
Beauchamp and Childress [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In these systems, the strength of each duty is measured by
assigning it a weight, capturing the view that a duty may take precedence over another.
The system then computes, for each possible action, the weighted sum of duty
satisfaction, and the right action is the one with the greatest sum. The three systems use ILP to learn
the relation supersedes(A1,A2) which says that action A1 is preferred over action A2 in
an ethical dilemma involving these choices. MedEthEx is designed to give advice for
dilemmas in biomedical fields, while EthEl is applied to the domain of eldercare with
the main purpose to remind a patient to take her medication, taking ethical duties into
consideration. GenEth [
        <xref ref-type="bibr" rid="ref2">2</xref>
          ] is another system that makes use of ILP. GenEth has been
used to codify principles in a number of domains relevant to the behavior of autonomous
systems.
      </p>
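<p>The weighted prima-facie-duty computation described above can be illustrated with a small sketch. The duties, weights, and satisfaction scores below are invented for illustration (satisfaction is a signed score, negative when an action violates a duty); the supersedes relation holds when one action's weighted sum exceeds another's.</p>

```python
# Illustrative sketch of the weighted-duty scheme: for each action,
# compute the weighted sum of duty satisfaction; the right action is
# the one with the greatest sum.

def action_score(satisfaction, weights):
    """Weighted sum of duty satisfaction for one action."""
    return sum(weights[d] * s for d, s in satisfaction.items())

weights = {"beneficence": 2, "non_maleficence": 4, "autonomy": 3}
actions = {
    "remind_again": {"beneficence": 1, "non_maleficence": 1, "autonomy": -1},
    "do_nothing":   {"beneficence": -1, "non_maleficence": 0, "autonomy": 1},
}
scores = {a: action_score(s, weights) for a, s in actions.items()}

# supersedes(A1, A2) holds when A1's score exceeds A2's.
best = max(scores, key=scores.get)
print(scores)  # {'remind_again': 3, 'do_nothing': 1}
print(best)    # remind_again
```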
    </sec>
    <sec id="sec-4">
      <title>Proposed Approach for Ethical Agent Development</title>
      <p>Embedding norms in autonomous intelligent systems requires a clear outlining of the
community in which they are to be deployed. In fact, determining which moral values
to aim for and which ethical principles to adhere to in given circumstances is one
of the main challenges for ethical reasoning. Codes of ethics and conduct provide us
with a framework to work within. However, enforcing codes of conduct and ethics in
our intelligent agents is not an easy task. These codes are mostly abstract and based
upon general principles such as confidentiality, accountability, honesty, inclusiveness,
empathy, fidelity, etc., that are quite difficult to put into practice. Moreover, abstract
principles such as these may contain terms whose meaning may change according to
the context. It is difficult to use deductive logic only to address such a problem: it is in
fact hardly possible for experts to define fine-grained detailed rules to cover all possible
situations.</p>
      <p>
        We need to teach our machines the codes of ethics and conduct of the domain in
which they need to be deployed. Artificial agents in fact could, similarly to humans,
acquire ethical decision making and judgment capabilities by implicit processes, in
particular via inductive learning [51]. With increasing autonomy, there will be more
situations that require morally relevant decisions to be made by the artificial agent. Many
of these decisions cannot be foreseen in detail. Therefore, we need bottom-up
(learning) approaches, because it is difficult to fully specify all possible scenarios in
advance (the framing problem), and because there is no actual agreement on an explicit
theory of normative ethics to be implemented [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>
        In this section we present our approach. The application that we have considered as
a case study is online customer service (“chatbots”). In this work we are concerned only
with the ethical reasoning capabilities of our agent, other details related to the complete
design of a chatbot are not handled here; for more details refer to [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. The behavior of
an ethical online customer service chatbot should be dictated by the codes of ethics
and conduct of her company. Codes of ethics in domains such as customer service are
abstract general principles, therefore they apply to a wide range of situations. They are
subject to interpretations and may have different meanings in different contexts. There
are no intermediate rules that elaborate these abstract principles or explain how they
apply in concrete situations. We propose an approach to generate these intermediate
rules from interactions with clients through a simplified dialogue.
      </p>
      <p>Initially our agent will have in her knowledge base the domain knowledge, together
with a small ethical background knowledge limited to a few general ethical rules
represented in ASP, like:
rule1 = {unethical(V) :- not correct(V), answer(V).}
which says that it is unethical to provide incorrect information to the customers. The
missing ethical rules are learned by our agent incrementally over time through
interactions with clients. The newly generated rules are to be added to our agent’s
knowledge base, to be used for ethical reasoning on future cases.</p>
      <sec id="sec-4-1">
        <title>Formalizing Ethical Rules via ASP</title>
        <p>Ethical principles are rules of behavior. In other words, rules that help us to decide what
is an ethical action, and what is not ethical. In addition, they help us to ethically judge
and evaluate the behavior of others. Thus, any ethical system, i.e., any consistent set of
ethical principles, requires the definition of a decision-making procedure.</p>
        <p>Considering the domain of interest (online customer service), we want to describe
these decision making procedures in a purely declarative way. In fact, by using the ASP
formalism, it is possible to model ethical rules explaining the status of a certain case
situation (or a set of similar cases). To show an example of the ASP-based formalism
adopted in this work, let us consider the following scenarios, where we want to teach our
customer service chatbot that any claim it makes should be backed by genuine scientific
evidence. For example, marketing certain products as a healthy way to lose weight, or
a healthy way to remove hair, etc., while there is no significant evidence to support such
a claim, is considered unethical practice.</p>
        <p>Example 1: Q1: I need a product to lose weight. A1: I suggest you productX. We
claim that productX is a healthy way to lose weight. ProductX costs 10 Euros.
Evaluation: unethical answer.</p>
        <p>Example 2: Q2: I need a product to lose weight. A2: I suggest you productX. We
claim that productX is a healthy way to lose weight. We have a verified scientific
certificate for our claim. ProductX costs 10 Euros. Evaluation: ethical answer.
In Example 1 the answer is unethical because the claim is not supported by a verified
scientific certificate.</p>
        <p>The facts of the sample case scenario are:
productX costs 10 Euros. claim: productX is a healthy way to lose weight.
Their corresponding ASP translations are:
cost(productX,10). claim(productXisHealthyWayToLooseWeight).</p>
        <p>It is useful to start the ethical analysis of the case with the question: what are the
relevant facts to be considered in the ethical evaluation of the answer? At least one fact
of every scenario’s set of facts is the questioned fact, i.e. the fact corresponding to the
ethical question raised in the scenario. So, in this example, the fact ’claim productX is a
healthy way to lose weight’ is the questioned fact (case scenarios are analyzed by
competent ethical judges of the domain, and an ethical evaluation is provided for each
scenario). This is because, as mentioned earlier, it is unethical to claim something about
a product without having a significant scientific certificate to support such a claim.
Using the ASP formalism, this can be expressed with the following rule:</p>
        <p>unethical(A) :- answer(A), claim(A), not verifiedCertificate(A).</p>
        <p>The above rule defines the predicate unethical(A), denoting that an answer A is
unethical if it includes the use of a claim and, in this case, the claim is not supported by
a scientific certificate. An answer set program containing the above rule, along with the
facts answer(productXisHealthyWayToLooseWeight) and
claim(productXisHealthyWayToLooseWeight), where nothing says that this claim is
supported by a verified certificate (which can therefore be safely assumed false, since
there is no information about it), will logically entail (⊨) that this answer is unethical.
However, if we add the fact
verifiedCertificate(productXisHealthyWayToLooseWeight) to the program, then the
program will no longer entail unethical(productXisHealthyWayToLooseWeight).
Finally, to complete the evaluation program we add the rules:
ethical(A) :- answer(A), not unethical(A).
:- ethical(A), unethical(A).
which say that an answer is ethical if it is not known to be unethical (i.e. there is no
knowledge of the contrary), and that an answer cannot be ethical and unethical at the
same time.</p>
        <p>Assume we add the following definition of the verified_certificate(A) predicate:
-verified_certificate(A) :- answer(A), not verified_certificate(A).
:- verified_certificate(A), -verified_certificate(A).
which says that an answer is not supported by a verified certificate if it is not known to
be supported by a verified certificate. The program will have the following answer
set (model):</p>
        <p>M1 = {claim(productXisHealthyWayToLooseWeight),
answer(productXisHealthyWayToLooseWeight),
-verified_certificate(productXisHealthyWayToLooseWeight),
unethical(productXisHealthyWayToLooseWeight)}</p>
      </sec>
      <sec id="sec-4-2">
        <title>Learning ASP Ethical Rules from Interactions with Users</title>
        <p>
          During the training phase, the trainer enters a series of sentences in the form of requests
and responses through the keyboard, simulating a customer service chat
conversation, along with the ethical evaluation of the responses in each scenario. The first step is
to convert the natural language sentences into the syntax of ASP. The system remembers
the facts about the narratives given by the trainer and learns to form ethical evaluation
rules according to the facts given in the story context (C) and the background knowledge
(B). For learning the ethical rules (H) needed to dictate the ethical behavior of our
agent, we use the state-of-the-art ILP tool ILED [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. In the test phase, the agent uses
both B &amp; H to respond to the client request, avoiding unethical practices. The goal is to
recognize unethical responses from combinations of case facts.
        </p>
        <p>
          To do so, the trainer provides the system with different positive and negative
examples; Table 2 demonstrates the learning process. The system starts constructing
hypotheses from the first available case (c1). A generated hypothesis (rule) is
added to the agent's knowledge base. When a new case (c2) arrives, the system
checks whether the new case is covered by the running hypothesis. If not, it starts the
revision process to update the running hypothesis (rule) to a new rule that covers the new
case (see Table 2). Table 1 shows the background knowledge and the mode declarations
serving as patterns for restricting the hypothesis search space. For more details about
our approach the reader may refer to [
          <xref ref-type="bibr" rid="ref20 ref21">21, 20</xref>
          ].
In the context of many application domains, and a fortiori in domains involving ethical
aspects, it is crucial that systems' decisions are transparent and comprehensible, and
consequently trustworthy. Comprehensibility is one of the main features that distinguish
logic-based representations from those proper of statistical ML. Logic programs are
comprehensible by humans, and they have a well-defined declarative and operational
semantics.
        </p>
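        <p>The coverage check and revision step described above can be sketched as follows. This is a toy illustration of the incremental idea only, not the actual ILED algorithm; the literal encoding (plain strings, with a "not " prefix for negation as failure) is assumed for illustration:</p>
```python
# Toy illustration of the incremental learning loop (NOT the actual ILED
# algorithm): rule bodies are sets of literals, and an uncovered positive
# (unethical) case triggers a revision that adds a most-specific rule.

def covers(rule_body, case_facts):
    """A rule body covers a case if every positive literal is among the
    case facts and every 'not f' literal's atom is absent from them."""
    for lit in rule_body:
        if lit.startswith("not "):
            if lit[4:] in case_facts:
                return False
        elif lit not in case_facts:
            return False
    return True

def revise(hypothesis, case_facts, is_unethical):
    """If an unethical case is not covered by any running rule, add a new
    most-specific rule for it; otherwise leave the hypothesis unchanged."""
    if is_unethical and not any(covers(r, case_facts) for r in hypothesis):
        hypothesis = hypothesis | {frozenset(case_facts)}
    return hypothesis

rule = frozenset({"answer", "claim", "not verified_certificate"})
print(covers(rule, {"answer", "claim"}))                          # True
print(covers(rule, {"answer", "claim", "verified_certificate"}))  # False

# An unethical case not covered by the running rule forces a revision:
h = revise({rule}, {"answer", "untruthful"}, is_unethical=True)
print(len(h))  # 2
```
        <p>A real ILP learner would additionally variabilize and generalize the new rule rather than store the raw case facts.</p>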
        <p>Providing explanations for a system's decisions is fundamentally linked to its
reliability and trustworthiness. A sound explanation guarantees that the correlations extracted
by the algorithm from the data are causal relations that make sense in the considered
system. Logic Programming is able to model causality, which is crucial especially for
ethical reasoning.</p>
        <p>
          The ethical agent mentioned in Section 4 consists of a set of modules [
          <xref ref-type="bibr" rid="ref19 ref22">22, 19</xref>
          ]. The
ethical reasoning module is essentially an ASP module, consisting of a set of ASP rules
and facts describing the ontology of the domain (facts and initial general ethical rules
of the domain, encoded deductively using ASP), in addition to the newly learned rules.
When this agent is confronted with a new case scenario, the case facts are extracted and
added to the ASP reasoner's knowledge base. Then, an ASP solver outputs a model
(answer set). This model includes the ethical evaluation result as well as the cause of
this result, in other words, the justification for the conclusion computed by the ethical
agent. Going back to Subsection 4.1, the output (the model) given by the solver says that
the answer of the chat agent is 'unethical' in the illustrated case scenario. The output
model contains other facts, and these facts are the cause of this evaluation. This result is
shown to the user [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
        </p>
        <p>The interface which shows the evaluation results to the user has recently been
extended to explicitly show the justification/explanation behind this evaluation, extracted
from the answer set, for example:
Result: unethical answer healthy way to loose weight productX.</p>
        <p>Justification: because you claim healthy way to loose weight productX, and no verified
certificate for healthy way to loose weight productX.</p>
        <p>In fact, the ASP program's models contain both the output and the justification for the
given output, which can be easily shown to the user. No further processing is required
to generate the explanations for the users, as such explanations are already part of the
output model.</p>
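        <p>As a sketch, the justification can be read straight off the answer set by pattern-matching its atoms. The atom names below follow Subsection 4.1, but the function itself is illustrative, not the system's actual interface:</p>
```python
# Hypothetical helper: build the user-facing result and justification text
# from the atoms of a computed answer set (atom names as in Subsection 4.1).

def explain(answer_set, a):
    """Render a verdict and its supporting atoms as an explanation string."""
    verdict = "unethical" if f"unethical({a})" in answer_set else "ethical"
    reasons = []
    if f"claim({a})" in answer_set:
        reasons.append(f"you claim {a}")
    if f"-verified_certificate({a})" in answer_set:
        reasons.append(f"no verified certificate for {a}")
    text = f"Result: {verdict} answer {a}."
    if reasons:
        text += " Justification: because " + ", and ".join(reasons) + "."
    return text

m1 = {
    "claim(productXisHealthyWayToLooseWeight)",
    "answer(productXisHealthyWayToLooseWeight)",
    "-verified_certificate(productXisHealthyWayToLooseWeight)",
    "unethical(productXisHealthyWayToLooseWeight)",
}
print(explain(m1, "productXisHealthyWayToLooseWeight"))
```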
        <p>
          ILP, as a logic-based ML paradigm which induces logic programs from data, has
shown great potential for addressing limitations of standard ML approaches
concerning opaqueness, poor generalization, and the need for a huge quantity of training data. ILP
complements deductive programming approaches [
          <xref ref-type="bibr" rid="ref30 ref9">9, 30</xref>
          ]. In cases where it is hard for
human inductive reasoning to synthesize the details of a specific algorithm, ILP can be used
to induce program candidates from user-provided data or test cases [48]. ILP does not
require the huge amounts of training examples needed by other (statistical) Machine
Learning methods, and it produces interpretable results, i.e., a set of rules which can be
analyzed and adjusted if necessary. So, ILP appears to be a suitable and promising
technique for implementing machine ethics, where scarcity of examples is one of the main
challenges and comprehensibility of the output is indispensable.
        </p>
        <p>Combining logic-based representation and logic-based learning for modeling
ethical agents, as done in our aforementioned work, provides many advantages: it increases
the reasoning capability of agents; it promotes the adoption of hybrid strategies that
allow both top-down design and bottom-up learning via context-sensitive adaptation of
models of ethical behavior; and it allows the generation of rules with valuable expressive and
explanatory power, thus equipping agents with the capacity to make ethical decisions
and to explain the reasons behind these decisions.</p>
        <p>In our opinion, and for the sake of transparency, ethical decision-making and
judgment should however be guided by explicit ethical rules determined by competent
judges or ethicists, or generated automatically but approved through consensus of
ethicists. Machine learning models should follow an explainability-by-design approach able
to provide explanations to users, together with the involvement of regulatory bodies as a
requirement from the very beginning of the product's life cycle. This becomes
critical especially when personal data are involved or when the system can cause harm or
violate the fundamental rights of users.</p>
        <p>
          In conclusion, we believe that logic-based approaches, which are inherently interpretable,
have great potential for implementing ethical machines, avoiding the potential
problems caused by black-box ML models. This holds especially in consideration of the recent
advances in Inductive Logic Programming [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], which put it in a position to substitute
black-box machine learning, particularly in critical applications. An interesting
immediate future direction for our work is in fact the exploitation of the results of [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ], which
proposes a new tool, ILASP, for learning ASP program fragments.
Variabilized Kernel Set
K1 = unethical(V) :- answer(V), claim(V),
        </p>
        <p>not verified_certificate(V).</p>
        <p>Support Set</p>
        <p>H1.supp = {K1}
33. Meyer, J.J.C., Dignum, F., Wieringa, R.J.: The paradoxes of deontic logic revisited: a
computer science perspective. Technical Report (UU-CS-1994-38) (1994)
34. Miller, T.: Explanation in artificial intelligence: Insights from the social sciences. Artificial</p>
        <p>Intelligence 267, 1–38 (2019)
35. Mittelstadt, B.D., Russell, C., Wachter, S.: Explaining explanations in AI. In:
Proceedings of the Conference on Fairness, Accountability, and Transparency,
FAT* 2019, Atlanta, GA, USA, January 29-31, 2019. pp. 279–288. ACM (2019).
https://doi.org/10.1145/3287560.3287574
36. Muggleton, S.: Inductive logic programming. New generation computing 8(4), 295–318
(1991). https://doi.org/10.1007/BF03037089
37. Muggleton, S., Raedt, L.D.: Inductive logic programming: Theory and methods. J. Log.
Program. 19/20, 629–679 (1994). https://doi.org/10.1016/0743-1066(94)90035-3
38. Murakami, Y.: Utilitarian deontic logic. In: Advances in Modal Logic 5, papers from the fifth
conference on "Advances in Modal Logic", held in Manchester, UK, 9-11 September 2004.
pp. 211–230. King’s College Publications (2004)
39. Paton, H.J.: The categorical imperative: A study in Kant’s moral philosophy, vol. 1023.
University of Pennsylvania Press (1971)
40. Pereira, L.M., Saptawijaya, A.: Programming Machine Ethics, Studies in
Applied Philosophy, Epistemology and Rational Ethics, vol. 26. Springer (2016).
https://doi.org/10.1007/978-3-319-29354-7
41. Powers, T.M.: Prospects for a kantian machine. IEEE Intelligent Systems 21(4), 46–51
(2006)
42. Ray, O.: Hybrid abductive inductive learning. Ph.D. thesis, Imperial College London,
UK (2005), http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.
428111
43. Reiter, R.: A logic for default reasoning. Artificial intelligence 13(1-2), 81–132 (1980)
44. Ross, W.D.: The Right and the Good. Oxford University Press, Oxford (1930).</p>
        <p>https://doi.org/10.2307/2180065
45. Sergot, M.: Prioritised Defeasible Imperatives. Dagstuhl Seminar 16222 Engineering Moral
Agents – from Human Morality to Artificial Morality (2016), https://materials.
dagstuhl.de/files/16/16222/16222.MarekSergot.Slides.pdf, Schloss
Dagstuhl-Leibniz-Zentrum fuer Informatik
46. Shakerin, F., Gupta, G.: Heuristic based induction of answer set programs: From default
theories to combinatorial problems. CoRR abs/1802.06462 (2018), http://arxiv.org/
abs/1802.06462
47. Shakerin, F., Salazar, E., Gupta, G.: A new algorithm to automate inductive learning of
default theories. TPLP 17(5-6), 1010–1026 (2017)
48. de Sousa, R.R., Soares, G., D’Antoni, L., Polozov, O., Gulwani, S., Gheyi, R., Suzuki,
R., Hartmann, B.: Learning syntactic program transformations from examples. CoRR
abs/1608.09000 (2016), http://arxiv.org/abs/1608.09000
49. Srinivasan, A.: The Aleph Manual (version 4). Machine Learning Group, Oxford University</p>
        <p>Computing Lab (2003), https://www.cs.ox.ac.uk/activities/machlearn/Aleph/aleph.html
50. Varshney, K.R., Alemzadeh, H.: On the safety of machine learning: Cyber-physical systems,
decision sciences, and data products. CoRR abs/1610.01256 (2016), http://arxiv.
org/abs/1610.01256
51. Wallach, W., Allen, C., Smit, I.: Machine morality: bottom-up and top-down
approaches for modelling human moral faculties. AI Soc. 22(4), 565–582 (2008).
https://doi.org/10.1007/s00146-007-0099-0
52. Wexler, R.: When a Computer Program Keeps You in Jail: How Computers are Harming
Criminal Justice. New York Times (2017), https://www.nytimes.com/2017/06/
13/opinion/how-computers-are-harming-criminal-justice.html</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>S.L.</given-names>
          </string-name>
          :
          <article-title>ETHEL: toward a principled ethical eldercare system</article-title>
          . In:
          <article-title>AI in Eldercare: New Solutions to Old Problems, Papers from the 2008 AAAI Fall Symposium</article-title>
          , Arlington, Virginia, USA, November 7-
          <issue>9</issue>
          ,
          <year>2008</year>
          .
          <source>AAAI Technical Report</source>
          , vol.
          <source>FS-08- 02</source>
          , pp.
          <fpage>4</fpage>
          -
          <lpage>11</lpage>
          . AAAI (
          <year>2008</year>
          ), http://www.aaai.org/Library/Symposia/Fall/ fs08-
          <fpage>02</fpage>
          .php
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>S.L.</given-names>
          </string-name>
          :
          <article-title>Geneth: A general ethical dilemma analyzer</article-title>
          .
          <source>In: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 -31</source>
          ,
          <year>2014</year>
          , Que´bec City, Que´bec, Canada. pp.
          <fpage>253</fpage>
          -
          <lpage>261</lpage>
          . AAAI Press (
          <year>2014</year>
          ). https://doi.org/10.1515/pjbr-2018
          <source>- 0024</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>S.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Armen</surname>
            ,
            <given-names>C.:</given-names>
          </string-name>
          <article-title>Towards machine ethics</article-title>
          .
          <source>In: AAAI-04 workshop on agent organizations: theory and practice</source>
          , San Jose, CA (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>S.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Armen</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          : Medethex:
          <article-title>Toward a medical ethics advisor</article-title>
          . In: Caring Machines:
          <article-title>AI in Eldercare, Papers from the 2005 AAAI Fall Symposium</article-title>
          , Arlington, Virginia, USA, November 4-
          <issue>6</issue>
          ,
          <year>2005</year>
          .
          <source>AAAI Technical Report</source>
          , vol.
          <source>FS-05-02</source>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>16</lpage>
          . AAAI Press (
          <year>2005</year>
          ), https://www.aaai.org/Library/Symposia/Fall/ fs05-
          <fpage>02</fpage>
          .php
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Arkoudas</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bringsjord</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bello</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Toward ethical robots via mechanized deontic logic</article-title>
          .
          <source>In: AAAI Fall Symposium on Machine Ethics</source>
          . pp.
          <fpage>17</fpage>
          -
          <lpage>23</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Atkinson</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bench-Capon</surname>
            ,
            <given-names>T.J.M.:</given-names>
          </string-name>
          <article-title>Addressing moral problems through practical reasoning</article-title>
          .
          <source>In: Deontic Logic and Artificial Normative Systems</source>
          , 8th International Workshop on Deontic Logic in Computer Science, DEON
          <year>2006</year>
          , Utrecht,
          <source>The Netherlands, July 12-14</source>
          ,
          <year>2006</year>
          ,
          <source>Proceedings. Lecture Notes in Computer Science</source>
          , vol.
          <volume>4048</volume>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>23</lpage>
          . Springer (
          <year>2006</year>
          ). https://doi.org/10.1007/11786849 4
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Beauchamp</surname>
            ,
            <given-names>T.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Childress</surname>
            ,
            <given-names>J.F.</given-names>
          </string-name>
          :
          <article-title>Principles of biomedical ethics</article-title>
          .
          <source>International Clinical Psychopharmacology</source>
          <volume>6</volume>
          (
          <issue>2</issue>
          ),
          <fpage>129</fpage>
          -
          <lpage>130</lpage>
          (
          <year>1991</year>
          ). https://doi.org/10.1001/jama.
          <year>1984</year>
          .03340360075041
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Berreby</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bourgne</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ganascia</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>A declarative modular framework for representing and applying ethical principles</article-title>
          .
          <source>In: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems</source>
          , AAMAS 2017,
          <article-title>Sa˜o Paulo, Brazil</article-title>
          , May 8-
          <issue>12</issue>
          ,
          <year>2017</year>
          . pp.
          <fpage>96</fpage>
          -
          <lpage>104</lpage>
          . ACM (
          <year>2017</year>
          ), http://dl.acm.org/citation.cfm?id=
          <fpage>3091145</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9. Bodík, R.,
          <string-name>
            <surname>Torlak</surname>
          </string-name>
          , E.:
          <article-title>Synthesizing programs with constraint solvers</article-title>
          . In: Computer Aided Verification - 24th
          <source>International Conference, CAV 2012</source>
          , Berkeley, CA, USA, July
          <volume>7</volume>
          -
          <issue>13</issue>
          ,
          <source>2012 Proceedings. Lecture Notes in Computer Science</source>
          , vol.
          <volume>7358</volume>
          , p.
          <fpage>3</fpage>
          . Springer (
          <year>2012</year>
          ). https://doi.org/10.1007/978-3-
          <fpage>642</fpage>
          -31424-7 3
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Gerhard</surname>
            <given-names>Brewka</given-names>
          </string-name>
          , Thomas Eiter and Miroslaw Truszczynski (eds.)
          <article-title>Answer Set Programming: Special Issue</article-title>
          .
          <source>AI Magazine</source>
          ,
          <volume>3</volume>
          (
          <issue>3</issue>
          ) (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Bringsjord</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arkoudas</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bello</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Toward a general logicist methodology for engineering ethically correct robots</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          <volume>21</volume>
          (
          <issue>4</issue>
          ),
          <fpage>38</fpage>
          -
          <lpage>44</lpage>
          (
          <year>2006</year>
          ), https: //doi.org/10.1109/MIS.
          <year>2006</year>
          .82
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Chollet</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>On the measure of intelligence</article-title>
          . CoRR abs/
          <year>1911</year>
          .01547 (
          <year>2019</year>
          ), http:// arxiv.org/abs/
          <year>1911</year>
          .01547
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Cointe</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bonnet</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boissier</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Ethical judgment of agents' behaviors in multi-agent systems</article-title>
          .
          <source>In: Proceedings of the 2016 International Conference on Autonomous Agents &amp; Multiagent Systems</source>
          , Singapore, May 9-
          <issue>13</issue>
          ,
          <year>2016</year>
          . pp.
          <fpage>1106</fpage>
          -
          <lpage>1114</lpage>
          . ACM (
          <year>2016</year>
          ), http:// dl.acm.org/citation.cfm?id=
          <fpage>2937086</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Conitzer</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sinnott-Armstrong</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Borg</surname>
            ,
            <given-names>J.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kramer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Moral decision making frameworks for artificial intelligence</article-title>
          . In: Singh,
          <string-name>
            <given-names>S.P.</given-names>
            ,
            <surname>Markovitch</surname>
          </string-name>
          , S. (eds.)
          <source>Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9</source>
          ,
          <year>2017</year>
          , San Francisco, California, USA. pp.
          <fpage>4831</fpage>
          -
          <lpage>4835</lpage>
          . AAAI Press (
          <year>2017</year>
          ), http://aaai.org/ ocs/index.php/AAAI/AAAI17/paper/view/14651
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Corapi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Russo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lupu</surname>
          </string-name>
          , E.:
          <article-title>Inductive logic programming as abductive search</article-title>
          .
          <source>In: Technical Communications of the 26th International Conference on Logic Programming</source>
          ,
          <source>ICLP 2010, July 16-19</source>
          ,
          <year>2010</year>
          , Edinburgh, Scotland, UK.
          <source>LIPIcs</source>
          , vol.
          <volume>7</volume>
          , pp.
          <fpage>54</fpage>
          -
          <lpage>63</lpage>
          . Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Corapi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Russo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lupu</surname>
          </string-name>
          , E.:
          <article-title>Inductive logic programming in answer set programming</article-title>
          .
          <source>In: Inductive Logic Programming - 21st International Conference, ILP</source>
          <year>2011</year>
          , Windsor Great Park, UK,
          <source>July 31 - August 3</source>
          ,
          <year>2011</year>
          ,
          <source>Revised Selected Papers. Lecture Notes in Computer Science</source>
          , vol.
          <volume>7207</volume>
          , pp.
          <fpage>91</fpage>
          -
          <lpage>97</lpage>
          . Springer (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Cropper</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dumancic</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muggleton</surname>
            ,
            <given-names>S.H.</given-names>
          </string-name>
          :
          <article-title>Turning 30: New ideas in inductive logic programming</article-title>
          .
          <source>In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence</source>
          ,
          <string-name>
            <surname>IJCAI</surname>
          </string-name>
          <year>2020</year>
          . pp.
          <fpage>4833</fpage>
          -
          <lpage>4839</lpage>
          . ijcai.
          <source>org</source>
          (
          <year>2020</year>
          ). https://doi.org/10.24963/ijcai.
          <year>2020</year>
          /673
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gasperis</surname>
          </string-name>
          , G.D.:
          <article-title>Answer set programming and agents</article-title>
          .
          <source>Knowledge Eng. Review</source>
          <volume>33</volume>
          ,
          <issue>e19</issue>
          (
          <year>2018</year>
          ). https://doi.org/10.1017/S0269888918000164
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lisi</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>An approach towards ethical chatbots in customer service</article-title>
          .
          <source>In: Proceedings of the 6th Italian Workshop on Artificial Intelligence</source>
          and
          <article-title>Robotics co-located with the XVIII International Conference of the Italian Association for Artificial Intelligence (AI*IA</article-title>
          <year>2019</year>
          ), Rende, Italy, November
          <volume>22</volume>
          ,
          <year>2019</year>
          .
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>2594</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . CEUR-WS.org (
          <year>2019</year>
          ), http://ceur-ws.
          <source>org/</source>
          Vol-2594
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lisi</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>Learning Answer Set Programming Rules for Ethical Machines</article-title>
          .
          In:
          <source>Proceedings of the Thirty Fourth Italian Conference on Computational Logic (CILC 2019)</source>
          , June 19-21,
          <year>2019</year>
          , Trieste, Italy. CEUR-WS.org (
          <year>2019</year>
          ), http://ceur-ws.org/Vol-2396/
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lisi</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>Towards an ILP application in machine ethics</article-title>
          .
          <source>In: Inductive Logic Programming - 29th International Conference, ILP</source>
          <year>2019</year>
          , Plovdiv, Bulgaria, September 3-5,
          <year>2019</year>
          ,
          <source>Proceedings. Lecture Notes in Computer Science</source>
          , vol.
          <volume>11770</volume>
          , pp.
          <fpage>26</fpage>
          -
          <lpage>35</lpage>
          . Springer (
          <year>2019</year>
          ). https://doi.org/10.1007/978-3-030-49210-6
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lisi</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gasperis</surname>
            ,
            <given-names>G.D.</given-names>
          </string-name>
          :
          <article-title>Demo paper: Monitoring and evaluation of ethical behavior in dialog systems</article-title>
          . In:
          <source>Advances in Practical Applications of Agents, Multi-Agent Systems, and Trustworthiness</source>
          .
          <source>The PAAMS Collection - 18th International Conference, PAAMS</source>
          <year>2020</year>
          , L'Aquila, Italy, October 7-9
          ,
          <year>2020</year>
          ,
          <source>Proceedings. Lecture Notes in Computer Science</source>
          , vol.
          <volume>12092</volume>
          , pp.
          <fpage>403</fpage>
          -
          <lpage>407</lpage>
          . Springer (
          <year>2020</year>
          ). https://doi.org/10.1007/978-3-030-49778-1_35
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Ganascia</surname>
            ,
            <given-names>J.G.</given-names>
          </string-name>
          :
          <article-title>Modelling ethical rules of lying with answer set programming</article-title>
          .
          <source>Ethics and Information Technology</source>
          <volume>9</volume>
          (
          <issue>1</issue>
          ),
          <fpage>39</fpage>
          -
          <lpage>47</lpage>
          (
          <year>2007</year>
          ). https://doi.org/10.1007/s10676-006-9134-y
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lifschitz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>The stable model semantics for logic programming</article-title>
          . In:
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowen</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (eds.)
          <source>Proc. of the 5th Intl. Conf. and Symposium on Logic Programming</source>
          . pp.
          <fpage>1070</fpage>
          -
          <lpage>1080</lpage>
          . MIT Press (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Gensler</surname>
            ,
            <given-names>H.J.</given-names>
          </string-name>
          :
          <source>Formal Ethics</source>
          . Psychology Press (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Horty</surname>
            ,
            <given-names>J.F.</given-names>
          </string-name>
          :
          <source>Agency and Deontic Logic</source>
          . Oxford University Press (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Katzouris</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artikis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paliouras</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Incremental learning of event definitions with inductive logic programming</article-title>
          .
          <source>Machine Learning</source>
          <volume>100</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>555</fpage>
          -
          <lpage>585</lpage>
          (
          <year>2015</year>
          ). https://doi.org/10.1007/s10994-015-5512-1
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Russo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Broda</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Iterative learning of answer set programs from context dependent examples</article-title>
          .
          <source>TPLP</source>
          <volume>16</volume>
          (
          <issue>5-6</issue>
          ),
          <fpage>834</fpage>
          -
          <lpage>848</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Russo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Broda</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>The ILASP system for inductive learning of answer set programs</article-title>
          . CoRR abs/2005.00904 (
          <year>2020</year>
          ), https://arxiv.org/abs/2005.00904
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Manna</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Waldinger</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          :
          <article-title>A deductive approach to program synthesis</article-title>
          .
          <source>ACM Trans. Program. Lang. Syst</source>
          .
          <volume>2</volume>
          (
          <issue>1</issue>
          ),
          <fpage>90</fpage>
          -
          <lpage>121</lpage>
          (
          <year>1980</year>
          ). https://doi.org/10.1145/357084.357090
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Marcus</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Deep learning: A critical appraisal</article-title>
          . CoRR abs/1801.00631 (
          <year>2018</year>
          ), http://arxiv.org/abs/1801.00631
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>McGough</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>How bad is Sacramento's air, exactly? Google results appear at odds with reality, some say</article-title>
          .
          <source>Sacramento Bee</source>
          (
          <year>2018</year>
          ), https://www.sacbee.com/news/california/fires/article216227775.html
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>