<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ensuring Trustworthy and Ethical Behavior in Intelligent Logical Agents?</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
<institution>Università degli Studi dell'Aquila, Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica</institution>
          ,
          <addr-line>Via Vetoio snc, Loc. Coppito, I-67010 L'Aquila</addr-line>
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Autonomous Intelligent Agents are employed in many important autonomous applications upon which the life and welfare of living beings and vital social functions may depend. Therefore, agents should be trustworthy. A priori certification techniques can be useful, but they are not sufficient for agents that evolve, and thus modify their epistemic and belief state. In this paper we propose, refine and extend techniques for run-time assurance, based upon introspective self-monitoring and checking. The aim is to build a 'toolkit' that allows an agent designer/developer to ensure trustworthy and ethical behavior.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The development of methods for implementing Intelligent Agents so as to ensure
transparent, explainable, reliable and ethical behavior is motivated by the fact that agent systems are
being widely adopted for many important autonomous applications upon which the life
and welfare of living beings and vital social functions may depend. Therefore, agents
should be trustworthy, in the sense that they can be relied upon to do what is expected
of them while not exhibiting unwanted behavior. Equally importantly, they should not
behave in improper/unethical ways in the present context, and they should not devise
new objectives of their own that are in contrast with the user's interests. They should also be
transparent, in the sense of being able to explain their actions and choices when required.
Without that, it would be hard for human users to trust such systems. Agents should also
report to their users in case the interaction with the environment leads them to
identify new objectives to pursue, as such objectives might not be in line with the user's interests.
We restrict ourselves to agent systems based upon computational logic, because they
provide transparency and explainability 'by design', as logical rules can easily be
transposed into natural-language explanations. Logic-based languages and architectures are
discussed in the survey [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. They are based (more or less directly) on the so-called BDI
('Beliefs, Desires, Intentions') model of agency [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Most pre-deployment (or 'static', or 'a priori') verification methods for logical
agents rely upon model-checking (cf. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and the references therein), and some (e.g.,
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) upon theorem proving. These techniques are able to certify 'a priori' that agents
fulfill certain requisites of trustworthiness, meaning that they do what is expected
of them and do not violate certain rules of behavior. However, such techniques can suffice
only for agents that keep their epistemic state constant during their operation and interact
with the environment in a predefined way; they are not sufficient for agents that
revise their beliefs and objectives in consequence of the interaction with a changing,
not-always-predictable environment.
      </p>
      <p>The motivation of the work presented in this paper is in fact that an agent's
behavior is in general affected by its interaction with the external world, i.e., by which
events are perceived by the agent and in which order. In many practical cases, the set of
possible events and their possible arrival orders are so large that computing all
combinations would result in a combinatorial explosion, thus making 'a priori'
verification techniques too heavy to apply and therefore impractical. In agents that learn, it
is not even possible to predict the set of events that will be observed and considered
by an agent, which might devise new objectives, not necessarily in line with the expected
agent's behavior. We believe, therefore, that in changing circumstances agents should
be able to observe and if necessary modify their own behavior, i.e., they should reflect on
themselves. The methods that we propose are not alternative but rather complementary
to existing a priori verification and testing methodologies.</p>
      <p>
        We find similarities between our approach and the point of view of Self-aware
computing: quoting [
        <xref ref-type="bibr" rid="ref7">5</xref>
        ], "Self-aware and self-expressive computing describes an emerging
paradigm for systems and applications that proactively gather information; maintain
knowledge about their own internal states and environments; and then use this
knowledge to reason about behaviors, revise self-imposed goals, and self-adapt. [...] Systems
that gather unpredictable input data while responding and self-adapting in uncertain
environments are transforming our relationship with and use of computers." Reporting
from [
        <xref ref-type="bibr" rid="ref5 ref8">6</xref>
        ], "From an autonomous agent view, a self-aware system must have sensors,
effectors, memory (including representation of state), conflict detection and handling,
reasoning, learning, goal setting, and an explicit awareness of any assumptions. The
system should be reactive, deliberative, and reflective."
      </p>
      <p>In this paper we propose new contributions to an envisaged toolkit for run-time
self-assurance of evolving agents. We specify techniques and tools for: (i) checking the
immediate, "instinctive" reactive behavior in a context-dependent way, and (ii) checking
and re-organizing an agent's operation at a more global level. In particular, we introduce
meta-rules and meta-constraints for agents' run-time self-checking (i.e., checking
occurs at run time at a certain, customizable, frequency), which can be exploited to ensure
respect of machine-ethics principles. The proposed meta-constraints are based upon a
simple interval temporal logic (defined in previous work) particularly tailored to the
agent realm, which we called A-ILTL ('Agent-Oriented Interval LTL', LTL standing as
customary for 'Linear Temporal Logic'). As A-ILTL constraints and evolutionary
expressions are defined over formulas of an underlying logic language L, one contribution
of this paper is to make A-ILTL independent of L.</p>
      <p>In A-ILTL, one can define properties that should hold in specific time instants and
time intervals, according to past and future events. In fact, Evolutionary A-ILTL
Expressions are a novel feature that we introduce in this paper, and that we have implemented
and experimented with. They are based upon specifying: (i) a sequence of events that are
supposed to have happened, represented via a notation derived from regular
expressions: one does not need to completely specify a finite sequence, but is allowed to
define partially specified and even infinite sequences; (ii) a temporal-logic-like expression
defining a property that should hold (in a given interval); (iii) a sequence of events that
are supposed to happen in the future, without affecting the property; (iv) optionally,
"repair" countermeasures to be undertaken if the property is violated. Countermeasures can
be at the object level, i.e., they can be related to the application, or at the meta-level:
e.g., they can even result in replacing a software component by a diverse alternative.
The act of checking temporal expressions can indeed be considered an introspective
act, as an agent suspends its current activities in order to envision and possibly
self-modify its own state. We do not aim to continuously monitor the entire system's state,
but rather to monitor only the activities that a designer deems relevant for keeping
the system's behavior within a desired range.</p>
      <p>
        Our work has a clear connection to the work of [
        <xref ref-type="bibr" rid="ref6 ref9">7</xref>
        ], which proposes to implement the
"restraining bolt"1 by conditioning reinforcement learning of reactive actions to obey
LTL specifications defining known ethical principles. This (very promising) method is
orthogonal to ours, because our checking is performed at run time in order, on the one
hand, to enact ethical rules that are not hard-wired but learned by the agent and, on the other
hand, to check the agent's overall BDI behavior (desires and intentions/plans should not
be unethical).
      </p>
      <p>
        A toolkit for logical agents’ run-time self-assurance can be obtained by means of
the synergy between the new features proposed here with those introduced in past work
(notably [
        <xref ref-type="bibr" rid="ref11 ref12">9, 10</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">12</xref>
        ], and [
        <xref ref-type="bibr" rid="ref10">8</xref>
        ])2. The proposed approach can be seen from two
perspectives. On the one hand, as a means to define an enhanced "restraining bolt" capable
of preventing agents from enacting unwanted behavior that was unpredictable at design time
because it was learned later on (typically via reinforcement learning). On the other hand, since "our"
agents may also be able to learn rules of behavior over time, we can have a "disobeying
robot" that can on occasion disallow behavior hardwired at design time, because in the
present agent's context such behavior violates context-dependent learned ethical rules.
      </p>
      <p>The paper is organized as follows: in Section 2 we introduce metarules for
checking reactive behavior, and in Section 3 we introduce A-ILTL constraints in theory and
in practice. In Section 4 we briefly discuss related work and then conclude. In the
examples, we adopt a Prolog-like syntax of the form Head :- Body, where Head is an
atom, Body is a conjunction of literals (atoms or negated atoms, often called
'subgoals'), and the comma stands for ∧. The specific syntax that we use is however
purely aimed at illustration: the approach can be, 'mutatis mutandis', re-worked w.r.t. a
different syntax.
1 A "restraining bolt" as imagined in the Star Wars science-fiction saga is a small cylindrical
device that, when activated, restricts a droid's actions to a set of behaviors considered
desirable/acceptable by its owners.
2 The author acknowledges the co-authors of the aforementioned papers, who contributed to
various extents to the development of this research.</p>
    </sec>
    <sec id="sec-2">
      <title>Checking Agents’ Reactive Behavior</title>
      <p>In the BDI model, an agent has objectives, and devises plans to reach these
objectives. However, most agent-oriented languages and frameworks also provide
mechanisms for 'pure' reactivity, i.e., 'instinctive' reaction to an event. The possible ethically
acceptable reactions that an agent can enact are in general strictly dependent on the
context, the agent's role and the situation. Assume, for the sake of example, an agent
(either human or artificial) which finds itself facing some other agent which might be, or
certainly is, a criminal:
– if the context is that of playing a videogame, every kind of reaction is allowed,
including beating, shooting or killing the 'enemy', except when small children are
watching;
– if the context is a role game, then the players can pretend to threaten, shoot or kill
the other players, where every action is simulated and thus harmless;
– in reality, a citizen can shout, call the police, and try to defend themselves; a policeman
can threaten, arrest, or in extreme cases shoot the suspected criminal.</p>
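      <p>By way of illustration only, this context/role dependence can be rendered as a small lookup table. The following Python sketch uses our own hypothetical names for contexts, roles and actions (the 'small children watching' proviso is omitted for brevity); it is not part of the framework itself:</p>
      <p>
```python
# Illustrative sketch (not from the paper): ethically acceptable
# reactions indexed by (context, role). All names are hypothetical.
ALLOWED_REACTIONS = {
    ("videogame", "player"): {"beat", "shoot", "kill"},
    ("role_game", "player"): {"pretend_threaten", "pretend_shoot"},
    ("reality", "citizen"): {"shout", "call_police", "self_defense"},
    ("reality", "policeman"): {"threaten", "arrest", "shoot"},
}

def ethical(context, role, action):
    """An action is acceptable only for the current context and role."""
    return action in ALLOWED_REACTIONS.get((context, role), set())

# The same action can be acceptable in one context and not in another:
assert ethical("videogame", "player", "shoot")
assert not ethical("reality", "citizen", "shoot")
```
      </p>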
      <p>
        The reaction to enact in each situation can be 'hardwired' by the agent's designer, or
it can be learned, e.g., via reinforcement learning. While in the former case a priori
verification can suffice, in the latter case run-time checking of the agent's behavior w.r.t. ethical
rules is in order, as the results of learning are in general unpredictable and to some
extent potentially unreliable. Even the method of conditioning reinforcement learning to
obey some properties proposed in [
        <xref ref-type="bibr" rid="ref6 ref9">7</xref>
        ] may not suffice, as it can hardly take contexts
and roles into account.
      </p>
      <p>We introduce a mechanism to verify and possibly enforce desired properties by
means of metalevel rules. To define such new rules, we assume that the
underlying language L at hand is augmented with a naming (or "reification") mechanism, and with
two distinguished predicates, solve and solve_not. These are
metapredicates that can be employed to control the object-level behavior. In fact, solve,
applied to (the name of) an atom which represents an action or an objective of an agent,
may specify conditions for that action/goal to be enacted; vice versa, solve_not specifies
under which conditions it should be blocked.</p>
      <p>
        Below is a simple example of the use of solve to specify that execution of an action
Act is allowed in the agent's present context of operation C and role R only if such
action is allowed, and is deemed to be ethical w.r.t. context and role. Any kind of
reasoning can be performed via metalevel rules in order to monitor and assess
base-level ethical behavior. Below, lowercase syntactic elements such as p′, c′ are names of
predicates and constants, and uppercase syntactic elements such as V′ are metavariables
(naming mechanisms have been widely studied, cf., among many, [
        <xref ref-type="bibr" rid="ref15 ref16 ref17">13–15</xref>
        ], where
metalevel mechanisms such as those that we employ here are studied).
      </p>
      <p>solve(execute_action′(Act′)) :-
  present_context(C), agent_role(R),
  allowed(C, R, Act′), ethical(C, R, Act′).</p>
      <p>We assume that solve(execute_action′(Act′)) is automatically invoked whenever
subgoal (atom) execute_action(Act) is attempted at the object level. More generally,
given any subgoal A at the base level, if there exists an applicable solve rule then such a rule
is automatically applied, and A can succeed only if solve(↑A) succeeds, where ↑A is
the name of A according to the chosen naming mechanism. We assume, also, that the
present context and the agent's role are kept in the agent's knowledge base. Since both
parameters may change, the same action may be allowed in some circumstances and
not in others.</p>
      <p>Symmetrically, we can define metarules to forbid unwanted object-level behavior,
e.g.:</p>
      <p>solve_not(execute_action′(Act′)) :-
  present_context(C), ethical_exception(C, Act′).</p>
      <p>This rule prevents successful execution of its argument, in the example
execute_action(Act), whenever solve_not(↑A) succeeds. Then, action/goal A
can succeed (according to its base-level definition) only if solve(↑A) (if defined)
succeeds and solve_not(↑A) (if defined) does not succeed.</p>
      <p>The outlined functioning corresponds to upward reflection: when the current
subgoal A is reified, applicable solve and solve_not metarules are searched for; if
applicable metarules are found, control in fact shifts from the base level to the metalevel (as solve
and solve_not can rely upon a set of auxiliary metalevel rules). If no rule is found, or
whenever the solve and solve_not metarules complete their execution, downward
reflection returns control to the base level: to the execution of A if confirmed, or to subsequent
goals/actions if A has been canceled by either failure of the applicable solve metarule
or success of the applicable solve_not metarule.</p>
      <p>Via solve and solve_not metarules, fine-grained activities of an agent can be
punctually checked, and thus allowed or disallowed, according to the context in which the
agent is presently involved and the role it presently plays.</p>
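      <p>The solve/solve_not checking cycle can be sketched in Python as follows. This is an illustrative rendering of the mechanism, not the metalevel implementation itself; the knowledge-base contents and the allowed/ethical tables are hypothetical examples:</p>
      <p>
```python
# Illustrative Python rendering of the solve/solve_not checking cycle.
# Tables and knowledge-base contents are hypothetical examples.
ALLOWED = {("reality", "citizen"): {"shout", "call_police", "shoot"}}
ETHICAL = {("reality", "citizen"): {"shout", "call_police"}}
ETHICAL_EXCEPTIONS = {("reality", "shoot")}
kb = {"present_context": "reality", "agent_role": "citizen"}

def solve(action):
    """Metalevel condition: the action must be allowed and ethical."""
    c, r = kb["present_context"], kb["agent_role"]
    return (action in ALLOWED.get((c, r), set())
            and action in ETHICAL.get((c, r), set()))

def solve_not(action):
    """Metalevel veto: a context-dependent ethical exception blocks it."""
    return (kb["present_context"], action) in ETHICAL_EXCEPTIONS

def execute_action(action):
    # Upward reflection: the subgoal is reified and the metalevel is
    # consulted; the action succeeds only if solve succeeds and
    # solve_not does not.
    if not solve(action):
        return False
    if solve_not(action):
        return False
    return True  # downward reflection: base-level execution proceeds
```
      </p>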
      <p>
        Semantics of the proposed approach can be sketched as follows (a full semantic
definition can be found in [
        <xref ref-type="bibr" rid="ref18">16</xref>
        ]). According to [
        <xref ref-type="bibr" rid="ref19">17</xref>
        ], in general terms we understand a
semantics SEM for logic knowledge representation languages/formalisms as a function
which associates a theory/program with a set of sets of atoms, which constitute the
intended meaning. When saying that P is a program, we mean that it is a program/theory
in the (here unspecified) logic language/formalism that one wishes to consider.
      </p>
      <p>We introduce the following restriction on the sets of atoms I that should be
considered for the application of SEM. First, as customary, we only consider sets of atoms I
composed of atoms occurring in the ground version of P. The ground version of
program P is obtained by substituting in all possible ways the variables occurring in P by
constants also occurring in P. In our case, metavariables occurring in an atom must be
substituted by metaconstants, with the following obvious restrictions: a metavariable
occurring in predicate position must be substituted by a metaconstant denoting a
predicate; a metavariable occurring in function position must be substituted by a
metaconstant denoting a function; a metavariable occurring in a position
corresponding to a constant must be substituted by a metaconstant denoting a constant. According
to well-established terminology, we therefore require I ⊆ BP, where BP is the
Herbrand base of P. Then, we pose some more substantial requirements: we restrict SEM
to determine acceptable sets of atoms only3, where I is an acceptable set of atoms iff I
satisfies the axiom schemata:</p>
      <p>A ⊃ solve(↑A)        solve_not(↑A) ⊃ ¬A</p>
      <p>
        So, we obtain the implementation of properties that have been defined via solve and
solve_not rules without modifications to SEM for any formalism at hand. For clarity,
however, one can assume to filter away solve and solve_not atoms from acceptable
sets. In fact, the base version IB of an acceptable set I can be obtained by omitting
from I all atoms of the form solve(↑A) and solve_not(↑A). The procedural semantics and
the specific naming relation that one intends to use remain to be defined, whereas the
above-introduced semantics is independent of the naming mechanism. For approaches
based upon (variants of) Resolution (like, e.g., Prolog and many agent-oriented
languages such as AgentSpeak [
        <xref ref-type="bibr" rid="ref20">18</xref>
        ], GOAL [
        <xref ref-type="bibr" rid="ref21">19</xref>
        ], 3APL [
        <xref ref-type="bibr" rid="ref22">20</xref>
        ] and DALI [
        <xref ref-type="bibr" rid="ref23 ref24">21, 22</xref>
        ])
one can extend the respective proof procedures so as to automatically invoke metarules whenever
applicable, to validate or invalidate the success of A.
      </p>
      <p>
        How should the predicate ethical(C, R, Act′) be defined? Again, rules defining this
predicate can be specified at design time, or they can be learned, or a combination of both
options. In previous works [
        <xref ref-type="bibr" rid="ref25 ref26 ref27">23–25</xref>
        ], a hybrid logic-based approach was proposed for
the ethical evaluation of agents' behavior, with reference to dialogue agents (so-called
'chatbots') but easily extendable to other kinds of agents and applications. The approach
is based on logic programming as a knowledge representation and reasoning language,
and on Inductive Logic Programming (ILP) for learning the rules needed for ethical
evaluation and reasoning, taking as a starting point general ethical guidelines related to a
context; the learning phase starts from a set of annotated cases, but the system is then
able to perform continuous incremental learning.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Checking Agent’s Behavior over Time</title>
      <p>The techniques illustrated in the previous section are “punctual”, in the sense that they
provide context-based mechanisms to allow/disallow agents’ actions. However, it is
necessary to introduce ways to monitor an agent’s behavior in a more extensive way.
In fact, properties that one wants to verify often depend upon which events have been
observed by an agent up to a certain point, and which others are supposed to occur later.
The definition of frameworks such as the one we propose here, for checking an agent's
operation during its 'life' based on its experience and expectations, has not been widely
treated so far in the literature.</p>
      <p>Below we propose in fact a method for checking the agent's behavior during the
agent's activity, based on maintaining information on its past behavior. In particular, we
assume an agent to record what has happened in the past (events perceived, conclusions
reached and actions performed). These records (that we call past events) encode
relevant aspects of an agent's experience: possibly augmented with time stamps, they form
the (subjective) history of the agent's activity. The set of past events evolves in time, and
can be managed and kept up to date. Past events constitute the ground upon which an
agent can check its own behavior with respect to what has happened in the external
environment (more precisely, with respect to what the agent has perceived about what has
happened). To perform such checks, we introduce a new kind of constraints, that we call
Evolutionary Expressions, which define properties that should hold and countermeasures
that should be taken if they are violated.
3 Modulo bijection: i.e., SEM can be allowed to produce sets of atoms which are in one-to-one
correspondence with acceptable sets of atoms.</p>
      <sec id="sec-3-1">
        <title>A-ILTL</title>
        <p>For defining properties that are supposed to be respected by an evolving system, a
well-established approach is that of Temporal Logic, and in particular of Linear-time
Temporal Logics (LTL). These logics evaluate each formula with respect to a vertex-labeled
infinite path (or "state sequence") s0 s1 … where each vertex si in the path corresponds
to a point in time (or "time instant" or "state"). In what follows, we use the standard
notation for the best-known LTL operators.</p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref12">10</xref>
          ] we formally introduced an extension of LTL based on intervals, called
A-ILTL for 'Agent-Oriented Interval LTL'. A-ILTL is useful because the underlying
discrete linear model of time and the complexity of the logic remain unchanged with
respect to LTL. This simple formulation can be efficiently implemented, and is
sufficient for expressing and checking a number of interesting properties of agent systems.
The formal syntax and semantics of the A-ILTL operators (also called below "Interval
Operators") are fully defined in [
          <xref ref-type="bibr" rid="ref12">10</xref>
          ]. Some of the A-ILTL operators are the following,
where φ is an expression in an underlying agent-oriented language L, and m, n are
positive integers used to (optionally) specify the interval where the formula must
hold; if the interval is not specified, then the meaning is the same as in LTL.
Fm,n (eventually (or "finally") in a time interval): Fm,n φ states that φ has to hold
at some time on the path from state sm to state sn.
        </p>
        <p>Gm,n (always in a time interval): Gm,n φ states that φ should become true at most at state
sm and then hold at least until state sn. This can be customized into Gm, bounded always,
where φ should become true at most at state sm.</p>
        <p>Nm,n (never in a time interval): Nm,n φ states that φ should not be true in any state
between sm and sn.</p>
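        <p>For illustration only (the formal semantics is given in the cited work), the interval operators can be prototyped over a finite state sequence in a few lines of Python, with states modelled as sets of atoms that hold in them. G is simplified here to 'φ holds in every state of the interval':</p>
        <p>
```python
# Illustrative finite-trace evaluation of the A-ILTL interval operators
# F(m,n), G(m,n), N(m,n); states are sets of atoms that hold in them.
def F(trace, m, n, phi):
    """Eventually: phi holds in some state between s_m and s_n."""
    return any(phi in s for s in trace[m:n + 1])

def G(trace, m, n, phi):
    """Always (simplified): phi holds in every state from s_m to s_n."""
    return all(phi in s for s in trace[m:n + 1])

def N(trace, m, n, phi):
    """Never: phi holds in no state between s_m and s_n."""
    return not F(trace, m, n, phi)

# Hypothetical trace with atoms 'a' and 'b':
trace = [{"a"}, {"a", "b"}, {"b"}, set()]
assert F(trace, 0, 3, "b")   # b eventually holds
assert G(trace, 0, 1, "a")   # a holds in states 0..1
assert N(trace, 2, 3, "a")   # a never holds in states 2..3
```
        </p>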
        <p>In practical use, as seen below, A-ILTL operators allow one to construct useful
run-time constraints.</p>
      </sec>
      <sec id="sec-3-2">
        <title>A-ILTL and Evolutionary Semantics</title>
        <p>
          In this section, we refine A-ILTL so as to operate on a sequence of states that
corresponds to the Evolutionary Semantics [
          <xref ref-type="bibr" rid="ref28">26</xref>
          ]. This is a meta-semantic approach, as it is
independent of the underlying agent-oriented logic language/formalism L. It assumes
that, during execution, the agent can evolve: at each evolution step i the agent's
program (initially P0) may change (e.g., by learning and via interaction
with other agents), with a transformation of Pi into Pi+1, thus producing a Program
Evolution Sequence PE = [P0, …, Pn, …]. The program evolution sequence
implies a corresponding Semantic Evolution Sequence ME = [M0, …, Mn, …], where
Mi is the semantic account of Pi at step i according to the semantics of L. In parallel,
the agent's history H, including the agent's memories as recorded by the agent itself,
evolves as well (for a recent formal approach to memory management in logical agents
cf. [
          <xref ref-type="bibr" rid="ref29 ref30">27, 28</xref>
          ]). The Evolutionary Semantics εAg of Ag is thus the tuple ⟨H, PE, ME⟩,
with n = ∞ (i.e., over a potentially infinite evolution). The snapshot at stage i, indicated
with εiAg, is the tuple ⟨Hi, Pi, Mi⟩.
        </p>
        <p>
          Notice that states in our case are not simply intended as time instants. Rather, they
correspond to stages of the agent's evolution. Time in this setting is considered to be local
to the agent, which, by means of some sort of "internal clock", is able to time-stamp events and
state changes. We borrow from [
          <xref ref-type="bibr" rid="ref31">29</xref>
          ] the following definition of timed state sequence,
which we tailor to our setting.
        </p>
        <p>Definition 1. Let σ be a (finite or infinite) sequence of states, where the i-th state ei,
i ≥ 0, is the semantic snapshot εiAg at stage i of a given agent Ag. Let T be a corresponding
sequence of time instants ti, ti ≥ 0. A timed state sequence for agent Ag is the couple
σAg = (σ, T). Let σi be the i-th state, i ≥ 0, where σi = ⟨ei, ti⟩ = ⟨εiAg, ti⟩.</p>
        <p>We in particular consider timed state sequences which are monotonic, i.e., if ti+1 &gt;
ti then ei+1 ≠ ei. In fact, there is no point in semantically considering a static situation:
as mentioned, a transition from ei to ei+1 occurs when something happens,
externally or internally, that affects the agent.</p>
        <p>
          Then, in the above definition of the A-ILTL operators, it is immediate to let si = σi
(with a refinement, cf. [
          <xref ref-type="bibr" rid="ref12">10</xref>
          ], to make states correspond to time instants).
        </p>
        <p>We need to adapt the interpretation function I of LTL to our setting. In fact, we
intend to employ A-ILTL within agent-oriented languages, where we restrict ourselves
to logic-based languages for which an evolutionary semantics and a notion of logical
consequence can be defined. Thus, given agent-oriented language L at hand, the set
of propositional letters used to define an A-ILTL semantic framework will coincide
with all ground expressions of L (an expression is ground if it contains no variables,
and each expression of L has a possibly infinite number of ground versions). A given
agent program can be taken as standing for its (possibly infinite) ground version, as
it is customarily done in many approaches. Notice that we have to distinguish between
logical consequence in L, which we indicate as ⊨L, and logical consequence in A-ILTL,
indicated above simply as ⊨. However, the correspondence between the two notions
can be quite simply stated by specifying that in each state si the propositional letters
implied by the interpretation function I correspond to the logical consequences of agent
program Pi:
Definition 2. Let L be a logic language. Let ExprL be the set of ground expressions
that can be built from the alphabet of L. Let σAg be a timed state sequence for agent Ag,
and let σi = ⟨εiAg, ti⟩ be the i-th state, with εiAg = ⟨Hi, Pi, Mi⟩. An A-ILTL formula φ
is defined over sequence σAg if, in its interpretation structure M = ⟨N, I⟩, index i ∈ N
refers to σi, which means that Σ = ExprL and I : N ↦ 2^Σ is defined such that, given
p ∈ Σ, p ∈ I(i) iff Pi ⊨L p. Such an interpretation structure will be indicated with
MσAg. We will thus be able to state whether φ holds/does not hold w.r.t. σAg.</p>
        <p>A-ILTL properties will be verified at run time, and thus they can act as constraints
over the agent's behavior4. In an implementation, verification may not occur at every state
(of the given interval). Rather, sometimes properties need to be verified with a certain
frequency, which can be specific to each property. To model a frequency k, we have
introduced a further extension that consists in defining subsequences of the sequence
of all states: if Op is any of the operators introduced in A-ILTL and k &gt; 1, Opk is a
semantic variation of Op where the sequence of states σAg of the given agent is replaced
by the subsequence s0, sk1, sk2, … where for each kr, r ≥ 1, kr mod k = 0, i.e.,
kr = g·k for some g ≥ 1.</p>
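        <p>The effect of the frequency k can be sketched as follows (an illustrative Python fragment with hypothetical atom names). Note that checking only the subsequence may accept a trace that a check at every state would reject, which is the price of the reduced overhead:</p>
        <p>
```python
# Sketch of the frequency extension Op^k: the check is performed only on
# the subsequence of states whose index is a multiple of k (including s_0).
def subsequence(trace, k):
    """States s_0, s_k, s_2k, ... actually inspected when checking Op^k."""
    return [s for i, s in enumerate(trace) if i % k == 0]

def G_k(trace, k, phi):
    """'Always' checked at frequency k instead of at every state."""
    return all(phi in s for s in subsequence(trace, k))

trace = [{"ok"}, set(), {"ok"}, set(), {"ok"}]
assert G_k(trace, 2, "ok")                 # only states 0, 2, 4 checked
assert not all("ok" in s for s in trace)   # a full check would fail
```
        </p>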
        <p>The A-ILTL formulas associated to an agent to establish the properties it has to
fulfill can be defined within the agent program, though they constitute an additional,
separate layer. Agent evolution can be considered "satisfactory", or "coherent", if
it obeys all these properties. An "ideal" agent will have a coherent evolution. Instead,
violations will occasionally occur, and actions should be undertaken so as to attempt to
regain coherence for the future.</p>
      </sec>
      <sec id="sec-3-3">
        <title>A-ILTL in Practice</title>
        <p>In this section we adopt the pragmatic syntax used in DALI, where
an A-ILTL formula is represented as OP(m, n, k) φ. The integers m, n define the time
interval where (or since when, if n is omitted) expression OP φ is required to hold, and
k (optional) is the frequency for checking whether the expression actually holds. E.g.,
EVENTUALLY(m, n, k) φ states that φ should become true at some point between
time instants m and n. In rule-based logic programming languages like DALI, we
restrict φ to be a conjunction of literals. In pragmatic A-ILTL formulas, φ must be ground
when the formula is checked. Variables occurring in an A-ILTL formula must be
instantiated prior to the check via a context χ: so, we have contextual A-ILTL formulas of the
form OP(m, n, k) φ :: χ. For the evaluation of φ and χ, we rely upon the procedural
semantics of the 'host' language.</p>
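        <p>As an illustration, a contextual formula of the form EVENTUALLY(m, n, k) φ :: χ can be sketched as follows, assuming (as a simplification of the host-language machinery) that the context χ simply provides bindings used to ground φ before the check; all names are hypothetical:</p>
        <p>
```python
# Sketch of checking a contextual formula EVENTUALLY(m,n,k) phi :: chi.
# The context supplies variable bindings used to ground phi first.
def eventually(trace, m, n, phi):
    """phi must hold in some state between instants m and n."""
    return any(phi in s for s in trace[m:n + 1])

def check_contextual(trace, m, n, phi_template, context):
    phi = phi_template.format(**context)  # ground phi via the context
    return eventually(trace, m, n, phi)

# States as sets of ground atoms; 'delivered' and 'r1' are hypothetical.
trace = [set(), {"delivered(r1)"}, set()]
assert check_contextual(trace, 0, 2, "delivered({res})", {"res": "r1"})
```
        </p>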
        <p>
          In the following, a contextual A-ILTL formula will implicitly stand for the ground
A-ILTL formula obtained by evaluating the context. In [
          <xref ref-type="bibr" rid="ref12">10</xref>
          ] we have specified how
to operationally check whether such a formula holds. The following formulation deals
with monitoring a condition and performing, if necessary, suitable corrections to the agent's
belief state or behavior.
        </p>
        <p>Definition 3. An evolutionary A-ILTL rule (or "constraint") is of the form (where
M, N, K can be either variables or constants)</p>
        <p>ε : OP(M, N, K) φ :: χ ::: τ, ξ :::: ρ
where: (i) ε is an (optional) event sequence, to be satisfied by the agent's history for the
rule to be checked; (ii) OP(M, N, K) φ :: χ is a contextual A-ILTL formula, called the
monitoring condition, that may involve the evaluation of observed events, and possibly
of the agent's internal conditions; (iii) τ is a sequence of events (optional) that are expected
to happen in the future; (iv) ξ is a sequence of events (optional) that are expected not to
happen in the future; (v) ρ (optional) is the active part of the rule, as it specifies the
actions of repair/improvement to be performed on the agent's state and functioning.
4 By abuse of notation we will indifferently talk about A-ILTL rules, expressions, or constraints.</p>
        <p>All past events are assumed to be stored in ground form. All variables occurring
in evolutionary A-ILTL expressions are implicitly universally quantified, in the style
of Prolog-like logic languages. To define partially known sequences of any length, we
admit for event sequences the syntax of regular expressions, so as to specify
irrelevant/unknown events and repetitions. Whenever the monitoring condition
(automatically checked at frequency K) is violated (i.e., it does not hold in the agent’s present state),
the active part is executed.</p>
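        <p>The operational reading just given can be sketched as follows (a hypothetical Python illustration, not the DALI implementation; the event encoding, the class name, and the state dictionary are our own): the event-sequence precondition is matched against the agent’s history via a regular expression, the monitoring condition is then evaluated on the current state, and the repair action runs upon violation.</p>

```python
import re

# Illustrative sketch of an evolutionary A-ILTL rule with a repair part.
# The agent's history is a sequence of event tokens; the precondition is a
# regular expression over it (so repetitions and irrelevant events can be
# expressed); the monitoring condition is a boolean function of the agent's
# state; the repair is an action executed on violation.

class EvolutionaryRule:
    def __init__(self, tau_regex, monitoring_cond, repair):
        self.tau = re.compile(tau_regex)
        self.cond = monitoring_cond   # must hold in every checked state
        self.repair = repair

    def check(self, history, state):
        """Check the rule; returns True if no violation (or rule not applicable)."""
        if not self.tau.search(" ".join(history)):
            return True               # precondition not met: nothing to check
        if self.cond(state):
            return True               # monitoring condition holds
        self.repair(state)            # violation: run the active part
        return False

# One or more past supply events, possibly followed by other events.
rule = EvolutionaryRule(
    tau_regex=r"(supply)(\s+\w+)*",
    monitoring_cond=lambda s: s["quantity"] >= s["threshold"],
    repair=lambda s: s.update(blocked=True),
)

state = {"quantity": 2, "threshold": 5, "blocked": False}
rule.check(["supply", "consume", "consume"], state)
print(state["blocked"])   # True: further consumption gets blocked as repair
```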
        <p>A sample evolutionary A-ILTL rule is the following (where N stands for the operator
“never”):</p>
        <p>supplyP +(r; s) : N (quantity (r; V ); V &lt; th) ::: consumeA+(r; Q)</p>
        <p>The expression specifies that, after a supply of resource r in a certain unknown
quantity (supplyP +(r; s) stands for a sequence of supply actions that occurred in the
past, each one with a certain quantity s, recorded as past events, postfix P ), the agent
is expected to consume quantities of the resource itself (postfix A indicates actions).
The expression states in particular a constraint requiring, in the φ part, that the
available quantity of resource r must remain above a certain threshold th. Such an expression
is supposed to be verified at run-time whenever new events are perceived. A violation
may occur if in some state the A-ILTL formula does not hold, i.e., in the example, if
the available quantity V of resource r runs too low. This constraint is very significant
from an ethical point of view: in fact, a very common unethical behavior concerns
the improper use/waste of limited resources; think, for instance, of the excessive and/or
improper use of environmental resources.</p>
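        <p>The resource constraint can be emulated as follows (an illustrative Python sketch under our own event encoding, not the paper’s formalism): each perceived supply/consume event updates the available quantity, and the “never” condition, that the quantity must not drop below the threshold th, is re-checked after every event, as in the run-time verification setting described above.</p>

```python
# Illustrative sketch of run-time checking of the resource constraint:
# NEVER quantity(r, V) with V below the threshold th. The check is redone
# whenever a new supply/consume event is perceived.

def run(events, th):
    quantity = 0
    violations = []
    for i, (kind, amount) in enumerate(events):
        quantity += amount if kind == "supply" else -amount
        # the 'never' condition is violated as soon as quantity drops below th
        if quantity - th >= 0:
            continue
        violations.append(i)
    return quantity, violations

events = [("supply", 10), ("consume", 3), ("consume", 4), ("consume", 2)]
quantity, violations = run(events, th=5)
print(quantity)    # 1
print(violations)  # [2, 3]: from the third event on, the threshold is crossed
```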
        <p>The variation listed below states that no more consumption can take place if the
available quantity of resource r is scarce; thus, in this case, a repair measure is
specified.</p>
        <p>supplyP +(r; s) : N (quantity (r; V ); V &lt; th) ::: consumeA+(r; Q) ÷ block (consumeA(r; Q))</p>
        <p>We might instead opt for another (softer) formulation, which forces the agent to limit
consumption to small quantities (say, less than quantity q) if the threshold is being approached
(say, when the remaining quantity is th + f , for some f ):</p>
        <p>supplyP +(r; s) : N (quantity (r; V ); V &lt; th + f ) ::: consumeA+(r; Q) ÷ allow (consumeA(r; Q); Q &lt; q)</p>
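        <p>The difference between the hard repair (blocking consumption outright) and the softer one (allowing only small consumptions near the threshold) can be sketched as a simple admission policy; the function name admit, the policy labels, and the integer quantities are our own illustrative assumptions.</p>

```python
# Illustrative sketch of the two repair policies for the resource example:
# a hard policy blocks any consumption that would drop the quantity below
# the threshold th, while a soft policy still allows consumptions strictly
# smaller than q once the remaining quantity approaches th (drops below th + f).

def admit(policy, quantity, amount, th, f=0, q=0):
    """Decide whether a consume action of integer size `amount` is admitted."""
    if policy == "block":
        return quantity - amount >= th        # never go below th
    if policy == "allow_small":
        if quantity >= th + f:
            return True                       # far from the threshold: free
        return q >= amount + 1                # near it: only amounts strictly smaller than q
    raise ValueError(policy)

print(admit("block", quantity=6, amount=2, th=5))                    # False
print(admit("allow_small", quantity=6, amount=1, th=5, f=3, q=3))    # True
print(admit("allow_small", quantity=6, amount=4, th=5, f=3, q=3))    # False
```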
        <p>The above example demonstrates that the proposed approach to dynamic
verification is indeed needed: one can easily realize that static verification is not suitable for
the verification task at hand. In fact, one cannot in general provide a static checker
with every possible input configuration, i.e., every sequence of performed ’supply’ and
’consume’ actions, as the number of potential configurations is unbounded. As an
alternative, one might provide total supply and consumption figures: however, one would just
draw the rather pleonastic conclusion that the desired property holds whenever supply
is sufficiently generous and consumption prudentially limited. Instead, in the proposed
approach we are able to verify the target property “on the fly”, whatever the sequence
of performed actions and the quantities involved. Moreover, we are also able to try to
repair an unwanted situation and regain a satisfactory state of affairs.</p>
        <p>Below we provide another example that, though simple, is in our opinion significant,
as it is representative of many others. Namely, we assume that an agent manages a FIFO
queue, thus accepting operations for pushing and popping elements onto/from the queue.
The example thus models in an abstract way the very general and widely used
architecture in which an agent provides a service to a number of ’consumers’. We establish
the constraint, represented below in our formalism, that the queue must not contain any
duplicated elements e1 and e2. From an ethical point of view, this means that customers
cannot reiterate a request for service if their previous one has not been processed. This
is in order to ensure fairness in the satisfaction of different customers’ requests. The
possible agent’s actions are: pushA(Req; Q), which inserts a generic value Req into the queue,
representing (in some format) a request for service from another agent (each inserted
element is given an index ei); and popA(e; R), which extracts the oldest element from the queue,
i.e., the request to be presently satisfied. The constraint considers an unknown number
of pushing actions that happened in the past (and thus are now recorded as past events) and
can foresee an unknown number of future popping actions.</p>
        <p>pushP +(Req; Q) : N (in queue(e1 ; R1 ); in queue(e2 ; R1 )) ::: popA+(e; R)</p>
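        <p>The queue constraint can be emulated in the same style (again an illustrative Python sketch with our own naming, not the DALI encoding): each push is admitted only if no pending element carries the same request, which enforces the no-duplicates “never” condition at run time while preserving FIFO service.</p>

```python
from collections import deque

# Illustrative sketch of the FIFO-queue constraint: the queue must never
# contain two elements e1, e2 carrying the same request. A push that would
# introduce a duplicate of a still-pending request is rejected, which
# enforces the 'never' condition at run time.

class FairQueue:
    def __init__(self):
        self.queue = deque()          # pending (index, request) pairs
        self.next_index = 0

    def push(self, request):
        """Admit the request only if it is not already pending."""
        if any(req == request for _, req in self.queue):
            return False              # duplicate: constraint would be violated
        self.queue.append((self.next_index, request))
        self.next_index += 1
        return True

    def pop(self):
        """Extract the oldest pending request (FIFO order)."""
        return self.queue.popleft()

q = FairQueue()
print(q.push("print job A"))   # True
print(q.push("print job A"))   # False: same request still pending
q.pop()
print(q.push("print job A"))   # True again, once the first was served
```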
      </sec>
      <sec id="sec-3-4">
        <title>Experiments</title>
        <p>
          We implemented, and have been experimenting with, the proposed approach within the
DALI multi-agent system [
          <xref ref-type="bibr" rid="ref32">30</xref>
          ], by exploiting the internal event feature, which allows
automated execution at a (predefined or user-defined) frequency. In this section we present
basic experiments, aimed at establishing the effectiveness of the approach not w.r.t.
competitors, which basically do not exist, but w.r.t. a corresponding solution developed in
pure Prolog5. DALI is in fact an agent-oriented extension of Prolog whose interpreter
is implemented in Prolog itself. The experiments have been performed on the queue
constraint illustrated above. We did not consider the frequency of constraint checking,
which could not be implemented in an acceptably simple way in Prolog.
        </p>
        <p>The instance size (the number of elements to push and pop on the queue) can be
established by the user when starting a test session. The items to push/pop are, in the
experiments, randomly generated numbers. In Figure 1 and Figure 2 we show the
execution time of the two solutions as the instance size grows. In Figure 3 we show
the percentage difference between the execution times.</p>
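        <p>The measurement methodology can be reproduced with a simple harness like the one below (our own sketch, not the original test code): it pushes and pops N randomly generated numbers through an implementation of the queue constraint and records the elapsed time for growing instance sizes.</p>

```python
import random
import time

# Illustrative benchmarking harness in the spirit of the experiments:
# push and pop N randomly generated numbers through an implementation of
# the no-duplicates queue constraint, recording the elapsed time per run.

def run_instance(n, seed=0):
    rng = random.Random(seed)
    queue, seen = [], set()
    start = time.perf_counter()
    for _ in range(n):
        item = rng.randrange(10 * n)
        if item not in seen:          # no-duplicates constraint check
            queue.append(item)
            seen.add(item)
    while queue:
        seen.discard(queue.pop(0))    # FIFO pop
    return time.perf_counter() - start

for size in (100, 200, 500):
    elapsed = run_instance(size)
    print(size, round(elapsed, 4))
```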
        <p>All figures refer to a dataset of up to 500 elements to push and pop. This has been
sufficient to identify an initial “unstable” stage and then a trend that further consolidates
as the instance size grows.
5 We thank former Ph.D. student Dr. Alessio Paolucci, who performed the
experiments.</p>
        <p>Fig. 1. x axis: instance size; y axis: execution time; blue bars: pure Prolog, green bars: DALI</p>
        <p>What we can see is that, when the number of events considered is small, the
two solutions are more or less equivalent, the Prolog one being slightly better as it involves
no overhead (while the DALI implementation itself and the DALI events management
necessarily involve some). But, as soon as the instance size grows, the DALI solution
becomes better and better in a very significant way, despite the overhead of the DALI
implementation6. We can thus conclude that the new constructs that we propose are
not only expressive, and thus useful for expressing problem features in a compact and
declarative way, but also improve the efficiency, and thus the effectiveness, of solutions.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Related Work and Concluding Remarks</title>
      <p>In this paper we have extended past work so as to devise a toolkit for run-time
self-checking of logic-based agents. The proposed toolkit is able to detect and correct
behavioral anomalies by using special meta-rules, and via dynamic constraints that are
also able to consider partially specified sequences of events that have happened, or that are
expected to happen. The experiments, performed in the DALI language, have shown
that the proposed approach is computationally affordable.</p>
      <p>
        There are relationships between our approach and event-calculus formulations [
        <xref ref-type="bibr" rid="ref33">31</xref>
        ],
e.g., the ones presented in [
        <xref ref-type="bibr" rid="ref34">32</xref>
        ] and [
        <xref ref-type="bibr" rid="ref35">33</xref>
        ], where however the temporal aspects and the
correction of violations are not present. Deontic logic has been used for building
well-behaved agents (cf., e.g., [
        <xref ref-type="bibr" rid="ref36">34</xref>
        ]). However, expressive deontic logics are undecidable7.
Therefore, although our approach cannot compete in expressivity with deontic logics,
it can be usefully exploited in practical applications.
      </p>
      <p>Future work includes making self-checking constraints adaptable to changing
conditions, and devising a useful integration and synergy with declarative a-priori
verification techniques.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bordini</surname>
            ,
            <given-names>R.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Braubach</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dastani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fallah-Seghrouchni</surname>
            ,
            <given-names>A.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gómez-Sanz</surname>
            ,
            <given-names>J.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leite</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>O'Hare</surname>
            ,
            <given-names>G.M.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pokahr</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ricci</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A survey of programming languages and platforms for multi-agent systems</article-title>
          .
          <source>Informatica (Slovenia)</source>
          <volume>30</volume>
          (
          <issue>1</issue>
          ) (
          <year>2006</year>
          )
          <fpage>33</fpage>
          -
          <lpage>44</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Rao</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Georgeff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Modeling rational agents within a BDI-architecture</article-title>
          .
          <source>In: Proc. of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR'91)</source>
          . Morgan Kaufmann (
          <year>1991</year>
          )
          <fpage>473</fpage>
          -
          <lpage>484</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Kouvaros</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lomuscio</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Verifying fault-tolerance in parameterised multi-agent systems</article-title>
          . In Sierra, C., ed.
          <source>: Proceedings of the Twenty-Sixth Intl. Joint Conference on Artificial Intelligence, IJCAI2017</source>
          . ijcai.
          <source>org</source>
          (
          <year>2017</year>
          )
          <fpage>288</fpage>
          -
          <lpage>294</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Shapiro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lespérance</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Levesque</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>The cognitive agents specification language and verification environment</article-title>
          .
          <source>In: AAMAS '02: First International Joint Conference on Autonomous Agents and Multiagent Systems, Proceedings</source>
          (
          <year>2002</year>
          )
          <fpage>19</fpage>
          -
          <lpage>26</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <article-title>6 The reader wishing to reproduce this experiment can refer to us to obtain the code to run</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <article-title>7 We acknowledge Dr. Abeer Dyoub for the thorough investigation of the applications of deontic logic to build ethical agents during the development of her</article-title>
          <source>Ph.D. Thesis</source>
          [
          <volume>35</volume>
          ].
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          5.
          <string-name>
            <surname>Tørresen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plessl</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yao</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>Self-aware and self-expressive systems</article-title>
          .
          <source>IEEE Computer 48(7)</source>
          (
          <year>2015</year>
          )
          <fpage>18</fpage>
          -
          <lpage>20</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          6.
          <string-name>
            <surname>Amir</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chaudhri</surname>
            ,
            <given-names>V.K.</given-names>
          </string-name>
          :
          <source>Report on DARPA workshop on self aware computer systems</source>
          .
          <source>Technical Report</source>
          , SRI International Menlo Park United States (
          <year>2007</year>
          ) Full Text : http://www.dtic.mil/dtic/tr/fulltext/u2/1002393.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          7.
          <string-name>
            <surname>De Giacomo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iocchi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Favorito</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patrizi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Foundations for restraining bolts: Reinforcement learning with LTLf/LDLf restraining specifications</article-title>
          .
          <source>In: Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling</source>
          , ICAPS
          <year>2018</year>
          , AAAI Press (
          <year>2019</year>
          )
          <fpage>128</fpage>
          -
          <lpage>136</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          8.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gasperis</surname>
            ,
            <given-names>G.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pitoni</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Trustworthiness and safety for intelligent ethical logical agents via interval temporal logic and runtime self-checking</article-title>
          .
          <source>In: 2018 AAAI Spring Symposia</source>
          , Stanford University, AAAI Press (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          9.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dell'Acqua</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pereira</surname>
            ,
            <given-names>L.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tocchio</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Ensuring agent properties under arbitrary sequences of incoming events</article-title>
          .
          <source>In: Informal Proceedings of 17th RCRA Intl</source>
          .
          <article-title>Workshop on Experimental evaluation of algorithms for solving problems with combinatorial explosion (</article-title>
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          10.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Self-checking logical agents</article-title>
          .
          <source>In: Proceedings of LA-NMR 2012. Volume 911, CEUR Workshop Proceedings (CEUR-WS.org)</source>
          , Invited paper (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          11.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pitoni</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Reflection and introspection for humanized intelligent agents</article-title>
          .
          <source>In: Proceedings of the fourth Workshop on Bridging the Gap between Human and Automated Reasoning, co-located with the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence (IJCAI-ECAI</source>
          <year>2018</year>
          ).
          <source>Also appeared in: Proceedings of the 33rd Italian Conference on Computational Logic CILC2018, CEUR Workshop Proceedings</source>
          , volume
          <volume>2214</volume>
          , CEUR-WS.org, (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          12.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Self-checking logical agents</article-title>
          .
          <source>In: AAMAS'13</source>
          , International Conference on Autonomous Agents and Multi-Agent Systems
          , Proceedings. IFAAMAS (
          <year>2013</year>
          )
          <fpage>1329</fpage>
          -
          <lpage>1330</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          13.
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Subrahmanian</surname>
            ,
            <given-names>V.S.</given-names>
          </string-name>
          :
          <article-title>Meta-languages, reflection principles, and self-reference</article-title>
          .
          <source>In: Handbook of Logic in Artificial Intelligence and Logic Programming</source>
          , Volume 2, Deduction Methodologies
          . Oxford University Press (
          <year>1994</year>
          )
          <fpage>323</fpage>
          -
          <lpage>358</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          14.
          <string-name>
            <surname>Barklund</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dell'Acqua</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lanzarone</surname>
            ,
            <given-names>G.A.</given-names>
          </string-name>
          :
          <article-title>Semantical properties of encodings in logic programming</article-title>
          . In Lloyd, J.W., ed.:
          <article-title>Logic Programming</article-title>
          ,
          <source>Proceedings of the 1995 International Symposium</source>
          . MIT Press (
          <year>1995</year>
          )
          <fpage>288</fpage>
          -
          <lpage>302</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          15.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Meta-reasoning: a survey</article-title>
          .
          <source>In: Computational Logic: Logic Programming and Beyond, Essays in Honour of Robert A. Kowalski, Part II. Lecture Notes in Computer Science 2408</source>
          . Springer (
          <year>2002</year>
          )
          <fpage>253</fpage>
          -
          <lpage>288</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          16.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Formisano</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Augmenting knowledge representation and reasoning languages with customizable metalogic features</article-title>
          .
          <source>In: Proceedings of the 34th Italian Conference on Computational Logic</source>
          , Trieste, Italy, June 19-21,
          <year>2019</year>
          . Volume 2396 of CEUR Workshop Proceedings., CEUR-WS.org (
          <year>2019</year>
          )
          <fpage>14</fpage>
          -
          <lpage>29</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          17.
          <string-name>
            <surname>Dix</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>A classification theory of semantics of normal logic programs: I. Strong properties</article-title>
          .
          <source>Fundam. Inform</source>
          .
          <volume>22</volume>
          (
          <issue>3</issue>
          ) (
          <year>1995</year>
          )
          <fpage>227</fpage>
          -
          <lpage>255</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          18.
          <string-name>
            <surname>Rao</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          : AgentSpeak(L):
          <article-title>BDI agents speak out in a logical computable language</article-title>
          .
          <source>In: Agents Breaking Away, 7th European Works. on Modelling Autonomous Agents in a Multi-Agent World, Proceedings. Lecture Notes in Computer Science 1038</source>
          . Springer (
          <year>1996</year>
          )
          <fpage>42</fpage>
          -
          <lpage>55</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          19.
          <string-name>
            <surname>Hindriks</surname>
            ,
            <given-names>K.V.</given-names>
          </string-name>
          :
          <article-title>Programming rational agents in goal</article-title>
          .
          <source>In: Multi-Agent Programming</source>
          .
          <source>Springer US</source>
          (
          <year>2009</year>
          )
          <fpage>119</fpage>
          -
          <lpage>157</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          20.
          <string-name>
            <surname>Dastani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van Riemsdijk</surname>
            ,
            <given-names>M.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meyer</surname>
            ,
            <given-names>J.J.C.</given-names>
          </string-name>
          :
          <article-title>Programming multi-agent systems in 3APL</article-title>
          .
          <source>In: Multi-Agent Programming: Languages, Platforms and Applications</source>
          .
          <source>Multiagent Systems, Artificial Societies, and Simulated Organizations</source>
          , Volume
          <volume>15</volume>
          . Springer (
          <year>2005</year>
          )
          <fpage>39</fpage>
          -
          <lpage>67</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          21.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tocchio</surname>
            ,
            <given-names>A.:</given-names>
          </string-name>
          <article-title>A logic programming language for multi-agent systems</article-title>
          .
          <source>In: Logics in Artificial Intelligence</source>
          , European Conference,
          <source>JELIA 2002, Proceedings. Lecture Notes in Computer Science 2424</source>
          . Springer (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          22.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tocchio</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The DALI logic programming agent-oriented language</article-title>
          .
          <source>In: Logics in Artificial Intelligence, 9th European Conference, JELIA 2004, Proceedings. Volume 3229 of Lecture Notes in Computer Science</source>
          . Springer (
          <year>2004</year>
          )
          <fpage>685</fpage>
          -
          <lpage>688</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          23.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lisi</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>Learning Answer Set Programming Rules for Ethical Machines</article-title>
          .
          <source>In: Proceedings of the Thirty Fourth Italian Conference on Computational Logic CILC2019</source>
          ,
          <article-title>CEUR-WS.org</article-title>
          , Volume
          <volume>2396</volume>
          (
          <year>2019</year>
          )
          <fpage>300</fpage>
          -
          <lpage>315</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          24.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lisi</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>Towards an ILP Application in Machine Ethics</article-title>
          .
          <source>In: Proceedings of the 29th International Conference on Inductive Logic Programming - ILP2019, Lecture Notes in Computer Science 11770</source>
          . Springer (
          <year>2019</year>
          )
          <fpage>26</fpage>
          -
          <lpage>35</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          25.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lisi</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>Towards ethical machines via logic programming</article-title>
          .
          <source>In: Proceedings 35th International Conference on Logic Programming (Technical Communications)</source>
          ,
          <source>ICLP 2019 Technical Communications. Volume 306 of EPTCS</source>
          , (
          <year>2019</year>
          )
          <fpage>333</fpage>
          -
          <lpage>339</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          26.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tocchio</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>About declarative semantics of logic-based agent languages</article-title>
          .
          <source>In: Declarative Agent Languages and Technologies III, Third International Workshop, DALT 2005, Selected and Revised Papers. Lecture Notes in Computer Science 3904</source>
          . Springer (
          <year>2006</year>
          )
          <fpage>106</fpage>
          -
          <lpage>123</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          27.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pitoni</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Reasoning about memory management in resource-bounded agents</article-title>
          .
          <source>In: Proceedings of the 34th Italian Conference on Computational Logic. Volume 2396 of CEUR Workshop Proceedings</source>
          , CEUR-WS.org (
          <year>2019</year>
          )
          <fpage>217</fpage>
          -
          <lpage>228</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          28.
          <string-name>
            <surname>Pitoni</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>A temporal module for logical frameworks</article-title>
          .
          <source>In: Proceedings 35th International Conference on Logic Programming (Technical Communications)</source>
          ,
          <source>ICLP 2019 Technical Communications. Volume 306 of EPTCS</source>
          , (
          <year>2019</year>
          )
          <fpage>340</fpage>
          -
          <lpage>346</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          29.
          <string-name>
            <surname>Henzinger</surname>
            ,
            <given-names>T.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manna</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pnueli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Timed transition systems</article-title>
          .
          <source>In: Real-Time: Theory in Practice, REX Workshop, Lecture Notes in Computer Science 600</source>
          . Springer (
          <year>1992</year>
          )
          <fpage>226</fpage>
          -
          <lpage>251</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          30.
          <string-name>
            <surname>Costantini</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Gasperis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pitoni</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salutari</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>DALI: A multi agent system framework for the web, cognitive robotic and complex event processing</article-title>
          .
          <source>In: Joint Proceedings of the 18th Italian Conference on Theoretical Computer Science and the 32nd Italian Conference on Computational Logic co-located with the 2017 IEEE International Workshop on Measurements and Networking (2017 IEEE M&amp;N). Volume 1949 of CEUR Workshop Proceedings</source>
          , CEUR-WS.org
          (
          <year>2017</year>
          )
          <fpage>286</fpage>
          -
          <lpage>300</lpage>
          Download at https://github.com/AAAI-DISIM-UnivAQ/DALI.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          31.
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sergot</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>A logic-based calculus of events</article-title>
          .
          <source>New Generation Computing</source>
          <volume>4</volume>
          (
          <year>1986</year>
          )
          <fpage>67</fpage>
          -
          <lpage>95</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          32.
          <string-name>
            <surname>Berreby</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bourgne</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ganascia</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>A declarative modular framework for representing and applying ethical principles</article-title>
          .
          <source>In: AAMAS'17, 16th Conference on Autonomous Agents and MultiAgent Systems</source>
          , Proceedings. ACM Press (
          <year>2017</year>
          )
          <fpage>96</fpage>
          -
          <lpage>104</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          33.
          <string-name>
            <surname>Tufis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ganascia</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>A normative extension for the BDI agent model</article-title>
          .
          <source>In: Proceedings of the 17th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines</source>
          . (
          <year>2014</year>
          )
          <fpage>691</fpage>
          -
          <lpage>702</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          34.
          <string-name>
            <surname>Bringsjord</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arkoudas</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bello</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Toward a general logicist methodology for engineering ethically correct robots</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          <volume>21</volume>
          (
          <issue>4</issue>
          ) (
          <year>2006</year>
          )
          <fpage>38</fpage>
          -
          <lpage>44</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          35.
          <string-name>
            <surname>Dyoub</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Multi-Agent Systems in Computational Logic</article-title>
          .
          <source>PhD thesis</source>
          , Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila. Supervisor: Prof. Stefania Costantini (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>