<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Extraction of Defeasible Proofs as Explanations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Luca Pasetto</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matteo Cristani</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guido Governatori</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Olivieri</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Edoardo Zorzi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Luxembourg</institution>
          ,
          <addr-line>Esch-sur-Alzette</addr-line>
          ,
          <country country="LU">Luxembourg</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, University of Verona</institution>
          ,
          <addr-line>Verona</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Houdini is a Defeasible Deontic Logic reasoner that has been recently developed in Java. The algorithm employed in Houdini follows the proof conditions of the logic to conclude propositional and deontic literals, and is an efficient solution that provides the full extension of a theory. This computation is performed in a complete, forward-chaining way. Effectiveness is a fundamental property of the adopted approach, but we are also interested in providing an explicit reference to the reasoning that is employed to reach a conclusion. This reasoning is a proof that corresponds to an explanation for that conclusion, and such a proof is less natural to identify in a non-monotonic framework like Defeasible Logic than it would be in a classical one. Depending on the formalism and on the algorithm, the process of reconstructing a proof from a derived conclusion can be cumbersome. Intuitively, a proof consists of a support argument in favour of a literal to be concluded. However, it is also necessary to show that this argument is strong enough, either because there are no arguments against it, or because those arguments are weaker than it. In this paper, with a slight modification of the algorithm of Houdini, we show that it is possible to extract a proof for a defeasible literal in polynomial time, and that such a proof is minimal in its depth.</p>
      </abstract>
      <kwd-group>
        <kwd>Defeasible Logic</kwd>
        <kwd>Proof extraction</kwd>
        <kwd>Non-monotonic reasoning</kwd>
        <kwd>Explainable AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The problem of reconstructing a proof for a specific conclusion in automated reasoning systems
has been considered by scholars in the past for a large spectrum of logical frameworks, including
non-monotonic ones. However, the investigation has been carried out only to a very limited
extent for the case of Defeasible Logic in [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], where the reconstruction process is based only
on the proof system, and not on an algorithmic method.
      </p>
      <p>
        Based on the algorithmic approach to the computation of the extension of a defeasible theory
of [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the authors of this work, along with others, have built an implementation, the Houdini
system, that also operates on other aspects of the logical framework, including deontic operators
and numeric constraints [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        The aim of this work is to develop a method for providing an explanation for a defeasible
literal that Houdini has computed to be in the extension by extracting a proof for it. It is thus
part of a stream of research carried out by some of the authors of this paper, along with others, on
the topic of explainability in non-monotonic frameworks [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
      </p>
      <p>The concept of explanation that we adopt here is based on and extends the classical notion of
proof. We explain how a given conclusion is obtained by finding a chain from the assumptions
to the conclusion, and moreover including in the proof all the information that is needed to
beat possible counter-arguments. The actual proof system, and consequently the algorithm
and its implementation, does not generate proof trees or other structures that can be used to
explain the derivation, but rather a complex graph that proves all the literals in the extension.
Therefore, the proof needs to be extracted.</p>
      <p>In order to introduce the reader to the conceptual framework briefly mentioned above, we
devise an example, which is informally described here and reconsidered formally in Examples 2
and 3.</p>
      <p>Example 1. Let us consider the following fictional situation, described in natural language, of
a taxonomist seeing a platypus for the first time and debating whether it should be classified as
a mammal or not based on a number of its characteristics. First of all, the taxonomist observes
some of the characteristics of the platypus:
• The taxonomist notices a new animal, that we now know as a platypus;
• The taxonomist observes that the animal has the ability to produce milk to nourish its
young;
• The taxonomist observes that the animal has its body covered in fur, providing insulation
and protection from the environment;
• The taxonomist observes that the animal is warm-blooded;
• The taxonomist observes that the animal lays eggs;
• The taxonomist observes that males of the animal have a pair of venomous spurs;
• The taxonomist observes that the animal has a bill resembling that of a duck.
The taxonomist knows these rules that typically hold for classifying an animal:
• Producing milk is usually a mammalian trait;
• Having a layer of fur or hair is usually a mammalian trait;
• Being warm-blooded is usually a mammalian trait;
• Having a fur-covered body and being warm-blooded imply the ability to regulate the
body temperature;
• Being able to regulate the body temperature is usually a mammalian trait;
• Laying eggs is usually a trait of birds, and not of mammals;
• Producing venom is usually a trait of reptiles, and not of mammals;
• Having a duck-like bill is usually a trait of birds, and not of mammals.</p>
      <p>If the taxonomist considers all of the rules to have the same importance, he will not be able
to make a decision in this case, as the applicable rules lead to the conclusion that the platypus is
both a mammal and a non-mammal. The taxonomist therefore comes up with the following
preferences for the rules:
• Producing milk is more important than laying eggs;
• Having a layer of fur is more important than producing venom;
• Being warm-blooded is more important than having a duck-like bill;
• Being able to regulate the body temperature is more important than producing venom;
• Being able to regulate the body temperature is more important than having a duck-like
bill.</p>
      <p>Based on these preferences, the taxonomist can then decide that the platypus can be classified
as a mammal. The chain of reasoning is the following:</p>
      <p>The taxonomist has observed that the animal has a number of mammalian traits, and
he focuses on three: the ability to produce milk to nourish its young; its body is covered
in fur; and it is warm-blooded. The taxonomist has also observed that the animal has
three non-mammalian traits: it lays eggs; its males have a pair of venomous spurs; and
it has a bill resembling that of a duck. Since each one of the existing non-mammalian
traits is beaten by one of the chosen three mammalian traits, it is possible to conclude
that the animal is plausibly a mammal.</p>
      <p>Example 1 is a case of a derivation obtained by chaining simple steps and by comparing
different arguments. In this paper, we provide a technique for the reconstruction of a proof for a
literal derived in the extension of a Defeasible Logic theory by expanding the representation used
during the computation of the extension and adopting a simple approach of graph exploration.
We then prove that the entire approach can be conducted effectively, in polynomial time on
deterministic machines.</p>
      <p>The rest of the paper is structured as follows. Section 2 provides a brief introduction to
Defeasible Logic; Section 3 describes Houdini and the algorithm used in the system; Section
4 discusses the problem of interest, the algorithm and its computational properties; Section 5
provides reference to relevant literature; and finally Section 6 draws conclusions and identifies
possible future developments.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Defeasible Logic</title>
      <p>
        Defeasible Logic (DL) [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ] is a simple, flexible, and efficient rule-based non-monotonic
formalism. Its strength lies in its constructive proof theory, as DL allows one to draw meaningful
conclusions from a (potentially) conflicting and incomplete knowledge base. In non-monotonic
systems, more accurate conclusions can be obtained when more pieces of information become
available. Throughout the years, many variants of DL have been proposed for the modelling of
different application areas: agents [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ], legal reasoning [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], and business processes [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
in the scope of regulatory compliance.
      </p>
      <p>We start with the language. Let PROP be a set of propositional atoms, and Lab be a set of
arbitrary labels (the names of the rules). The set of literals is Lit = PROP ∪ {¬p | p ∈ PROP}.
We shall use lower-case Roman letters to denote literals, and lower-case Greek letters to denote
rules.</p>
      <p>
        The complement (or opposite) of a literal q is denoted by ∼q. If q is a positive literal p, then ∼q
is ¬p, and if q is a negative literal ¬p, then ∼q is p. The definition of a defeasible theory is the
standard one in DL ([
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]).
      </p>
      <p>Definition 1 (Defeasible theory). A defeasible theory D is a tuple (F, R, &gt;), where F ⊆ Lit
is the set of facts, R the set of rules, and &gt; the superiority relation.</p>
      <p>The set of facts F denotes simple pieces of information that are considered to be always
true, like “Sylvester is a cat”, formally cat(Sylvester). The set of rules R contains three types
of rules: strict rules, defeasible rules, and defeaters. The superiority relation1 &gt; ⊆ R × R is
an irreflexive relation used to solve conflicts in case of potentially contradicting information. The
superiority relation is not an order, as it is not transitive.</p>
      <p>A defeasible theory is finite if F and R are finite. A rule r ∈ R is an expression r : A(r) ˓→
C(r) such that:
1. r ∈ Lab is the unique name of the rule;
2. A(r) ⊆ Lit is the (possibly empty) set of the antecedents2;
3. ˓→ ∈ {→, ⇒, ↝} denotes, resp., strict rules, defeasible rules, and defeaters;
4. C(r) ∈ Lit is the consequent, a single literal.</p>
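      <p>For illustration, the rule format of the definition above can be encoded as a small data structure. The following Python sketch is ours and purely illustrative (Houdini itself is written in Java); all names are hypothetical:</p>

```python
from dataclasses import dataclass

# The three rule kinds of the definition: strict (→), defeasible (⇒), defeater (↝)
STRICT, DEFEASIBLE, DEFEATER = "strict", "defeasible", "defeater"

@dataclass(frozen=True)
class Rule:
    name: str              # r ∈ Lab, the unique label of the rule
    antecedent: frozenset  # A(r) ⊆ Lit, possibly empty
    kind: str              # one of STRICT, DEFEASIBLE, DEFEATER
    consequent: str        # C(r) ∈ Lit, a single literal

def negate(lit: str) -> str:
    """Complement of a literal: ~p for p, and p for ~p (here '~' encodes ¬)."""
    return lit[1:] if lit.startswith("~") else "~" + lit

# "All computer scientists are humans" as a strict rule:
r1 = Rule("r1", frozenset({"CScientist"}), STRICT, "Human")
```

      <p>The frozen dataclass mirrors the fact that a rule is a static object identified by its unique label.</p>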
      <p>A strict rule is a rule in the classical sense: whenever the premises are indisputable, so is the
conclusion. The statement “All computer scientists are humans” is formulated through the strict
rule</p>
      <p>
        r1 : CScientist(X) → Human(X),
since there is no exception to it. As in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], we consider only a propositional version of this logic,
and we do not take function symbols into account. Every expression with variables represents
the finite set of its variable-free instances.
      </p>
      <p>Defeasible rules represent statements that can be defeated by contrary evidence: the statement
“Computer scientists travel to the city of the conference that accepted their paper” is thus
represented with the defeasible rule</p>
      <p>r2 : CScientist, PaperAccepted ⇒ TravelConference</p>
      <p>Defeaters are special rules whose only purpose is to prevent the derivation of the opposite
conclusion; the statement “During pandemic travels might be prohibited” is hence represented
by the defeater</p>
      <p>r3 : Pandemic ↝ ¬TravelConference
In this case, using a defeater is preferable to using a defeasible rule, as we simply want to “block”
the derivation of travelling.</p>
      <p>We use some conventional abbreviations. The set of strict rules is Rs, and the set of strict and
defeasible rules is Rsd. R[q] is the set of rules whose consequent is the literal q. We say that a literal q
appears in D if there exists a rule such that q is either one of its antecedents or the consequent.</p>
      <p>A conclusion of a defeasible theory D is a tagged formula, an expression of the form ±#q,
with q ∈ Lit and # ∈ {Δ, ∂}, with the following meanings:
1Also referred to as priority or preference relation.
2We shall interchangeably use antecedent to refer to A(r).</p>
      <p>• +Δq means that q is strictly proved, i.e., there is a strict proof of q in D;
• −Δq means that q is strictly refuted, i.e., a strict proof of q does not exist in D;
• +∂q means that q is defeasibly proved, i.e., there is a defeasible proof of q in D;
• −∂q means that q is defeasibly refuted, i.e., a defeasible proof of q does not exist in D.
We refer to ±Δq and ±∂q as proof tags. The definition of Δ describes forward chaining of strict
rules, while ∂ represents the non-monotonic part of the logic.</p>
      <p>Given a defeasible theory D, a proof P of length n in D is a finite sequence
P(1), P(2), . . . , P(n) of tagged formulas of the type +Δq, −Δq, +∂q, −∂q, where the proof
conditions defined hereafter hold; P(1..i) denotes the first i steps of P. Proofs are based on
the notions of a rule being applicable or discarded. A rule is applicable when all the elements in
its antecedent have been proved; a rule is discarded if at least one of such elements has been
refuted.</p>
      <sec id="sec-2-1">
        <title>Definition 2 (Applicable &amp; Discarded).</title>
        <p>Given a defeasible theory D = (F, R, &gt;), a rule r ∈ R is
• #-applicable, # ∈ {Δ, ∂}, at P(i + 1) if ∀a ∈ A(r) : +#a ∈ P(1..i);
• #-discarded, # ∈ {Δ, ∂}, at P(i + 1) if ∃a ∈ A(r) : −#a ∈ P(1..i).</p>
        <p>When it is not needed or it is clear from the context, we will omit Δ-applicable, Δ-discarded,
∂-applicable, ∂-discarded, and just say that a rule is applicable or discarded. The proof conditions
hereafter are also standard in DL.
+Δ: If P(i + 1) = +Δq then
(1) q ∈ F, or
(2) ∃r ∈ Rs[q] s.t. r is Δ-applicable.
+∂: If P(i + 1) = +∂q then
(1) +Δq ∈ P(1..i), or
(2.1) −Δ∼q ∈ P(1..i), and
∃r ∈ Rsd[q] s.t.
(2.2) r is ∂-applicable, and
∀s ∈ R[∼q] then either
(2.3.1) s is ∂-discarded, or
(2.3.2) ∃t ∈ R[q] s.t.
(2.3.2.1) t is ∂-applicable and
(2.3.2.2) t &gt; s.</p>
        <p>A literal is strictly proved if (1) it is a fact, or (2) there exists a Δ-applicable strict rule for it.
A literal is defeasibly proved if (1) it has been strictly proved, or (2.1) the opposite has not been
strictly proved and (2.2) there exists a ∂-applicable rule (r, for q) such that every attack (s, for the
opposite ∼q) is either (2.3.1) ∂-discarded or (2.3.2) defeated by a ∂-applicable rule (t, for q).</p>
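        <p>The +∂ condition can be read operationally. The following Python sketch, which is ours and purely illustrative (Houdini itself is implemented in Java), checks the condition for a literal q against already-settled conclusion sets; the dictionary encoding of rules and all function names are our own assumptions:</p>

```python
def negate(lit):
    """Complement of a literal: '~' encodes negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def applicable(rule, plus_d):
    """∂-applicable: every antecedent is already defeasibly proved."""
    return all(a in plus_d for a in rule["body"])

def discarded(rule, minus_d):
    """∂-discarded: some antecedent is already defeasibly refuted."""
    return any(a in minus_d for a in rule["body"])

def plus_partial(q, rules, sup, plus_delta, minus_delta, plus_d, minus_d):
    """One check of the +∂ proof condition for literal q.
    sup is a set of (superior, inferior) rule-name pairs."""
    if q in plus_delta:                                         # clause (1)
        return True
    if negate(q) not in minus_delta:                            # clause (2.1)
        return False
    supports = [r for r in rules
                if r["head"] == q and r["kind"] != "defeater"]  # Rsd[q]
    counters = [t for t in rules if t["head"] == q]             # R[q]
    attacks = [s for s in rules if s["head"] == negate(q)]      # R[~q]
    for r in supports:
        if not applicable(r, plus_d):                           # clause (2.2)
            continue
        if all(discarded(s, minus_d)                            # clause (2.3.1)
               or any(applicable(t, plus_d)                     # clause (2.3.2)
                      and (t["name"], s["name"]) in sup
                      for t in counters)
               for s in attacks):
            return True
    return False
```

        <p>For instance, with rules r1 : a ⇒ p and r2 : b ⇒ ¬p, r1 &gt; r2, and both a and b defeasibly proved, the check succeeds for p: the attack r2 is applicable but defeated by the superior r1.</p>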
        <p>
          Negative proof tags −Δ and −∂ are omitted, as they are obtained by applying the strong
negation principle to their positive counterparts [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. The strong negation of a formula is obtained by moving all negations to an innermost
position in the resulting formula, replacing the positive tags with the respective negative tags, and vice versa.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Definition 4 (Theory equivalence).</title>
        <p>Given two defeasible theories D and D′, they are equivalent if E(D) ≡ E(D′).</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Houdini</title>
      <p>Informally, the extension of a theory is everything that is derived and refuted, and two
theories are equivalent when they have the same extension.</p>
      <p>Definition 3 (Extension). Consider a defeasible theory D. The set of positive
and negative conclusions of D is the extension E(D) = (+Δ, −Δ, +∂, −∂), where ±# = {q | q
appears in D and D ⊢ ±#q}, # ∈ {Δ, ∂}, and we use the symbol ⊢ to indicate that there is a
derivation of q from D.</p>
      <p>We call +Δ, −Δ, +∂, −∂ the extension sets of a theory.</p>
      <p>In this section, we briefly describe the DL reasoner Houdini. While this paper focuses just on
Defeasible Logic, Houdini also works with its deontic version, Defeasible Deontic Logic. At the
current state, Houdini is a web-app based on Java with a single endpoint and a responsive and
user-friendly UI. The choice of programming language fell on Java due to its high maintainability
and portability, as well as the existence of mature web frameworks that are suitable to develop
straightforward applications, that is, easy to understand for nonprofessional people such as
students and researchers with limited knowledge of programming. In particular, Houdini is
open source, available on request, and developed by using Java 11 and Spring Boot 2.7.1.</p>
      <p>Front-end checks with colour-coded error messages guide users and let them correct
ill-formed inputs, forbidden characters, and plain wrong syntax. These controls are responsive
and can fire an error message at the top of the screen with each single inserted character and
mouse click. Input fields are generated dynamically and can be eliminated with the press of a
button, so as to keep the interface clean and devoid of unused space. At the start, users
are presented with three different kinds of empty fields wherein they can insert the elements
of the theory one by one, or an upload button which can be used to load a JSON file with the
whole theory specified. The syntax required for this file is easy to understand and mimics the
structure of a hand-inserted theory.</p>
      <p>As an example, the following rule is syntactically correct and is allowed by the system: “a,
[O]b, x &gt; 3 =&gt;[P] ∼ c". The body is “a, [O]b, x &gt; 3" and so it is made up of three elements: the
literal “a", a proposition marked with the deontic tag O with variable name “b" and parameter
“x" and a logical constraint “x &gt; 3".</p>
      <p>Superiority relations are simply two rule names separated by a “&gt;” character: the left side
indicates the superior rule, whereas the right side indicates the inferior one. A rule name is simply the
character “r" plus a number. For example, “r5 &gt; r1" is a valid superiority relation. Once the
user inserts the data, Houdini parses and validates it, then it computes the extension sets ±#□,
# ∈ {Δ, ∂}, □ ∈ {C, O, P}, and finally it displays these sets in an appropriate result page.</p>
      <p>Reasoning about and computing the extension sets of a Defeasible Deontic Logic theory
requires, first, a clear and precise representation of it. Hence, Houdini expects users to provide
data which complies with a precise syntax that cannot lead to any ambiguity. In particular,
a theory must be provided by following a context-free grammar specification which is then
used by Houdini’s custom parser (generated using ANTLR 4 parse tree listeners) to validate it,
server-side. This is the first operation done on the inputted data.</p>
      <p>If the input is ill-formed, an error is returned. Client-side checks are also performed to avoid
unnecessary parsing, and to deliver quicker feedback. Houdini’s parser not only validates the
user data but also builds, incrementally, as it traverses the tree, the necessary internal data
structures that are later used by the reasoner to compute the extension sets. Some optimisations
are also employed, both on the grammar side, such as grouping and prioritising the most
common expressions (chosen empirically), and on the parser side.</p>
      <p>Moreover, mathematical expressions (which can appear inside a rule head) and logical
expressions (which can appear inside a rule body logical constraint) are transformed into their
respective Reverse Polish Notation (RPN) versions to speed things up later, when such expressions
must be evaluated multiple times with actual arguments, as RPN expressions can be evaluated
easily and with minimal memory requirements.</p>
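      <p>As a rough illustration of why RPN helps here, a single stack suffices for evaluation. The following Python sketch is ours, not Houdini's Java code, and the function name and token encoding are assumptions; it evaluates the constraint "x &gt; 3" from the example rule above:</p>

```python
def eval_rpn(tokens, env):
    """Evaluate an expression in Reverse Polish Notation with one stack.
    Variable tokens are looked up in env; other tokens are numbers."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b,
           ">": lambda a, b: a > b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(env[tok] if tok in env else float(tok))
    return stack.pop()

# The body constraint "x > 3" of the example rule, converted once to RPN:
constraint = ["x", "3", ">"]
```

      <p>Once converted, the same token list can be re-evaluated against many environments with no re-parsing, which is the gain the paper alludes to.</p>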
      <p>The reasoner module, which runs after the parsing phase, implements the core
functionality of Houdini: computing the extension of the given defeasible deontic theory. It
is composed of two sub-modules, the Strict Reasoner and the Defeasible Reasoner, whose behaviour
follows the proof conditions provided above.</p>
      <p>The main algorithm, the reasoner module, acts on Literal and Rule instances which have been
initialised by the parser and which contain, together with a general Theory container instance,
all the necessary information to work with. For example, a Literal instance contains or contains
references to: all the rules wherein it appears as head or in the body, its deontic modality, its
opposite literal, whether at the current moment it has been assigned by the reasoner to an
extension set (±  and ± ) or it is still undecided, and other information. On the other hand, a
Rule instance contains or contains references to: all the literals appearing in its body and head,
extra information about its head literal (deontic mode, parameters, logical expressions, . . . ), and
other information.</p>
      <p>Also, all these classes implement methods called by the reasoner to compute the extension
sets. The decision to strictly pair data with the methods that act on it inside objects
is due to the need to use similar but not identical functions depending on the type of
the literal, its mode, the presence of mathematical expressions in the head of the rule, etc.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Proof Reconstruction in Defeasible Logic</title>
      <p>When the algorithm halts because it has computed the extension of the given defeasible theory
D, for a literal q such that D ⊢ +Δq or D ⊢ +∂q, in order to explain why q has been derived
(either definitely or defeasibly), it is necessary to reconstruct its proof.</p>
      <p>The strict proof tree of a literal q such that D ⊢ +Δq is defined inductively as follows.
Definition 5 (Strict proof tree). Given a defeasible theory D = (F, R, &gt;) and a literal q such
that D ⊢ +Δq, the strict proof tree T(+Δq) of q is defined as follows:
• if D ⊢ +Δq because q ∈ F, the strict proof tree T(+Δq) consists of a node labeled by q;
• if D ⊢ +Δq because there is a rule r ∈ Rs[q] s.t. ∀a ∈ A(r) : D ⊢ +Δa, the strict
proof tree T(+Δq) consists of a node labeled by +Δq with a subtree T(+Δa) for each
a ∈ A(r).</p>
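      <p>The inductive construction of Definition 5 is directly executable. Below is an illustrative Python sketch under our own assumptions (a dictionary encoding of rules, an acyclic strict part, and a precomputed set of strictly proved literals); it is not Houdini's implementation:</p>

```python
def strict_proof_tree(q, facts, rules, plus_delta):
    """Inductive construction of Definition 5: returns the strict proof
    tree of q as a nested (literal, children) pair. Assumes q is strictly
    provable and that the strict part of the theory is acyclic."""
    if q in facts:                         # base case: q ∈ F
        return (q, [])
    for r in rules:                        # find a Δ-applicable strict rule for q
        if (r["kind"] == "strict" and r["head"] == q
                and all(a in plus_delta for a in r["body"])):
            return (q, [strict_proof_tree(a, facts, rules, plus_delta)
                        for a in r["body"]])
    raise ValueError(f"{q} is not strictly provable")
```

      <p>The recursion terminates because each call descends to antecedents that were proved earlier, mirroring the inductive structure of the definition.</p>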
      <p>After this definition, in the rest of the paper we focus on defeasible proofs. For a defeasible
proof of +∂q, we choose to show in the proof only the chains that end up with +∂q,
and not the chains that attack these, but we make sure to include in the proof the information
on how the attacks are beaten. The defeasible proof tree of a literal q such that D ⊢ +∂q is defined
inductively as follows.</p>
      <p>Definition 6 (Defeasible proof tree). Given a defeasible theory D = (F, R, &gt;) and a literal
q such that D ⊢ +∂q, the defeasible proof tree T(+∂q) of q is defined as follows:
• if D ⊢ +∂q because D ⊢ +Δq, the defeasible proof tree T(+∂q) consists of a node
labeled by +∂q with a subtree T(+Δq);
• if D ⊢ +∂q because there is a rule r ∈ Rsd[q] s.t. ∀a ∈ A(r) : D ⊢ +∂a and for every
s ∈ R[∼q] that is activated, there exists a rule t ∈ R[q] s.t. ∀b ∈ A(t) : D ⊢ +∂b and
t &gt; s, the defeasible proof tree T(+∂q) consists of a node labeled by +∂q with a subtree
T(+∂a) for each a ∈ A(r), and for every s ∈ R[∼q] that is activated and defeated by its
corresponding rule t ∈ R[q], a subtree T(+∂b) for each b ∈ A(t).</p>
      <p>In the case that a rule is a hypothesis, that is, a defeasible rule without premises, we show it
in the proof by connecting the concluded literal to a special node that we call the dummy fact.
Since we are working in a non-monotonic context, the algorithm can activate more than one
rule concluding the same literal. Indeed, the algorithm is a forward-chaining one, and if a
rule can be activated, then it will be. For this reason, when we reconstruct the proof of a
derived literal, we might end up with a directed acyclic graph (DAG) instead of a tree.</p>
      <p>The defeasible proof DAG of a literal q such that D ⊢ +∂q is defined inductively as follows.
Definition 7 (Defeasible proof DAG). Given a defeasible theory D = (F, R, &gt;) and a literal
q such that D ⊢ +∂q, the defeasible proof DAG G(+∂q) of q is defined as follows:
• if D ⊢ +∂q because D ⊢ +Δq, the defeasible proof DAG G(+∂q) consists of a node
labeled by +∂q with an edge from G(+Δq);
• if D ⊢ +∂q because there is a rule r ∈ Rsd[q] s.t. ∀a ∈ A(r) : D ⊢ +∂a and for every
s ∈ R[∼q] that is activated, there exists a rule t ∈ R[q] s.t. ∀b ∈ A(t) : D ⊢ +∂b and
t &gt; s, the defeasible proof DAG G(+∂q) consists of a node labeled by +∂q with an edge
from G(+∂a) for each a ∈ A(r), and for every s ∈ R[∼q] that is activated and defeated
by its corresponding rule t ∈ R[q], an edge from G(+∂b) for each b ∈ A(t).</p>
      <p>Also here, in the case of a hypothesis rule, we show it in the proof by connecting the concluded
literal to the special dummy-fact node.</p>
      <p>In general, the reconstruction process described in the definitions above is not trivial to
perform after having computed the extension. Indeed, if a proof has been built by applying
only one rule at each step, as in a direct chain from facts towards a literal that is concluded
directly without attacks against it, then the tree can be easily reconstructed in polynomial time.
However, it is not immediate for more complicated cases.</p>
      <p>Let us exhibit some examples of reconstructed proofs, that will enlighten the approach we
can adopt to solve the issue mentioned above.</p>
      <p>Example 2. Consider the following theory D = (∅, R, &gt;), where R = {r1 : ⇒ a, r2 : a ⇒
b, r3 : b ⇒ c, r4 : d ⇒ ¬a, r5 : c ⇒ e, r6 : ¬a ⇒ ¬b}, and &gt; is empty. The extension of D
is such that +∂ = {a, b, c, e}. Figure 1a shows the corresponding proof trees.
(a) The proof trees reconstructed from the
extension of Example 2.
(b) The proof tree reconstructed for +∂
in Example 3.</p>
      <p>Example 3. Consider the following theory D = (F, R, &gt;), where F = {a, b, c}, R =
{r1 : a ⇒ m, r2 : b ↝ ¬m, r3 : c ⇒ m, r4 : c ⇒ ¬m}, with r1 &gt; r4 and r3 &gt; r2. The
extension of D is such that +∂ = {a, b, c, m}. Figure 1b shows the proof tree for +∂m.</p>
      <p>In order to compute the proof DAG, we need some further information. In particular, for each
derived literal that we want to extract the proof of, we need to know when the literal has been
derived for the first time in that reasoning chain. This information can be used to identify
when a rule has actually been used to derive a literal, and to build the above-mentioned DAG.</p>
      <p>Before going into the details of the algorithm that deals with the annotation needed to
compute the reconstruction, we need to discuss an issue that may appear complicated: cycles. In
a defeasible theory, we may have dependency circularity, namely there might be a derivation that
starts from facts and goes towards the same literal conclusion twice (or even more times). This,
in principle, could cause computational unfeasibility: we may generate a number of derivation
chains that is potentially infinite, and since the cycles that are generated can be combined in
sequences, this can give rise to combinatorial effects, transforming the reconstruction of proofs
into a search problem in the graph of derivation chains.</p>
      <p>This can be remedied by a single technique. For a literal that has been concluded once, we can
ignore it if it is derived again afterwards, as this is not relevant for the existence of a derivation
chain. In fact, we derive the following lemma, whose proof is a straightforward consequence of
the above reasoning and is therefore omitted.</p>
      <p>Lemma 1. Any derivation chain obtained in a forward-chaining computation of the propositional
extension of a defeasible theory contains a cycle-free derivation chain.</p>
      <p>As a consequence of Lemma 1, we can conceive a solution to the marking problem for these
chains, which in turn leads us to the algorithm for computing the proof DAG proposed below.
The idea is to compute the extension as usual, and to also annotate each literal in the extension
sets with a number representing the first appearance of that literal during the computation. We call
this number the derivation depth of the considered literal.</p>
      <p>
        By means of this annotation and a simple, straightforward modification of the original
algorithm implemented in Houdini (and more generally, in any forward-chaining method based
on [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]), we can manage the computation of the proof DAG of Definition 7 in the following
manner. First of all, we compute the extension with the described annotation, yielding an
annotated dependency graph (ADG). This graph is a slight modification of known dependency
graphs [15]. Given a defeasible theory D = (F, R, &gt;), the ADG for D is built as follows:
• For every f ∈ F, build a node and annotate it with derivation depth d = 0;
• For every literal l that is derived by a rule r ∈ R from literals whose nodes are already in
the ADG, create an edge e that connects each a ∈ A(r) to a new node annotated with
derivation depth d = d′ + 1, where d′ = max{d(a) | a ∈ A(r)};
• The edge e of the previous step is marked with the label of the applied rule r.
      </p>
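      <p>The construction above can be sketched in a few lines of Python. This is an illustrative fragment under our own assumptions (a dictionary encoding of rules; attacks and superiority are ignored for brevity), not Houdini's Java implementation:</p>

```python
def build_adg(facts, rules):
    """Forward-chain over the rules, annotating every derived literal with
    its derivation depth (its first appearance), and collecting labelled
    edges from body literals to the head: a naive sketch of the ADG."""
    depth = {f: 0 for f in facts}          # facts get depth 0
    edges = {}                             # (tail, head) -> set of rule labels
    changed = True
    while changed:
        changed = False
        for r in rules:
            body, head = r["body"], r["head"]
            if head in depth or any(a not in depth for a in body):
                continue                   # already derived, or not yet derivable
            # depth of the head: one more than the deepest premise used
            depth[head] = 1 + max((depth[a] for a in body), default=0)
            for a in body:
                edges.setdefault((a, head), set()).add(r["name"])
            changed = True
    return depth, edges
```

      <p>Because a literal already in the depth map is skipped, only its first derivation is recorded, which is exactly the first-appearance annotation the text describes.</p>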
      <p>The graph obtained via the above method is labelled by a label function λ that associates a
set of rules to an edge. Therefore, given a defeasible theory D = (F, R, &gt;), we have that such a
graph is defined by G = ⟨N, E, λ⟩, where λ is a function from E to 2^R. However, this
labelling does not generate an exponential growth of the computational cost, as discussed below.
The process above, which starts from facts and computes the extension with annotations for the
derivation depth, has the exact same computational cost as the process without the annotation.</p>
      <p>We are now able to sketch the algorithm for proof reconstruction. The basic concept is to start
with a concluded literal l for which we aim at reconstructing the proof, and to provide subsequent
expansions within the ADG while following the links in a way that does not incur the issue
of circular dependencies. In particular, we expand a literal l towards another literal l′ that is
connected to l if, and only if, the depth of l′ is less than the depth of l. In this way, after
a finite number of steps, each of which can compute, at most, a subset of the literals (and
therefore with a maximum number of steps that is O(L), with L the number of literals), we
reach a set of literals with depth 0. These literals are therefore facts; they trigger the theory
in a way that derives the concluded literal l, and they also trigger the arguments against l, which
are however proven to be weaker.</p>
      <p>The description sketched above is defined formally in Algorithm 1. We denote by depth(l) the
depth of the literal l in a defeasible theory. We keep this value undefined for the literals
that are not proven in the theory.</p>
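The backward expansion can likewise be sketched in code (again a hypothetical Python fragment, not Houdini's implementation, reusing the depth annotation and labelled edges of the ADG; the checks against counterarguments are omitted). Starting from the target literal, we only follow edges towards literals of strictly smaller depth, which rules out circular dependencies:

```python
def extract_proof(target, depth, edges):
    """Walk the ADG backwards from `target`, expanding only towards
    literals of strictly smaller depth; collect the proof DAG as
    (antecedent, rule labels, conclusion) triples."""
    proof = set()
    frontier = [target]
    visited = set()
    while frontier:
        lit = frontier.pop()
        if lit in visited or depth.get(lit) == 0:
            continue                 # facts (depth 0) are the leaves
        visited.add(lit)
        for (src, dst), labels in edges.items():
            # expand only along strictly decreasing depth: no cycles
            if dst == lit and depth[src] < depth[lit]:
                proof.add((src, frozenset(labels), dst))
                frontier.append(src)
    return proof
```

Each literal is expanded at most once, and each expansion scans at most all labelled edges, which is where the polynomial bound comes from.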
      <p>Based on the reasoning we reported above, we can prove the following theorem.
Algorithm 1: Algorithm that computes the proof DAG of a literal.
Theorem 1. Algorithm 1 correctly computes a proof DAG for a literal l in a defeasible theory D
in O(n² · m), with n the number of literals and m the number of rules in D.</p>
      <p>Proof. Consider a graph G that is the ADG of a defeasible theory D, and a literal l proven in D.</p>
      <p>First of all, we can easily show that the graph P output by Algorithm 1 is a DAG. By
construction, there is only one vertex in P that has no in-edge, and there exists at least one vertex
that has no out-edge. Let us assume, by contradiction, that we can build a cycle that starts
and ends in one vertex different from the root and the leaves found by the above reasoning.
This means that on the cycle in Lines 6-16 we generate one vertex that is entered
twice in one sub-cycle (Lines 10-13). This is impossible, for it would mean that, for some pair of literals l and l′,
simultaneously depth(l) &lt; depth(l′) and depth(l′) &lt; depth(l). This proves the first claim.</p>
      <p>The fact that the obtained DAG P actually is a proof DAG can easily be proven by observing
that the set of facts in P, excluding the dummy fact that manages the hypotheses, triggers
the rules in the theory. By construction of the chains in the backward search provided by the cycle of
Lines 4-19, when no hypothetical rules exist in the theory, this leads to a chain from the facts to l that,
being l proven in D, is unbeaten by construction of the graph G given in input to Algorithm 1.</p>
      <p>When the theory contains hypotheses, we need to observe that the dummy fact triggers only
the rules that correspond to the hypotheses used in P, not all the triggered rules, and therefore we cannot
automatically ensure that the above reasoning applies. However, since the triggered hypotheses
are a proper subset of the hypotheses, and the literal l is proven by D, we can conclude the same.</p>
      <p>The complexity of the algorithm is determined by the execution of the cycles in Algorithm 1.
Although the labels are subsets of rules, there is no combinatorial explosion, as each rule is
used in the labelling only once, upon the first derivation of its conclusion. If a label
"consumes" some of the available rules, no further usage of those rules can occur in subsequent
labelling. Therefore, the process ends in polynomial time.</p>
      <p>We consider a proof to be minimal when it is minimal in its depth. By applying the
reconstruction process of Algorithm 1, and with the proof of Theorem 1, we can ensure that the
extracted proof is minimal. The proof of this is omitted for reasons of space.
Corollary 1. Algorithm 1 computes a minimal proof.</p>
      <p>We conclude this section by providing a formalisation of the theory of Example 1 from Section 1,
and in Figure 2 we then show the DAG and the proof tree for the conclusion.
Example 4. Consider the theory of Example 1, D = (F, R, &gt;), with F = {…} and R =
{r1 : … ⇒ …, r2 : … ⇒ …, …, r14 : … ⇒ …}, and superiority relations r7 &gt; r12,
r8 &gt; r13, r9 &gt; r14, r11 &gt; r13, and r11 &gt; r14. We can claim that the conclusion
holds defeasibly; Figure 2 shows its proof tree.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Related work</title>
      <p>Proof reconstruction is a well-studied topic that has received much attention in the area of
automated reasoning. It has been treated in SMT solvers such as Z3 [16], first-order theorem provers
such as E [17], SPASS [18] and Vampire [19], higher-order theorem provers such as Leo-III [20],
interactive theorem provers such as Isabelle [21] and HOL Light [22], and in combinations of
interactive theorem provers with other systems [23, 24, 25]. Paramodulation methods
employed for higher-order derivations are also relevant here [26]. A problem for proof reconstruction in
automated reasoning arises when data, which can be either clauses in classical logic or rules in our
case, is eliminated during the reasoning process because it is redundant. This is not the case
with Houdini, which does not perform such deletion operations.</p>
      <p>Closer to our field of interest, non-monotonic reasoning, are the investigations on the
difference between proving and exhibiting a proof, as in [27] and [28]. The specificity of
non-monotonic systems lies in the need for a proof that takes into consideration not only a positive
argument in favour of a given thesis, but also the arguments against that thesis, which should be
shown to be beaten by the positive argument.</p>
      <p>In parallel to the research on proof reconstruction, a discussion has started on the topic of
argument reconstruction, where proof and argument are dual concepts, in the sense that the
skeptical framework considers a proof only when there are arguments for it. Walton’s influential
research on the nature of argumentation [29] has, since the very beginning, considered argument
reconstruction one of the most interesting research questions in argumentation theory. Some
recent technical advancements have been brought by Aakhus et al. in [30, 31] while designing
strategies for large-scale discourses.</p>
      <p>The need for argument reconstruction, and for technical solutions to it, has been an inspiration
for this work, as it has been for other studies, such as the significant research by Wieland [32].
In his study, the author addresses the problem of how to choose an argument among many
when they all appear to be plausible in deriving a conclusion. His approach looks critically
at the foundational studies made by Ryle, and is based on the notion of regress, which
shares commonalities with our technique.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions and further work</title>
      <p>In this paper, we dealt with the problem of computationally reconstructing proofs in a reasoner
for propositional Defeasible Logic (DL). We have shown that the problem shares issues with
proof reconstruction in automated reasoning for classical logic and with argument reconstruction
in argumentation theory. The solution that we propose is rooted in the principles of skeptical
non-monotonic reasoning and in the definition of proof in Defeasible Logic. Moreover, the
given algorithm is computationally feasible and outputs a proof that is minimal in its depth.</p>
      <p>There are two directions still to be explored. First of all, we are interested in exploring
proof reconstruction for Defeasible Deontic Logic. Deontic reasoning requires more subtlety in
the reconstruction process, but the exhibition of such proofs could then be useful in a variety of
legal contexts where a reasoning system can be used. The second direction lies in the possibility
of introducing information asymmetry, as in game theory. The fact that we can provide a proof
that is accepted in a dialogue rests entirely on the agreement about the underlying background.
If the background is the law, this is typically managed with a pure epistemic operator that
asserts the knowledge of facts, rules, and superiority relation by each of the discussants. In other
cases, the discussants may disagree about the background, which generates a
quite different dialogue-convergence protocol. A proposition in this context can be justified by
proposing an argument, but also by rejecting counterarguments within a common base or by
negotiating on this base. If the base is formalised in Defeasible Logic, all of these elements
could be computed by a proof reconstruction algorithm like the one shown in this paper.
[15] L. Aceto, W. Fokkink, C. Verhoef, Structural operational semantics, in: J. Bergstra, A. Ponse,
S. Smolka (Eds.), Handbook of Process Algebra, Elsevier Science, Amsterdam, 2001, pp.
197–292. doi:https://doi.org/10.1016/B978-044482830-9/50021-7.
[16] L. M. de Moura, N. S. Bjørner, Proofs and refutations, and Z3, in: LPAR Workshops, 2008.
[17] S. Schulz, S. Cruanes, P. Vukmirović, Faster, higher, stronger: E 2.3, in: P. Fontaine (Ed.),</p>
      <p>Proc. of the 27th CADE, Natal, Brasil, number 11716 in LNAI, Springer, 2019, pp. 495–507.
[18] C. Weidenbach, D. Dimova, A. Fietzke, R. Kumar, M. Suda, P. Wischnewski, Spass version
3.5, in: R. A. Schmidt (Ed.), Automated Deduction – CADE-22, Springer Berlin Heidelberg,
Berlin, Heidelberg, 2009, pp. 140–145.
[19] L. Kovács, A. Voronkov, First-order theorem proving and vampire, in: N. Sharygina,
H. Veith (Eds.), Computer Aided Verification, Springer Berlin Heidelberg, Berlin,
Heidelberg, 2013, pp. 1–35.
[20] A. Steen, C. Benzmüller, Extensional higher-order paramodulation in Leo-III, Journal of</p>
      <p>Automated Reasoning 65 (2021) 775 – 807. doi:10.1007/s10817-021-09588-x.
[21] A. Krauss, C. Sternagel, R. Thiemann, C. Fuhs, J. Giesl, Termination of isabelle functions
via termination of rewriting, LNCS-LNAI 6898 LNCS (2011) 152 – 167. doi:10.1007/
978-3-642-22863-6_13.
[22] C. Kaliszyk, J. Urban, PRocH: proof reconstruction for HOL Light, LNCS-LNAI 7898 LNAI
(2013) 267 – 274. doi:10.1007/978-3-642-38574-2_18.
[23] S. Böhme, Proof reconstruction for Z3 in Isabelle/HOL, in: 7th International Workshop on</p>
      <p>Satisfiability Modulo Theories (SMT ’09), 2009.
[24] S. Böhme, A. C. J. Fox, T. Sewell, T. Weber, Reconstruction of Z3's bit-vector proofs
in HOL4 and Isabelle/HOL, volume 7086 LNCS, 2011, pp. 183–198. doi:10.1007/
978-3-642-25379-9_15.
[25] N. Sultana, C. Benzmüller, L. C. Paulson, Proofs and reconstructions, in: C. Lutz, S. Ranise
(Eds.), Frontiers of Combining Systems, Springer International Publishing, Cham, 2015, pp.
256–271.
[26] A. Asperti, E. Tassi, Higher order proof reconstruction from paramodulation-based
refutations: The unit equality case, LNCS-LNAI 4573 LNAI (2007) 146 – 160. doi:10.1007/
978-3-540-73086-6_14.
[27] R. Blanco, Z. Chihani, D. Miller, Translating between implicit and explicit versions of proof,</p>
      <p>LNCS-LNAI 10395 LNAI (2017) 255 – 273. doi:10.1007/978-3-319-63046-5_16.
[28] H. Barbosa, J. C. Blanchette, M. Fleury, P. Fontaine, Scalable fine-grained proofs for
formula processing, Journal of Automated Reasoning 64 (2020) 485 – 510. doi:10.1007/
s10817-018-09502-y.
[29] D. N. Walton, Dialogue theory for critical thinking, Argumentation 3 (1989) 169 – 184.</p>
      <p>doi:10.1007/BF00128147.
[30] M. Aakhus, M. G. Benovitz, Argument reconstruction and socio-technical facilitation of
large scale argumentation, volume 363, 2008, p. 77 – 81. doi:10.1145/1479190.1479201.
[31] M. Aakhus, M. Lewiński, Argument analysis in large-scale deliberation, 2011. doi:10.
      <p>1075/z.163.12aak.
[32] J. W. Wieland, Regress argument reconstruction, Argumentation 26 (2012) 489 – 503.
doi:10.1007/s10503-012-9264-9.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Bassiliades</surname>
          </string-name>
          , G. Antoniou, G. Governatori,
          <article-title>Proof explanation in the dr-device system</article-title>
          ,
          <source>LNCS-LNAI 4524 LNCS</source>
          (
          <year>2007</year>
          )
          <fpage>249</fpage>
          -
          <lpage>258</lpage>
          . doi:10.1007/978-3-540-72982-2_19.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Antoniou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bikakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Dimaresis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Genetzakis</surname>
          </string-name>
          , G. Georgalis,
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Karouzaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kazepis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kosmadakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kritsotakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lilis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Papadogiannakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Pediaditis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Terzakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Theodosaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zeginis</surname>
          </string-name>
          ,
          <article-title>Proof explanation for the semantic web using defeasible logic</article-title>, <source>LNCS-LNAI 4798 LNAI</source> (
          <year>2007</year>
          )
          <fpage>186</fpage>
          -
          <lpage>197</lpage>
          . doi:10.1007/978-3-540-76719-0_21.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Antoniou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Billington</surname>
          </string-name>
          , G. Governatori,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maher</surname>
          </string-name>
          ,
          <article-title>Representation results for defeasible logic</article-title>
          ,
          <source>ACM Transactions on Computational Logic</source>
          <volume>2</volume>
          (
          <year>2001</year>
          )
          <fpage>255</fpage>
          -
          <lpage>287</lpage>
          . doi:10.1145/371316.371517.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pasetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tubini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Veronese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Villa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zorzi</surname>
          </string-name>
          <article-title>Houdini (unchained): an effective reasoner for defeasible logic</article-title>
          ,
          <source>in: Proceedings of the 6th Workshop on Advances in Argumentation in Artificial Intelligence</source>
          , volume
          <volume>3354</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rotolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          , <article-title>Stable normative explanations</article-title>,
          <source>Frontiers in Artificial Intelligence and Applications</source>
          <volume>362</volume>
          (
          <year>2022</year>
          )
          <fpage>43</fpage>
          -
          <lpage>52</lpage>
          . doi:10.3233/FAIA220447.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rotolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          ,
          <article-title>Inference to the stable explanations</article-title>
          ,
          <source>LNCS-LNAI 13416 LNAI</source>
          (
          <year>2022</year>
          )
          <fpage>245</fpage>
          -
          <lpage>258</lpage>
          . doi:10.1007/978-3-031-15707-3_19.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Nute</surname>
          </string-name>
          ,
          <article-title>Defeasible logic</article-title>
          ,
          <source>in: Handbook of Logic in Artificial Intelligence and Logic Programming</source>
          , volume
          <volume>3</volume>
          , Oxford University Press,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Antoniou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Billington</surname>
          </string-name>
          , G. Governatori,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maher</surname>
          </string-name>
          ,
          <article-title>Representation results for defeasible logic</article-title>
          ,
          <source>ACM Trans. Comput. Log</source>
          .
          <volume>2</volume>
          (
          <year>2001</year>
          )
          <fpage>255</fpage>
          -
          <lpage>287</lpage>
          . doi:10.1145/371316.371517.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Scannapieco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rotolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          ,
          <article-title>The rationale behind the concept of goal</article-title>, <source>Theory Pract. Log. Program.</source>
          <volume>16</volume>
          (
          <year>2016</year>
          )
          <fpage>296</fpage>
          -
          <lpage>324</lpage>
          . doi:10.1017/S1471068416000053.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          , L. Pasetto,
          <article-title>Revising ethical principles and norms in hybrid societies: Basic principles and issues</article-title>
          , in: G. Jezic,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen-Burger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kusek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sperka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Howlett</surname>
          </string-name>
          , L. C. Jain (Eds.),
          <source>Agents and Multi-Agent Systems: Technologies and Applications</source>
          <year>2021</year>
          , Springer Singapore, Singapore,
          <year>2021</year>
          , pp.
          <fpage>103</fpage>
          -
          <lpage>113</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rotolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Scannapieco</surname>
          </string-name>
          ,
          <article-title>Computing strong and weak permissions in defeasible logic</article-title>
          ,
          <source>J. Philos. Log</source>
          .
          <volume>42</volume>
          (
          <year>2013</year>
          )
          <fpage>799</fpage>
          -
          <lpage>829</lpage>
          . doi:10.1007/s10992-013-9295-1.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tomazzoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pasetto</surname>
          </string-name>
          ,
          <article-title>An ontology of changes in normative systems from an agentive viewpoint</article-title>
          , in: F.
          <string-name>
            <surname>De La Prieta</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Mathieu</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          <string-name>
            <surname>Rincón Arango</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>El Bolock</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Del Val</surname>
            ,
            <given-names>J. Jordán</given-names>
          </string-name>
          <string-name>
            <surname>Prunera</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Carneiro</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Fuentes</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Lopes</surname>
          </string-name>
          , V. Julian (Eds.),
          <article-title>Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness</article-title>
          .
          <source>The PAAMS Collection</source>
          , Springer International Publishing, Cham,
          <year>2020</year>
          , pp.
          <fpage>131</fpage>
          -
          <lpage>142</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          , G. Governatori,
          <article-title>Compliant business processes with exclusive choices from agent specification</article-title>
          ,
          <source>in: PRIMA</source>
          , volume
          <volume>9387</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2015</year>
          , pp.
          <fpage>603</fpage>
          -
          <lpage>612</lpage>
          . doi:10.1007/978-3-319-25524-8_43
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Padmanabhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rotolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sattar</surname>
          </string-name>
          ,
          <article-title>A defeasible logic for modelling policy-based intentions and motivational attitudes</article-title>
          ,
          <source>Log. J. IGPL</source>
          <volume>17</volume>
          (
          <year>2009</year>
          )
          <fpage>227</fpage>
          -
          <lpage>265</lpage>
          . doi:10.1093/jigpal/jzp006.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>