<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>These authors contributed equally.
" cabalar@udc.es (P. Cabalar); brais.mcastro@udc.es (B. Muñiz)
~ https://www.dc.fi.udc.es/~cabalar (P. Cabalar)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Explanation Graphs for Stable Models of Labelled Logic Programs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pedro Cabalar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Brais Muñiz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Coruña</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
<p>In this paper, we introduce the notion of an explanation graph for any model of a logic program. For each true atom in the model, the graph contains a proof that uses program rules represented by rule labels. A model may have zero, one or several explanation graphs: when it has at least one, it is called a justified model. We prove that all stable models are justified whereas, in general, the opposite does not hold, at least for disjunctive programs. To keep only the information that is considered salient, we discuss several operations on explanation graphs. These include the removal of incoming or outgoing edges of a node, but also what we define as node forgetting, that is, removing a node while keeping the connectivity of the rest of the graph. We then explain how these theoretical concepts constitute the foundation of xclingo 2.0, a tool for explainable Answer Set Programming (ASP) that, in turn, uses an ASP encoding to generate explanations. The tool translates the original program into a meta-program that constitutes the core of xclingo 2.0. We explain this encoding and prove its soundness and completeness with respect to explanation graphs. Through a practical example, we illustrate the general use of the tool and its annotation-based input language. Finally, we show how critical these annotations can be for designing meaningful, summarised, natural-language explanations.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>In the past few years, Artificial Intelligence (AI) systems have made great advances, generally at the cost of increased scale and complexity. As a consequence, explaining their full behaviour frequently lies beyond the reach of human understanding. Even for symbolic AI approaches, which are credited with the advantage of being verifiable, the number and size of the possible justifications generated to explain a given result may easily exceed the capacity of human comprehension. In a recent review, Tim Miller [1] discusses the adequacy of explanation generation from the point of view of cognitive science and social psychology. The author claims that explanations generated by AI systems should be more similar to explanations provided by humans, who need not be AI specialists. One would therefore expect some features of human-like explanations, such as being context-dependent, causal, selected and contrastive, among other attributes.</p>
<p>A line of work initiated in [2] considered the generation of causal graph explanations for Answer Set Programming (ASP) [3] in the context of practical Knowledge Representation (KR). This was later followed by a more practical approach with the implementation of a tool, xclingo [4], that allowed the generation of explanation trees informally related to causal graphs. The orientation introduced by xclingo was to understand explanations as a KR feature per se, rather than as a debugging or formal analysis tool. When we try to produce sensible, context-dependent explanations, or we try to reduce them to relevant information, an additional effort of KR specification is needed. To facilitate such a specification, xclingo provides an annotation language. Even though xclingo only focused on effective causes derived from the positive part of the program (xclingo does not deal with contrastive explanations), its application to large programs, sometimes dealing with dynamic domains, revealed some important limitations. First, xclingo always computed all the possible derivations for each atom, even when multiple rules could be used to derive it. In large logic programs (especially those generated automatically) the number of alternative derivations may grow exponentially. As a result, in programs where a first answer set was obtained instantly, xclingo could still require several minutes to completely explain that answer set. Second, xclingo explanations sometimes contained information that should be removed since, although accurate from the point of view of the logic program derivations, it was counter-intuitive under a causal reading. A third important drawback was that xclingo did not actually have a formal semantics describing the explanations to be obtained, so its behaviour remained purely operational.</p>
<p>In this paper, we describe a formal characterisation of explanations in terms of graphs constructed with atoms and program rule labels. Under this framework, models may be justified, meaning that they have one or more explanation graphs, or unjustified otherwise. We prove that all stable models are justified whereas, in general, the opposite does not hold, at least for disjunctive programs. We also characterise a pair of basic operations on graphs, which we call edge pruning and node forgetting, that allow performing information filtering on the explanations. These formal definitions constitute the basis of xclingo 2.0, which relies on an ASP encoding to generate the explanation graphs of a given answer set of some original program. We prove the soundness and completeness of this encoding and then proceed to explain the new xclingo 2.0 specification language in terms of the effects it produces on explanation graphs. We also provide an illustrative example and explain several minor variations on its annotations.</p>
      <p>The rest of the paper is structured as follows. Section 2 contains the formal definitions for
explanation graphs, their properties with respect to stable models and their manipulation for
information filtering. Section 3 describes our ASP implementation and proves its soundness
and completeness, including a small example as an illustration. Section 4 briefly comments on
related work and, finally, Section 5 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Explanation Graphs</title>
<p>
        We start from a finite1 signature At, a non-empty set of propositional atoms. A (labelled) rule is an implication of the form:
ℓ : p₁ ∨ · · · ∨ pₖ ← q₁ ∧ · · · ∧ qₘ ∧ ¬s₁ ∧ · · · ∧ ¬sₙ ∧ ¬¬t₁ ∧ · · · ∧ ¬¬tₕ
        (
        <xref ref-type="bibr" rid="ref1">1</xref>
        )
Given a rule r like (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), we denote its label as Lb(r) =df ℓ. We also call the disjunction in the consequent the head of r, written Head(r), and denote the set of head atoms as H(r) =df {p₁, . . . , pₖ}; the conjunction in the antecedent is called the body of r and denoted as Body(r). We also define the positive and negative parts of the body respectively as the conjunctions Body⁺(r) =df q₁ ∧ · · · ∧ qₘ and Body⁻(r) =df ¬s₁ ∧ · · · ∧ ¬sₙ ∧ ¬¬t₁ ∧ · · · ∧ ¬¬tₕ. The atoms in the positive body are represented as B⁺(r) =df {q₁, . . . , qₘ}. As usual, an empty disjunction (resp. conjunction) stands for ⊥ (resp. ⊤). A rule r with empty head H(r) = ∅ is called a constraint. On the other hand, when H(r) = {p} is a singleton, B⁺(r) = ∅ and Body⁻(r) = ⊤, the rule has the form ℓ : p ← ⊤ and is said to be a fact, simply written as ℓ : p. We will sometimes use the abbreviation ℓ : {p} ← B to stand for ℓ : p ← B ∧ ¬¬p. A (labelled) logic program P is a set of labelled rules where no label is repeated. Note that P may still contain two rules r, r′ with the same body and head, Body(r) = Body(r′) and H(r) = H(r′), but different labels Lb(r) ≠ Lb(r′). A program P is positive if Body⁻(r) = ⊤ for all rules r ∈ P. A program P is non-disjunctive if |H(r)| ≤ 1 for every rule r ∈ P. Finally, P is Horn if it is both positive and non-disjunctive: note that this may include (positive) constraints ⊥ ← B.
      </p>
<p>A propositional interpretation I is any subset of atoms I ⊆ At. We say that a propositional interpretation I is a model of a labelled program P if I |= Body(r) → Head(r) in classical logic for every rule r ∈ P. The reduct of a labelled program P with respect to I, written Pᴵ, is a simple extension of the standard reduct by [5] that now collects the labelled positive rules:
Pᴵ =df { Lb(r) : Head(r) ← Body⁺(r) | r ∈ P, I |= Body⁻(r) }
As usual, an interpretation I is a stable model (or answer set) of a program P if I is a minimal model of Pᴵ. Note that, for the definition of stable models, the rule labels are irrelevant. We write SM(P) to stand for the set of stable models of P.</p>
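<p>To make these definitions concrete, the reduct and the stability test can be sketched in Python for finite ground programs. The rule representation and atom names below are our own illustration and are not part of the paper's tool:</p>

```python
from itertools import chain, combinations

# A labelled rule (label, head, pos, neg, dneg) stands for
#   label : head ← pos ∧ ¬neg ∧ ¬¬dneg   (head is a disjunction of atoms)
rules = [
    ("l1", {"a"}, set(), set(), set()),   # l1 : a
    ("l2", {"b"}, {"a"}, {"c"}, set()),   # l2 : b ← a ∧ ¬c
]
atoms = {"a", "b", "c"}

def is_model(I, rules):
    # I |= Body(r) → Head(r) for every rule r
    return all(not (pos <= I and not (neg & I) and dneg <= I) or (head & I)
               for _, head, pos, neg, dneg in rules)

def reduct(rules, I):
    # keep the labelled positive part of each rule whose negative body holds in I
    return [(lb, head, pos) for lb, head, pos, neg, dneg in rules
            if not (neg & I) and dneg <= I]

def is_stable(I, rules):
    red = reduct(rules, I)
    pos_model = lambda J: all(not pos <= J or (head & J) for _, head, pos in red)
    proper = chain.from_iterable(
        combinations(sorted(I), k) for k in range(len(I)))
    # I must be a model of the reduct and no proper subset may be one
    return pos_model(I) and not any(pos_model(set(J)) for J in proper)

stable = [set(c) for k in range(len(atoms) + 1)
          for c in combinations(sorted(atoms), k)
          if is_model(set(c), rules) and is_stable(set(c), rules)]
```

<p>For this toy program the unique stable model is {a, b}. Note that, as stated in the text, the labels play no role in stability.</p>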
<p>We define the rules of a program P that support an atom p under interpretation I as P[p, I] =df {r ∈ P | p ∈ H(r), I |= Body(r)}, that is, the rules with p in the head whose body is true w.r.t. I. The next proposition shows that, given I, the rules that support p in the reduct Pᴵ are precisely the positive parts of the rules that support p in P.</p>
<p>Proposition 1. For any model I |= P of a program P and any atom p ∈ I: (Pᴵ)[p, I] = (P[p, I])ᴵ. □
Definition 1 (Explanation). Let P be a labelled program and I a classical model of P. An explanation G of I under P is a labelled directed graph G = ⟨I, E, λ⟩ whose vertices are the atoms in I, the edges in E ⊆ I × I connect pairs of atoms, the function λ : I → Lb(P) assigns a label to each atom, and G further satisfies:
1We leave the study of infinite signatures for future work. This will imply explanations of infinite size, but each one should contain a finite proof for each atom.</p>
<p>(i) G is acyclic
(ii) λ is injective
(iii) for every p ∈ I, the rule r such that Lb(r) = λ(p) satisfies:
r ∈ P[p, I] and B⁺(r) = {q | (q, p) ∈ E}.
□</p>
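<p>Conditions (i)–(iii) are directly executable. As an illustrative sketch (the toy program and all names below are ours, not taken from the paper), the following Python function checks whether a labelled graph is an explanation:</p>

```python
# Toy check of Definition 1. rules maps label -> (head, pos, neg, dneg);
# the program and atom names are illustrative.
rules = {
    "l1": ({"a"}, set(), set(), set()),    # l1 : a
    "l2": ({"b"}, {"a"}, set(), set()),    # l2 : b ← a
}

def body_holds(r, I):
    head, pos, neg, dneg = r
    return pos <= I and not (neg & I) and dneg <= I

def support(p, I):
    # P[p, I]: rules with p in the head whose body is true in I
    return {lb for lb, r in rules.items() if p in r[0] and body_holds(r, I)}

def acyclic(nodes, edges):
    seen, on_path = set(), set()
    def dfs(v):
        if v in on_path:
            return False
        if v in seen:
            return True
        seen.add(v)
        on_path.add(v)
        ok = all(dfs(w) for (u, w) in edges if u == v)
        on_path.discard(v)
        return ok
    return all(dfs(v) for v in nodes)

def is_explanation(I, edges, lam):
    if set(lam) != I or len(set(lam.values())) != len(lam):    # (ii)
        return False
    for p in I:                                                # (iii)
        lb = lam[p]
        if lb not in support(p, I):
            return False
        if rules[lb][1] != {q for (q, x) in edges if x == p}:
            return False
    return acyclic(I, edges)                                   # (i)

ok = is_explanation({"a", "b"}, {("a", "b")}, {"a": "l1", "b": "l2"})
```

<p>Here the labelling {a: l1, b: l2} with the single edge (a, b) passes all three conditions, while a labelling that reuses a label fails injectivity.</p>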
<p>Condition (ii) means that there are no repeated labels in the graph, i.e., λ(p) ≠ λ(q) for different atoms p, q ∈ I. Condition (iii) requires that each atom p in the graph is assigned the label ℓ of some rule with p in the head, with a body satisfied by I and whose atoms in the positive body form all the incoming edges for p in the graph. Intuitively, labelling p with ℓ means that the corresponding (positive part of the) rule has been fired, “producing” p as a result. Since a label cannot be repeated in the graph, each rule can only be used to produce one atom, even though the rule head may contain more than one (when it is a disjunction). It is not difficult to see that an explanation G = ⟨I, E, λ⟩ for a model I is uniquely determined by its atom labelling λ. This is because condition (iii) in Definition 1 uniquely specifies all the incoming edges for all the nodes in the graph. On the other hand, of course, not every arbitrary atom labelling corresponds to a well-formed explanation. We will sometimes abbreviate an explanation G for a model I by just using its labelling λ, represented as a set of pairs of the form (λ(p) : p) with p ∈ I.</p>
      <sec id="sec-2-1">
<title>Definition 2 (Justified model). We say that I is a justified model of a labelled program P if</title>
        <p>I |= P and I has some explanation G under P. We write JM(P) to stand for the set of justified models of P.</p>
        <p>We can observe that not all models are justified, whereas a justified model may have more
than one explanation, as we illustrate next.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Example 1. Consider the labelled logic program</title>
<p>No model I |= P with p ∈ I is justified, since p does not occur in any head and so its support is always empty, P[p, I] = ∅, and p cannot be labelled. The models of P without p are {a}, {a, b}, {b, c} and {a, b, c}, but only the first two are justified. The explanation for I = {a} corresponds to the labelling {(ℓ1 : a)} (it forms a graph with a single node). Model I = {a, b} has two possible explanation graphs. Model I = {b, c} is not justified: we have no support for c given I, P[c, I] = ∅, because I satisfies neither the body of ℓ2 nor that of ℓ3. On the other hand, model {a, b, c} is not justified either, because P[a, I] = P[b, I] = {ℓ1} and we cannot use the same label ℓ1 for two different atoms a and b in the same explanation (condition (ii) in Def. 1). □
Definition 3 (Proof of an atom). Let I be a model of a labelled program P, G = ⟨I, E, λ⟩ an explanation for I under P, and let p ∈ I. The proof for p induced by G, written π_G(p), is the derivation:</p>
<p>π_G(p) =df   π_G(q₁) · · · π_G(qₙ) ⊢ p   (λ(p))
where, if r ∈ P is the rule satisfying Lb(r) = λ(p), then {q₁, . . . , qₙ} = B⁺(r). When n = 0, the derivation antecedent π_G(q₁) · · · π_G(qₙ) is replaced by ⊤ (corresponding to the empty conjunction). □
Example 2. Let P be the labelled logic program:
ℓ1 : a        ℓ2 : b ← a        ℓ3 : c ← a
P has a unique justified model {a, b, c} whose explanation is shown in Figure 1 (left), whereas the induced proof for atom c is shown in Figure 1 (right). □</p>
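<p>Definition 3 can be turned into a small recursive renderer. The following Python sketch (the graph instance and all names are assumed, for illustration only) prints the proof induced by an explanation with labelling λ and edges E:</p>

```python
# Explanation graph: lam is the atom labelling, edges are (q, p) pairs
# meaning that q feeds the rule producing p. This toy instance is ours.
lam = {"a": "l1", "b": "l2", "c": "l3"}
edges = {("a", "b"), ("a", "c")}

def proof(p):
    # the premises of p are the sources of its incoming edges (B+ of its rule)
    premises = sorted(q for (q, x) in edges if x == p)
    if not premises:                      # n = 0: the antecedent is ⊤
        return f"⊤ ⊢ {p} ({lam[p]})"
    inner = "; ".join(proof(q) for q in premises)
    return f"[{inner}] ⊢ {p} ({lam[p]})"

print(proof("c"))
```

<p>Note that if an atom occurs in several subproofs, the same subproof is repeated, matching the global “coherence” discussed below.</p>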
        <p>The next proposition trivially follows from the definition of explanation graphs:</p>
      </sec>
      <sec id="sec-2-3">
<title>Proposition 2. If P is a Horn program and G is an explanation for a model I of P then, for</title>
        <p>every atom p ∈ I, π_G(p) corresponds to a Modus Ponens derivation of p using the rules in P.</p>
<p>It is worth mentioning that explanations do not generate arbitrary Modus Ponens derivations of an atom, but only those that are globally “coherent” in the sense that, if an atom is repeated in a proof, it is always justified by repeating the same subproof.</p>
        <p>In the previous examples, justified and stable models coincided: one may wonder whether
this is a general property. As we see next, however, every stable model is justified but, in general,
the opposite may not hold.</p>
        <sec id="sec-2-3-1">
<title>Theorem 1. Stable models are justified: SM(P) ⊆ JM(P).</title>
        </sec>
      </sec>
      <sec id="sec-2-4">
<title>Proposition 3. If P is a consistent Horn program then it has a unique justified model that coincides with the least model of P.</title>
<p>In general, the number of explanations for a single justified model can be exponential, even when the program is Horn and thus has a unique justified and stable model corresponding to the least classical model, as we just proved. As an example:</p>
      </sec>
<sec id="sec-2-5">
        <title>Example 3 (A chain of firing squads)</title>
        <p>Consider the following variation of the classical Firing Squad Scenario introduced by [6] for causal counterfactuals (although we do not use it for that purpose here). We have an army distributed in n squads of three soldiers each: a captain and two riflemen per squad. We place the squads on a sequence of n consecutive hills i = 0, . . . , n − 1. An unfortunate prisoner is at the last hill n − 1 and is being aimed at by the last two riflemen. At each hill i, the two riflemen will fire if their captain gives a signal to fire, and captain i + 1 will give a signal to fire if she hears a shot from the previous hill i in the distance. Suppose captain 0 gives a signal to fire. Our logic program would have the form:
ℓ0 : signal(0)
a(i) : fireA(i) ← signal(i)
b(i) : fireB(i) ← signal(i)
a′(i+1) : signal(i+1) ← fireA(i)
b′(i+1) : signal(i+1) ← fireB(i)
for all i = 0, . . . , n − 1, where we assume (for simplicity) that signal(n) represents the death of the prisoner. This program has one stable model (the least model) making true the 3n + 1 atoms occurring in the program. However, this model has 2ⁿ explanations because, to derive signal(i+1) from level i, we can choose either of the two rules a′(i+1) or b′(i+1) (corresponding to the two riflemen) in each explanation. □</p>
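<p>The exponential growth can be reproduced by direct enumeration. In this Python sketch (the atom and label spellings are ours), each explanation of the unique stable model is identified with the labelling that chooses one rifleman rule per hill:</p>

```python
from itertools import product

def explanations(n):
    # each hill i = 0..n-1 independently picks rule a'(i+1) or b'(i+1)
    expls = []
    for choice in product("AB", repeat=n):
        lam = {"signal(0)": "l0"}
        for i, c in enumerate(choice):
            lam[f"fireA({i})"] = f"a({i})"
            lam[f"fireB({i})"] = f"b({i})"
            lam[f"signal({i+1})"] = f"{c.lower()}'({i+1})"
        expls.append(lam)
    return expls
```

<p>For n hills there are 2**n explanations, each labelling the same 3n + 1 atoms.</p>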
        <p>In many disjunctive programs, justified and stable models coincide. For instance, the following
example is an illustration of a program with disjunction and head cycles.</p>
      </sec>
      <sec id="sec-2-8">
<title>Example 4. Let P be the program:</title>
        <p>ℓ1 : a ∨ b        ℓ2 : b ← a        ℓ3 : a ← b</p>
        <sec id="sec-2-8-1">
<title>This program has one justified model {a, b} that coincides with the unique stable model and has two possible explanations, {(ℓ1 : a), (ℓ2 : b)} and {(ℓ1 : b), (ℓ3 : a)}. □</title>
<p>However, in the general case, not every justified model is a stable model: we provide next a simple counterexample. Consider the program P:
ℓ1 : a ∨ b        ℓ2 : a ∨ c
whose classical models are the five interpretations {a}, {a, b}, {a, c}, {b, c} and {a, b, c}. The last one, {a, b, c}, is not justified, since we would need three different labels and we only have two rules. Each model {a, b}, {a, c}, {b, c} has a unique explanation corresponding to the atom labellings {(ℓ1 : b), (ℓ2 : a)}, {(ℓ1 : a), (ℓ2 : c)} and {(ℓ1 : b), (ℓ2 : c)}, respectively. On the other hand, model {a} has two possible explanations, corresponding to {(ℓ1 : a)} and {(ℓ2 : a)}. Notice that, in the definition of explanation, there is no need to fire every rule with a true body in I: we are only forced to explain every true atom in I. Note also that only the justified models {a} and {b, c} are also stable: this is due to the minimality condition imposed by stable models on positive programs, which gets rid of the other two justified models {a, b} and {a, c}. The following theorem asserts that, for non-disjunctive programs, every justified model is also stable.</p>
<p>Theorem 2. If P is a non-disjunctive program, then SM(P) = JM(P).
□</p>
<p>In the rest of this section, we consider a pair of operations on explanation graphs that allow filtering their information depending on what a final user considers relevant. These two operations are edge pruning and node forgetting. The idea behind edge pruning is as follows. Suppose we display some proof π_G(p) as a tree, with the atom p in the root, writing the proof backwards until we reach the facts in the leaves. We may reach points in the proof where we are not interested in going any deeper, so we prefer to prune the tree at that node. Formally, we define this as an operation on the graph.</p>
<p>Definition 4 (Edge pruning). Let G = ⟨I, E, λ⟩ be an explanation, and let X ⊆ At be a subset of atoms and L ⊆ Lb(P) a subset of labels. Then, the (edge) pruning operation on G produces a new graph prune(G, X, L) =df ⟨I, E ∖ E′, λ⟩ where:
E′ =df {(p, q) ∈ E | p ∈ X} ∪ {(p, q) ∈ E | λ(q) ∈ L}
□</p>
<p>In other words, we remove the outgoing edges of all pruned atoms p ∈ X and the incoming edges of all nodes q with a pruned label λ(q) ∈ L. As an example, Figure 2 shows in G2 the result of pruning G1 with an atom set X and the label set L = {ℓ4}.</p>
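<p>Definition 4 admits a direct set-based sketch in Python (the chain graph below is made up for illustration and is not the G1 of Figure 2):</p>

```python
# prune(G, X, L): drop outgoing edges of atoms in X and incoming edges of
# nodes whose label is in L; lam maps each node to its rule label.
def prune(edges, lam, X, L):
    removed = {(q, p) for (q, p) in edges if q in X or lam[p] in L}
    return edges - removed

edges = {("a", "b"), ("b", "c"), ("c", "d")}
lam = {"a": "l1", "b": "l2", "c": "l3", "d": "l4"}

pruned = prune(edges, lam, {"b"}, {"l4"})   # cuts (b,c) and (c,d)
```

<p>Pruning atom b removes its outgoing edge, and pruning label l4 removes the incoming edges of the node labelled by it.</p>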
<p>The intuition for the second operation, node forgetting, is to obtain explanations that only consist of atoms and rules considered relevant, removing the irrelevant information.
Definition 5 (Node forgetting). Let G = ⟨I, E, λ⟩ be an explanation and let X ⊆ At be a set of atoms to be removed. Then, the node forgetting operation on G produces the graph forget(G, X) =df ⟨I ∖ X, E′, λ⟩ where E′ contains all edges (p₀, pₙ) such that there exists a path (p₀, p₁), (p₁, p₂), . . . , (pₙ₋₁, pₙ) in E with n ≥ 1, {p₀, pₙ} ∩ X = ∅ and {p₁, . . . , pₙ₋₁} ⊆ X.
□</p>
<p>Note that E′ contains all original edges (p, q) ∈ E between non-removed atoms, {p, q} ∩ X = ∅, since they correspond to the case where n = 1. Graph G3 in Figure 2 is the result of forgetting a two-atom set X of nodes on G1.</p>
<p>If we are going to prune some edges and forget some nodes, the order in which we perform both operations produces different results. For instance, take the graph G:
ℓ1 : a → ℓ2 : b → ℓ3 : c
and suppose we want to both prune the atom set X = {b} and forget that same atom. If we start by pruning, forget(prune(G, {b}, ∅), {b}), the final result leaves two disconnected nodes ℓ1 : a and ℓ3 : c. If we start instead by forgetting, prune(forget(G, {b}), {b}, ∅), we get the connected graph ℓ1 : a → ℓ3 : c, because pruning after forgetting has no effect since node b does not exist any more. For this reason, in practice, pruning will be performed in the first place (when we still have all the original nodes in the graph) and forgetting afterwards. An important observation is that, if pruning and forgetting are defined as overall operations on a set of explanations, two different explanations G1 and G2 may end up producing the same filtered explanation G3. For this reason, pruning and forgetting not only reduce the information in each explanation but may also significantly reduce the number of different (filtered) explanations.</p>
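<p>The order sensitivity just described can be replayed with small sketches of both operations (Python, using a three-node chain a → b → c; the encoding is assumed, not part of the paper's tool):</p>

```python
def prune(edges, lam, X, L):
    # Definition 4: drop outgoing edges of X and incoming edges of L-labelled nodes
    return {(q, p) for (q, p) in edges if q not in X and lam[p] not in L}

def forget(edges, nodes, X):
    # Definition 5: connect surviving nodes joined by paths through X only
    keep, out = nodes - X, set()
    for a in keep:
        frontier, seen = {b for (x, b) in edges if x == a}, set()
        while frontier:
            b = frontier.pop()
            if b in seen:
                continue
            seen.add(b)
            if b in X:
                frontier |= {c for (x, c) in edges if x == b}
            elif b in keep:
                out.add((a, b))
    return out

nodes = {"a", "b", "c"}
edges = {("a", "b"), ("b", "c")}          # l1:a -> l2:b -> l3:c
lam = {"a": "l1", "b": "l2", "c": "l3"}

prune_first = forget(prune(edges, lam, {"b"}, set()), nodes, {"b"})
forget_first = prune(forget(edges, nodes, {"b"}), lam, {"b"}, set())
```

<p>prune_first is empty (two disconnected nodes), while forget_first keeps the contracted edge (a, c), matching the discussion above.</p>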
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Using ASP to compute and filter explanations: xclingo 2.0</title>
<p>One of the main features introduced in xclingo 2.0 is the computation of explanations based on an ASP encoding instead of the ad hoc algorithm used before. Given a program P, xclingo 2.0 builds a generic (non-ground) ASP program Π(P) that can be fed with the atoms in some answer set I of P to build the ground program Π(P, I). As we will prove, the answer sets of Π(P, I) are in one-to-one correspondence with the explanations of I. In this way, rather than collecting all possible explanations in a single shot, something that turns out to be too costly for explaining large programs, xclingo 2.0 performs regular calls to clingo with Π(P, I) to compute one, several or all explanations of I on demand. Besides, this provides a more declarative approach that can be easily extended to cover new features (such as, for instance, minimisation among explanations).</p>
<p>Although the real xclingo encoding for the program Π(P) is slightly more elaborate, its essential part can be defined as follows. For each rule in P of the form (1), Π(P) contains the rules:
sup(ℓ) ← holds(q₁) ∧ · · · ∧ holds(qₘ) ∧ ¬holds(s₁) ∧ · · · ∧ ¬holds(sₙ) ∧ ¬¬holds(t₁) ∧ · · · ∧ ¬¬holds(tₕ)   (2)
{ fired(ℓ, pᵢ) } ← derived(q₁) ∧ · · · ∧ derived(qₘ) ∧ holds(pᵢ) ∧ sup(ℓ)   (3)
⊥ ← fired(ℓ, pᵢ) ∧ fired(ℓ, pⱼ)   (4)
for all i, j = 1 . . . k and i ≠ j, and, additionally, Π(P) contains the rules:
derived(A) ← fired(L, A) ∧ holds(A)   (5)
⊥ ← not derived(A) ∧ holds(A)   (6)
⊥ ← fired(L, A) ∧ fired(L′, A) ∧ L ≠ L′ ∧ holds(A)   (7)
As we can see, Π(P) reifies the atoms in P using three predicates: holds(A), which means that atom A is in the answer set I, so it is an initial assumption; fired(L, A), which means that the rule with label L has been “fired” for atom A, that is, λ(A) = L; and, finally, derived(A), which just means that there exists some fired rule for A or, in other words, we were able to derive A. Predicate sup(ℓ) tells us that the body of the rule r with label ℓ is “supported” by I, that is, I |= Body(r). Given any answer set I of P, we define the program Π(P, I) =df Π(P) ∪ {holds(a) | a ∈ I}. It is easy to see that Π(P, I) becomes equivalent to the ground program containing the following rules:
{ fired(ℓ, p) } ← derived(q₁) ∧ · · · ∧ derived(qₘ)   for each rule r ∈ P like (1) with I |= Body(r) and p ∈ H(r) ∩ I
⊥ ← fired(ℓ, p) ∧ fired(ℓ, q)   for each rule r ∈ P like (1) and p, q ∈ H(r) with p ≠ q
derived(p) ← fired(ℓ, p)   for each p ∈ I
⊥ ← not derived(p)   for each p ∈ I
⊥ ← fired(ℓ, p) ∧ fired(ℓ′, p)   for each p ∈ I and ℓ ≠ ℓ′
(8)</p>
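<p>As an illustration of this reification step (our own Python rendering; the predicate spellings holds/fired/derived do not necessarily match xclingo's internal names), the ground program Π(P, I) can be generated as text:</p>

```python
# Build a ground meta-program for a labelled program and an answer set I.
# Rules are (label, head, pos, neg, dneg) tuples; the per-atom
# label-uniqueness constraints are omitted for brevity.
def body_holds(r, I):
    _, _, pos, neg, dneg = r
    return pos <= I and not (neg & I) and dneg <= I

def ground_meta(rules, I):
    prog = []
    for lb, head, pos, neg, dneg in rules:
        if body_holds((lb, head, pos, neg, dneg), I):
            for p in sorted(head & I):
                body = ", ".join(f"derived({q})" for q in sorted(pos))
                prog.append(f"{{fired({lb},{p})}} :- {body}." if body
                            else f"{{fired({lb},{p})}}.")
        for p in sorted(head):          # at most one head atom fired per rule
            for q in sorted(head):
                if p != q:
                    prog.append(f":- fired({lb},{p}), fired({lb},{q}).")
    for p in sorted(I):
        for lb, head, *_ in rules:
            if p in head:               # some rule may derive each atom
                prog.append(f"derived({p}) :- fired({lb},{p}).")
        prog.append(f":- not derived({p}).")   # every atom must be derived
    return prog

rules = [("l1", {"a"}, set(), set(), set()),
         ("l2", {"b"}, {"a"}, set(), set())]
prog = ground_meta(rules, {"a", "b"})
```

<p>Each answer set of the resulting program picks one fired rule per atom, which is exactly a labelling λ of an explanation graph.</p>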
      <sec id="sec-3-1">
<title>Theorem 3 (Soundness). Let I be an answer set of P. For every answer set Z of program Π(P, I)</title>
        <p>there exists an explanation G = ⟨I, E, λ⟩ of I under P such that λ(p) = ℓ if fired(ℓ, p) ∈ Z. □
Theorem 4 (Completeness). Let I be an answer set of P. For every explanation G = ⟨I, E, λ⟩ of I under P there exists some answer set Z of program Π(P, I) where fired(ℓ, p) ∈ Z if λ(p) = ℓ in G. □</p>
<p>The tool xclingo 2.0 also performs information filtering on an explanation, as an extension of the ASP encoding Π(P, I) that computes the explanations themselves. Once filtering (pruning and forgetting) is applied, different explanations may collapse into the same filtered one: we make use of the --project feature of clingo to avoid producing repeated explanations.</p>
<p>To decide which atoms we want to forget or which nodes we want to prune, the xclingo input language allows including annotations in the original ASP program P. These annotations take the form of line comments beginning with %! followed by some keyword. As a result, xclingo code is fully compatible with regular clingo, and we may just decide to leave the annotations as usual comments (in fact, in many cases, they also help to clarify the rule meaning). Figure 3 shows an example of annotated code:</p>
        <p>wed(angelina,billy,2000).
wed(brad,jennifer,2000).
wed(brad,angelina,2014).
wed(billy,connie,2014).
person(X) :- wed(X,Y,D).
person(Y) :- wed(X,Y,D).
%!mute {person(X)} :- person(X).
%!mute_body.
wed(X,Y,D) :- wed(Y,X,D).
divorced(angelina,billy,2003).
divorced(brad,jennifer,2005).
divorced(brad,angelina,2019).
divorced(X,Y,D) :- divorced(Y,X,D).
unwed(X,Y,D) :- wed(X,Y,D), divorced(X,Y,D2), D&lt;D2.
%!trace_rule {"% is married",X}.
married(X) :- wed(X,Y,D), not unwed(X,Y,D).
%!trace_rule {"% is single, as we couldn’t prove (s)he is married",X}.
single(X) :- person(X), not married(X).
%!trace {wed(X,Y,D), "% wed % in %",X,Y,D} :- wed(X,Y,D).
%!show_trace {single(X)}.
%!show_trace {married(X)}.</p>
        <p>The program has several facts about some weddings, divorces and their associated years. Predicate person(X) just collects anybody mentioned in any wedding. Besides, predicates wed(X,Y,D) and divorced(X,Y,D) are symmetric in their arguments X and Y. Predicate unwed(X,Y,D) points out a wedding wed(X,Y,D) that became ineffective by a later divorce. Finally, predicates married(X) and single(X) indicate the current marital status of some person X. Note that the rule for single(X) is formulated as a default: somebody is single if we cannot prove that she is currently married. This example has a unique answer set, for which xclingo generates the unique explanation shown in Figure 4:</p>
        <p>Answer: 1
##Explanation: 1.1
*
|__"angelina is single, as we couldn’t prove (s)he is married"
*
|__"brad is single, as we couldn’t prove (s)he is married"
*
|__"jennifer is single, as we couldn’t prove (s)he is married"
*
|__"billy is married"
| |__"billy wed connie in 2014"
*
|__"connie is married"
| |__"connie wed billy in 2014"
##Total Explanations: 1
Models: 1</p>
        <p>Each tree in the output corresponds to a reversed (Modus Ponens) proof for the positive part of the program and shows the corresponding trace message at each proof node. We can see that billy is married because he wed connie in 2014. We do not get an extended story with billy’s previous marriages, as they are not relevant. Similarly, brad, angelina and jennifer are currently single due to the application of a default: there is no current effective marriage for them. In fact, xclingo answers positive questions like “why is X married?” or “why is X single?” in terms of what has actually happened, but it does not currently consider negative questions like “why is not brad married?” or “why is not billy single?” that would require hypothetical reasoning.</p>
        <p>The example shows how edge pruning can be specified using the annotations mute (for atoms) and mute_body (for rules). For instance, the rule for the symmetric closure of wed(X,Y,D) is affected by mute_body, an annotation that must always precede the affected rule. Without it, the explanation for connie’s marital status would look like:
|__"connie is married"
| |__"connie wed billy in 2014"
| | |__"billy wed connie in 2014"
which is unnecessarily redundant. By muting the body of the rule, there is no real difference between wed(X,Y,D) and wed(Y,X,D) regarding their effect on the explanations. We can also observe that predicate person(X) is always muted. This is because, in this program, the extent of this predicate is derived from the first two arguments of wed(X,Y,D). If we remove this mute annotation, we get unnatural effects for single people, like for instance:
|__"brad is single, as we couldn’t prove (s)he is married"
| |__"brad wed angelina in 2014"
This explanation is counter-intuitive because it uses a marriage to justify that brad is single. Moreover, by removing the mute for person we actually get 32 explanations, since several single people have married multiple times in the past, and each marriage combination can be used in these justifications. To understand what is really happening, we may further add the following trace annotation:
%!trace {person(X),"% is a person",X} :- person(X).
at any point in the code to unveil the effect of predicate person, which was being forgotten until now. Once we do so, we obtain a more detailed justification:
|__"brad is single, as we couldn’t prove (s)he is married"
| |__"brad is a person"
| | |__"brad wed angelina in 2014"
which tells us that, to decide that brad is single, we have used the fact that he is a person, something concluded, in its turn, from his participation in a wedding. This clarifies why predicate person was originally muted: the way in which we complete this predicate from others in the database should have no causal reading (brad is not a person because he married once).</p>
<p>Back to the original code in Figure 3, note how the annotations trace and trace_rule are used to point out which atoms are relevant for an explanation, all the rest being automatically forgotten. For instance, we use trace on any atom for wed, so that wed(brad,angelina,2014) generates the trace message “brad wed angelina in 2014” each time this fact is included in any proof tree. The effect of trace_rule is similar, but it is associated with the rule occurring immediately after the annotation.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Related work</title>
<p>The closest approach to the current work is clearly the already mentioned one based on causal graphs [2]. Although we conjecture that a formal relation can be established (we plan this for future work), the main difference is that causal graphs are “atom oriented” whereas the current approach is graph oriented. For instance, in the firing squads example, the causal graph explanation for the derivations of atoms signal 4 and signal 8 would contain algebraic expressions with all the possible derivations for each one of those atoms. In the current approach, however, we would get an individual derivation in each case but, additionally, the proof we get for signal 4 has to be the same one we use for that atom inside the derivation of signal 8.</p>
      <p>Justifications based on the positive part of the program were also used before in [7]. There,
the authors implemented an ad-hoc approach to the problem of solving biomedical queries,
rather than a general ASP explanation tool. In that paper, the authors were also able to explain
unsatisfiable programs by weakening the constraints, adding auxiliary head atoms. In fact, this
feature has also been implemented in xclingo 2.0 (currently under experimentation) by adding
trace messages to constraints. Another interesting feature of that approach was the selection of
minimal explanations and the possibility of generating alternative explanations different enough
from one another, using a distance measure. These ideas can now be easily explored in xclingo
thanks to its implementation as an ASP encoding.</p>
      <p>Other examples of general approaches are the formal theory of justifications [8], off-line
justifications [9], LABAS [10] (based on argumentation theory [11, 12]) or s(CASP) [13]. All of
them provide tree-based explanations for an atom to be (or not) in a given answer set. The
formal theory of justifications was also extended to deal with justification graphs and nested
justifications [14] and is actually a more general framework that allows covering other logic
programming semantics. In the case of s(CASP), it provides a similar text template system for
natural language explanations, together with several methods for simplifying the
final explanation trees. Additionally, s(CASP) proceeds in a top-down manner, building the
explanation as an ordered list of literals extracted from the goal-driven satisfaction of the query.
As opposed to our approach, these frameworks include the negative body of the rules as part
of the explanation of why an atom is true. Although this clearly gives more detail, we claim
that in practice this orientation may become rather inconvenient (at least if we do not look for
contrastive explanations). For instance, in our illustrating example, asking why brad is single
is answered by just replying that no effective marriage was found. The previous marriages
are irrelevant for that query, provided that the single status was defined as a default. This
may seem an insignificant issue, but when we deal with defaults in dynamic domains, such as
inertia, we may easily get too much irrelevant information. For instance, turning on a lamp at
situation 0 and asking why it is on at situation 100 may end up describing all the possible ways in
which the lamp was not turned off during those 100 steps, rather than just pointing out the only
executed action in the initial state. We claim that negative information should be considered for
contrastive queries such as “why is the lamp not off?” and, even in that case, a type of minimal
explanation (as happens in diagnosis systems) would be required.</p>
      <p>Other explanation approaches are more oriented to models as a whole, rather than to atoms
in a model. For instance, [15] uses a meta-programming technique to explain why a given model
is not an answer set of a given program. More recently, [16] considered the explanation of
ASP programs that have no answer sets in terms of the concept of abstraction [17]. This allows
spotting which parts of a given domain are actually relevant for the unsatisfiability of the
problem.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>We have provided a formal basis for version 2.0 of the ASP explanation tool xclingo, whose
operation mode has been changed to use ASP for computing the explanations. We introduced
the notion of an explanation graph for any model of a logic program and defined justified models
as those that have at least one explanation. We proved that all stable models are justified, but
that the opposite does not hold, at least for disjunctive programs. We also defined a pair of simple
operations on graphs that allow filtering their information. These definitions were then used to
prove the soundness and completeness of the xclingo 2.0 encoding. We also explained the new
features of the xclingo language and how they are related to their formal semantics.</p>
      <p>The main contribution of this paper is the formal definition of explanation graphs, which
can be used to implement other systems that use ASP for explanation generation, together with
the proof of correctness of the xclingo 2.0 encoding. A more detailed system description
of the tool is left for a forthcoming document. It is worth mentioning that xclingo 2.0 is
currently being used in a pair of medium/large-size applications that have motivated part of the
language and design changes. The first one is the tool TEXT2ALM [18], which is able to answer
queries for narratives such as the ones published in the bAbI tasks [19], and is being upgraded
to also explain the answers to such queries. The new system X_TEXT2ALM2 uses the xclingo 2.0
Python API to compute the explanations. The second application is the explanation of Decision
Tree classifiers using the crystal-tree Python package3, which calls xclingo 2.0 as a backend
and has been applied to the medical domain [20].</p>
      <p>As commented in the previous section, future work includes the explanation of
unsatisfiable programs (by constraint weakening) and the minimisation of, or even the specification of
preferences among, explanations. Another interesting topic we plan to explore is different
alternatives for the explanation of aggregates: right now, xclingo 2.0 uses all the literals involved
in an aggregate as a cause for its truth. Finally, we also plan to study formal comparisons to
some of the approaches in the literature mentioned in the related work.
2https://github.com/amdorsey12/X_Text2ALMClientRelease
3https://github.com/bramucas/crystal-tree</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>Partially funded by Xunta de Galicia and the European Union (grants CITIC ED431G
2019/01 and GPC ED431B 2022/33), by the Spanish Ministry of Science and Innovation, Spain,
MCIN/AEI/10.13039/501100011033 (grant PID2020-116201GB-I00), and by project LIANDA of the
Fundación BBVA scientific research projects programme, Spain.</p>
      <p>[15] M. Gebser, J. Pührer, T. Schaub, H. Tompits, A meta-programming technique for debugging
answer-set programs, in: D. Fox, C. P. Gomes (Eds.), Proc. of the 23rd AAAI Conf. on
Artificial Intelligence, Chicago, IL, USA, AAAI Press, 2008, p. 6.
[16] T. Eiter, Z. Saribatur, P. Schüller, Abstraction for zooming-in to unsolvability reasons of
grid-cell problems, in: Intl. Joint Conf. on Artificial Intelligence IJCAI 2019, Workshop on
Explainable Artificial Intelligence, 2019, p. 15.
[17] Z. G. Saribatur, T. Eiter, P. Schüller, Abstraction for non-ground answer set programs,
Artificial Intelligence 300 (2021) 103563.
[18] C. Olson, Y. Lierler, Information extraction tool Text2ALM: From narratives to action
language system descriptions, in: Intl. Conf. on Logic Programming, ICLP, 2019, p. 20.
[19] J. Weston, A. Bordes, S. Chopra, T. Mikolov, Towards AI-complete question answering: A
set of prerequisite toy tasks, in: Y. Bengio, Y. LeCun (Eds.), 4th Intl. Conf. on Learning
Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conf. Track Proceedings,
2016, p. 21.
[20] P. Cabalar, B. Muñiz, G. Pérez, F. Suárez, Explainable machine learning for liver
transplantation, in: 2nd Intl. Workshop on eXplainable Artificial Intelligence in Healthcare,
XAI-Healthcare 2022, Rochester, Minnesota, USA, 2022, p. 11.</p>
    </sec>
    <sec id="sec-7">
      <title>Appendix. Proofs</title>
      <p>Proof of Proposition 1.</p>
      <p>We prove first ⊇: suppose r ∈ P[x, I] and let us call r′ = Lb(r) : Head(r) ← Body⁺(r). Then,
by definition, I |= Body(r) and, in particular, I |= Body⁻(r), so we conclude r′ ∈ Pᴵ. To see
that r′ ∈ Pᴵ[x, I], note that I |= Body(r) implies I |= Body⁺(r) = Body(r′).</p>
      <p>For the ⊆ direction, take any r′ ∈ Pᴵ[x, I]. By definition of the reduct, we know that r′ is
a positive rule and that there exists some r ∈ P where Lb(r) = Lb(r′), Head(r) = Head(r′),
Body⁺(r) = Body⁺(r′) and I |= Body⁻(r). Consider any rule r satisfying that condition (we could
have more than one): we will prove that r ∈ P[x, I]. Since r′ ∈ Pᴵ[x, I], we get I |= Body(r′),
but this is equivalent to I |= Body⁺(r). However, as we had I |= Body⁻(r), we conclude
I |= Body(r) and so r is supported in I given x. □</p>
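The correspondence stated in Proposition 1 can be checked mechanically on a small instance. The following sketch is our own minimal encoding (labelled rules as tuples `(label, head, pos_body, neg_body)`, with `supported` collecting the labels of rules that support an atom in a model):

```python
# Minimal sketch of Proposition 1 (our own encoding, hypothetical helper names):
# a labelled rule is (label, head_atoms, positive_body, negative_body).

def reduct(program, model):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    the model; strip the negative body from the rest (labels are kept)."""
    return [(lb, hd, pos, frozenset())
            for (lb, hd, pos, neg) in program if not (neg & model)]

def supported(program, atom, model):
    """Labels of rules with `atom` in the head whose whole body holds in model."""
    return {lb for (lb, hd, pos, neg) in program
            if atom in hd and pos <= model and not (neg & model)}

P = [("r1", {"a"}, frozenset(), frozenset({"b"})),   # a :- not b.
     ("r2", {"b"}, frozenset({"c"}), frozenset()),   # b :- c.
     ("r3", {"a"}, frozenset({"b"}), frozenset())]   # a :- b.
I = {"a"}  # a stable model of P

# Proposition 1: the supported labels coincide in P and in its reduct
assert supported(P, "a", I) == supported(reduct(P, I), "a", I) == {"r1"}
```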
      <p>To prove Theorem 1, we start by proving a correspondence between explanations for any model
I of P and explanations under Pᴵ.</p>
      <sec id="sec-7-1">
        <title>Proposition 4. Let I be a model of program P. Then G is an explanation for I under P iff G is</title>
        <p>an explanation for I under Pᴵ.</p>
        <p>Proof. By Proposition 1, for any atom x ∈ I, the labels in P[x, I] and Pᴵ[x, I] coincide, so there
is no difference in the ways in which we can label x in explanations for P and for Pᴵ. On the
other hand, the rules in Pᴵ[x, I] are the positive parts of the rules in P[x, I], so the graphs we
can form are also the same. □
Corollary 1. I ∈ JM(P) iff I ∈ JM(Pᴵ).</p>
        <p>Proof of Theorem 1.</p>
        <p>Let I be a stable model of P. To prove that there is an explanation G for I under P, we can use
Proposition 4 and just prove that there is some explanation G for I under Pᴵ. We will build the
explanation with a non-deterministic algorithm where, in each step i, we denote the graph Gᵢ
as Gᵢ = ⟨Vᵢ, Eᵢ, λᵢ⟩ and represent the labelling λᵢ as a set of pairs of the form (ℓ : x) meaning
ℓ = λᵢ(x). The algorithm proceeds as follows:
1: V₀ ← ∅ ; E₀ ← ∅ ; λ₀ ← ∅
2: G₀ = ⟨V₀, E₀, λ₀⟩
3: i ← 0
4: while Vᵢ ̸|= Pᴵ do
5: Pick a rule r ∈ Pᴵ s.t. Vᵢ |= Body(r) ∧ ¬Head(r)
6: Pick an atom x ∈ I ∩ Head(r)
7: Vᵢ₊₁ ← Vᵢ ∪ {x}
8: λᵢ₊₁ ← λᵢ ∪ {(Lb(r) : x)}
9: Eᵢ₊₁ ← Eᵢ ∪ {(x, y) | y ∈ Body⁺(r)}
10: Gᵢ₊₁ ← ⟨Vᵢ₊₁, Eᵢ₊₁, λᵢ₊₁⟩
11: i ← i + 1
12: end while
The existence of a rule r ∈ Pᴵ in line 5 is guaranteed because the while condition asserts
Vᵢ ̸|= Pᴵ and so there must be some rule whose positive body is satisfied by Vᵢ but whose head is not
satisfied. We prove next that the existence of an atom x ∈ I ∩ Head(r) (line 6) is also guaranteed.
First, note that the while loop maintains the invariant Vᵢ ⊆ I, since V₀ = ∅ and Vᵢ only grows
with atoms x (line 7) that belong to I (line 6). Therefore, Vᵢ |= Body(r) implies I |= Body(r),
but since I |= Pᴵ, we also conclude I |= r and thus I |= Head(r), that is, I ∩ Head(r) ≠ ∅,
so we can always pick some atom x in that intersection. Now, note that the algorithm stops
because, in each iteration, Vᵢ grows with exactly one atom from I that was not included before,
since Vᵢ |= ¬Head(r), and so this process will stop provided that I is finite. The while loop stops
satisfying Vₙ |= Pᴵ for some value i = n. Moreover, Vₙ = I, because otherwise, as Vᵢ ⊆ I is
an invariant, we would conclude Vₙ ⊂ I and so I would not be a minimal model of Pᴵ, which
contradicts that I is a stable model of P. It remains to prove that the final Gₙ = ⟨Vₙ, Eₙ, λₙ⟩
is a correct explanation graph for I under Pᴵ. First, as we said, the atoms in I are the graph nodes,
Vₙ = I. Second, we can easily see that Gₙ is acyclic because each iteration adds a new node x
and links this node to previous atoms from Body⁺(r) ⊆ Vᵢ (remember Vᵢ |= Body(r)), so no loop
can be formed. Third, no rule label can be repeated, because we always pick a rule r
that is new, since it was not satisfied in Vᵢ but becomes satisfied in Vᵢ₊₁ (the rule head Head(r)
becomes true). Last, for every x ∈ Vₙ, it is not hard to see that the (positive) rule r ∈ Pᴵ such
that Lb(r) = λₙ(x) satisfies x ∈ Head(r) and Body⁺(r) = {y | (x, y) ∈ Eₙ} by the way in which we
picked r and inserted x in Vₙ, whereas I |= Body(r) because Vᵢ |= Body(r), r is a positive rule
and Vᵢ ⊆ I. □
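For a positive program, the constructive argument above can be run directly. The sketch below is our own rendering of the algorithm's lines 1-12 (with the non-deterministic picks resolved by a fixed iteration order, and constraints left out of the toy program):

```python
# Sketch of the explanation-building loop from the proof of Theorem 1,
# for a positive (reduct) program; rules are (label, head_atoms, positive_body).
def build_explanation(program, model):
    V, E, lam = set(), set(), {}          # nodes, edges, labelling
    while True:
        # find a rule whose positive body holds in V but whose head does not
        fired = [(lb, hd, pos) for (lb, hd, pos) in program
                 if pos <= V and not (hd & V)]
        if not fired:
            break
        lb, hd, pos = fired[0]            # non-deterministic pick, made fixed
        x = next(iter(hd & model))        # some head atom in the model
        V.add(x)
        lam[x] = lb
        E |= {(x, y) for y in pos}        # edges to the atoms that fired the rule
    return V, E, lam

PI = [("r1", {"a"}, frozenset()),         # a.
      ("r2", {"b"}, frozenset({"a"}))]    # b :- a.
I = {"a", "b"}                            # the stable (least) model

V, E, lam = build_explanation(PI, I)
assert V == I and lam == {"a": "r1", "b": "r2"} and E == {("b", "a")}
```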
Proof of Proposition 3.</p>
        <p>Since P is Horn and consistent (all constraints are satisfied), its unique stable model is the least
model I. By Theorem 1, I is also justified by some explanation G. It remains to prove that I
is the unique justified model. Suppose there is another model J ⊃ I (remember I is the least
model) justified by an explanation G′ and take some atom x ∈ J ∖ I. Then, by Proposition 2, the
proof for x induced by G′, G′(x), is a Modus Ponens derivation of x using the rules in P. Since
Modus Ponens is sound and the derivation starts from facts in the program, this means that x
must be satisfied by any model of P, so x ∈ I and we reach a contradiction. □
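The least model invoked in this proof is the fixpoint of the usual immediate-consequence operator for Horn programs; a minimal sketch (our own helper, with rules as `(head, body_atoms)` pairs):

```python
# Least model of a Horn program via iterating the immediate-consequence
# operator T_P until a fixpoint is reached.
def least_model(program):                  # rules are (head_atom, body_atoms)
    I = set()
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if body <= I and head not in I:
                I.add(head)
                changed = True
    return I

P = [("p", frozenset()),                   # p.
     ("q", frozenset({"p"})),              # q :- p.
     ("r", frozenset({"q", "s"}))]         # r :- q, s.   (s is underivable)

assert least_model(P) == {"p", "q"}        # r has no Modus Ponens derivation
```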
Proof of Theorem 2.</p>
        <p>Given Theorem 1, we must only prove that, for non-disjunctive programs, every justified model
is also stable. Let I be a justified model of P. By Proposition 4, we also know that I is a justified
model of Pᴵ. Pᴵ is a positive program and is non-disjunctive (since P was non-disjunctive)
and so Pᴵ is a Horn program. By Proposition 3, we know I is also the least model of Pᴵ, which
makes it a stable model of P. □
Proof of Theorem 3.</p>
        <p>
          We have to prove that M induces a valid explanation graph G. Let us denote At(M) =df {x ∈
At | h(x) ∈ M}. Since (11) is the only rule for h(x), we can apply completion to conclude
that h(x) ∈ M iff f(ℓ, x) ∈ M for some label ℓ. So, the set At(M) contains the set of atoms
for which M assigns some label: we will prove that this set coincides with I. We may observe
that I ⊆ At(M) because for any x ∈ I we have the constraint (12) forcing h(x) ∈ M. On the
other hand, At(M) ⊆ I because the only rules with h(x) in the head are (11) and these are
only defined for atoms x ∈ I. To sum up, in any answer set M of T(P, I), we derive exactly the
original atoms in I, At(M) = I, and so the graph induced by M has exactly one node per atom
in I.
        </p>
        <p>
          Constraint (13) guarantees that atoms f(ℓ, x) have a functional nature, that is, we never get
two different labels for a same atom x. This allows defining the labelling function λ(x) = ℓ
iff f(ℓ, x) ∈ M. It remains to prove that conditions (i)-(iii) in Definition 1 hold. Condition
(ii) requires that λ is injective, something guaranteed by (10). Condition (iii) requires that,
informally speaking, the labelling for each atom x corresponds to an activated, supported rule
for x. That is, if λ(x) = ℓ or, equivalently, f(ℓ, x) ∈ M, we should be able to build an edge (x, y)
for each atom y in the positive body of the rule labelled ℓ, so that these atoms y are among the
graph nodes. This is guaranteed by the fact that rule (9) is the only one with predicate f(ℓ, x) in
the head. So, if that ground atom is in M, it is because the atoms h(y) are also in M, i.e. y ∈ I,
for all atoms y in the positive body of the rule labelled with ℓ. Note also that (9) is such that
I |= Body(r), so the rule supports atom x under I, that is, r ∈ P[x, I]. Let us call E the set of
edges formed in this way. Condition (i) requires that the set E of edges forms an acyclic graph.
To prove this last condition, consider the reduct program T(P, I)ᴹ. The only difference of this
program with respect to T(P, I) is that rules (9) have now the form:
f(ℓ, x) ← h(y₁) ∧ · · · ∧ h(yₙ) (14)
for each rule r ∈ P like (1) with I |= Body(r) and x ∈ Head(r) ∩ I as before, but additionally
f(ℓ, x) ∈ M, so the rule is kept in the reduct. Yet, this last condition is irrelevant since f(ℓ, x) ∈ M
implies h(x) ∈ M, so x ∈ At(M) = I. Thus, we have exactly one rule (14) in T(P, I)ᴹ per each
choice (9) in T(P, I). Now, since M is an answer set of T(P, I), by monotonicity of constraints,
it satisfies (10), (12) and (13) and is an answer set of the rest of the program Π′ formed by rules
(14) and (11). This means that M is a minimal model of Π′. Suppose we have a cycle in G,
formed by the (labelled) nodes and edges (ℓ₁ : x₁) → · · · → (ℓₙ : xₙ) → (ℓ₁ : x₁). Take the
interpretation M′ = M ∖ {f(ℓ₁, x₁), . . . , f(ℓₙ, xₙ), h(x₁), . . . , h(xₙ)}. Since M is a minimal
model of Π′, there must be some rule (14) or (11) not satisfied by M′. Suppose M′ does not
satisfy some rule (11), so that h(x) ∉ M′ but f(ℓ, x) ∈ M′ ⊆ M. This means we had h(x) ∈ M,
since the rule was satisfied by M, so x is one of the removed atoms xᵢ belonging to the cycle.
But then f(ℓ, x) should have been removed, f(ℓ, x) ∉ M′, and we reach a contradiction. Suppose
instead that M′ does not satisfy some rule (14), that is, f(ℓ, x) ∉ M′ and {h(y₁), . . . , h(yₙ)} ⊆
M′ ⊆ M. Again, since the body holds in M, we get f(ℓ, x) ∈ M and so f(ℓ, x) is one of the
atoms in the cycle we removed from M′. Yet, since (ℓ : x) is in the cycle, there is some incoming
edge from some atom in the cycle and, due to the way in which atom labelling is done, this
means that this edge must come from some atom yᵢ with 1 ≤ i ≤ n in the positive body of the
rule whose label is ℓ. But, since this atom is in the cycle, this also means that h(yᵢ) ∉ M′ and
we reach a contradiction. □
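The properties checked in this proof can also be tested programmatically on the graph induced by an answer set. In the sketch below (our own code; `h` and `f` are placeholder names for the encoding's "atom holds" and "atom labelled" predicates, not necessarily those used by xclingo 2.0), we rebuild the induced graph and verify acyclicity and label injectivity:

```python
# Rebuild the explanation graph induced by an answer set M of the
# meta-encoding and check conditions (i) acyclicity and (ii) injective labels.
# ("h"/"f" are placeholder predicate names, not necessarily those of xclingo.)
def induced_graph(M, bodies):
    """M: answer-set atoms as tuples; bodies: positive body atoms per label."""
    nodes = {args[0] for (p, *args) in M if p == "h"}
    lam = {args[1]: args[0] for (p, *args) in M if p == "f"}   # atom -> label
    edges = {(x, l_body) for x, l in lam.items() for l_body in ()} or \
            {(x, y) for x, l in lam.items() for y in bodies[l]}
    return nodes, edges, lam

def acyclic(nodes, edges):
    # Kahn-style check: repeatedly remove nodes without outgoing edges
    remaining, es = set(nodes), set(edges)
    while remaining:
        sinks = {n for n in remaining if not any(x == n for (x, _) in es)}
        if not sinks:
            return False
        remaining -= sinks
        es = {(x, y) for (x, y) in es if y not in sinks}
    return True

bodies = {"r1": [], "r2": ["a"]}                     # positive bodies per label
M = [("h", "a"), ("h", "b"), ("f", "r1", "a"), ("f", "r2", "b")]
nodes, edges, lam = induced_graph(M, bodies)
assert nodes == {"a", "b"} and edges == {("b", "a")}
assert acyclic(nodes, edges)
assert len(set(lam.values())) == len(lam)            # labelling is injective
```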
Proof of Theorem 4.
        </p>
        <p>Take I an answer set of P and G = ⟨V, E, λ⟩ some explanation for I under P, and let us define</p>
        <p>
          M := {h(x) | x ∈ I} ∪ {f(ℓ, x) | λ(x) = ℓ}
We will prove that M is an answer set of T(P, I) or, in other words, that M is a minimal model
of T(P, I)ᴹ. First, we note that M satisfies T(P, I)ᴹ rule by rule. For the constraints, M
obviously satisfies (12) because it contains an atom h(x) for each x ∈ I. We can also see that
M satisfies (10) because graph G does not contain repeated labels, so we cannot have two
different atoms with the same label. The third constraint (13) is also satisfied by M because
atoms f(ℓ, x), f(ℓ′, x) are obtained from λ(x), which is a function that cannot assign two different
labels to a same atom x. Satisfaction of (11) is guaranteed since the head of this rule, h(x), is
always some atom x ∈ I and therefore h(x) ∈ M. For the remaining rule, (9), we have two
cases. If f(ℓ, x) ∉ M then the rule is not included in the reduct and so there is no need for it to be
satisfied. Otherwise, if f(ℓ, x) ∈ M, then the rule in the reduct corresponds to (14) and is trivially
satisfied by M because its only head atom holds in that interpretation. Finally, to prove that
M is a minimal model of T(P, I)ᴹ, take the derivation tree G(x) for each atom x ∈ I. Now,
construct a new tree Tₓ where we replace each atom y in G(x) by an additional derivation
from f(ℓ, y) to h(y) through rule (11). It is easy to see that Tₓ constitutes a Modus Ponens proof
for h(x) under the Horn program T(P, I)ᴹ and the same reasoning can be applied to the atoms
f(ℓ, x) ∈ M that are derived in the tree Tₓ for h(x). Therefore, all atoms in M must be included in
any model of T(P, I)ᴹ. □
        </p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explanation in artificial intelligence: Insights from the social sciences</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>267</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cabalar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fandinno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fink</surname>
          </string-name>
          ,
          <article-title>Causal graph justifications of logic programs</article-title>
          ,
          <source>Theory and Practice of Logic Programming</source>
          <volume>14</volume>
          (
          <year>2014</year>
          )
          <fpage>603</fpage>
          -
          <lpage>618</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Brewka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Eiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Truszczyński</surname>
          </string-name>
          ,
          <article-title>Answer set programming at a glance</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>54</volume>
          (
          <year>2011</year>
          )
          <fpage>92</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cabalar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fandinno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Muñiz</surname>
          </string-name>
          ,
          <article-title>A system for explainable answer set programming</article-title>
          , in: F. R. et al (Ed.),
          <source>Proc. 36th Intl. Conf. on Logic Programming (Technical Communications)</source>
          , UNICAL, Rende, Italy,
          <year>2020</year>
          , volume
          <volume>325</volume>
          <source>of EPTCS</source>
          ,
          <year>2020</year>
          , p.
          <fpage>13</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gelfond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Lifschitz</surname>
          </string-name>
          ,
          <article-title>The stable models semantics for logic programming</article-title>
          ,
          <source>in: Proc. of the 5th Intl. Conf. on Logic Programming</source>
          ,
          <year>1988</year>
          , pp.
          <fpage>1070</fpage>
          -
          <lpage>1080</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          ,
          <article-title>Reasoning with cause and effect</article-title>
          , in: T.
          <string-name>
            <surname>Dean</surname>
          </string-name>
          (Ed.),
          <source>Proc. of the 16th Intl. Joint Conf. on Artificial Intelligence, IJCAI 99</source>
          , Stockholm, Sweden, Morgan Kaufmann,
          <year>1999</year>
          , p.
          <fpage>13</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>E.</given-names>
            <surname>Erdem</surname>
          </string-name>
          , U. Oztok,
          <article-title>Generating explanations for complex biomedical queries</article-title>
          ,
          <source>Theory and Practice of Logic Programming</source>
          <volume>15</volume>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Denecker</surname>
          </string-name>
          , G. Brewka,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strass</surname>
          </string-name>
          ,
          <article-title>A formal theory of justifications</article-title>
          , in: F. Calimeri, G. Ianni, M. Truszczynski (Eds.),
          <source>Logic Programming and Nonmonotonic Reasoning - 13th Intl. Conf., LPNMR</source>
          <year>2015</year>
          , Lexington, KY, USA,
          <year>2015</year>
          . Proceedings, volume
          <volume>9345</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2015</year>
          , p.
          <fpage>15</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Pontelli</surname>
          </string-name>
          , T. C. Son,
          <article-title>Justifications for logic programs under answer set semantics</article-title>
          , in: S. Etalle, M. Truszczyński (Eds.),
          <source>Logic Programming</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2006</year>
          , pp.
          <fpage>196</fpage>
          -
          <lpage>210</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Schulz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          ,
          <article-title>Justifying answer sets using argumentation</article-title>
          ,
          <source>Theory and Practice of Logic Programming</source>
          <volume>16</volume>
          (
          <year>2016</year>
          )
          <fpage>59</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bondarenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          ,
          <article-title>An abstract, argumentation-theoretic approach to default reasoning</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>93</volume>
          (
          <year>1997</year>
          )
          <fpage>63</fpage>
          -
          <lpage>101</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Dung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          ,
          <article-title>Assumption-based argumentation</article-title>
          ,
          <source>Argumentation in Artificial Intelligence</source>
          (
          <year>2009</year>
          )
          <fpage>199</fpage>
          -
          <lpage>218</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Arias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Carro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          , G. Gupta,
          <article-title>Justifications for goal-directed constraint answer set programming</article-title>
          , in:
          <source>Intl. Conf. on Logic Programming, ICLP</source>
          ,
          <year>2020</year>
          , p.
          <fpage>14</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Marynissen</surname>
          </string-name>
          ,
          <source>Advances in Justification Theory, Ph.D. thesis</source>
          , Department of Computer Science, KU Leuven,
          <year>2022</year>
          . Marc Denecker and Bart Bogaerts (supervisors).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gebser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pührer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Schaub</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tompits</surname>
          </string-name>
          ,
          <article-title>A meta-programming technique for debugging answer-set programs</article-title>
          , in: D. Fox, C. P. Gomes (Eds.),
          <source>Proc. of the 23rd AAAI Conf. on Artificial Intelligence</source>
          , Chicago, IL, USA, AAAI Press,
          <year>2008</year>
          , p.
          <fpage>6</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>