<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Modeling Abduction over Acyclic First-Order Logic Horn Theories in Answer Set Programming: Preliminary Experiments⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Peter Schüller</string-name>
          <email>peter.schuller@marmara.edu.tr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Engineering Department, Faculty of Engineering, Marmara University</institution>
          ,
          <country country="TR">Turkey</country>
        </aff>
      </contrib-group>
      <fpage>76</fpage>
      <lpage>90</lpage>
      <abstract>
        <p>We describe encodings in Answer Set Programming for abductive reasoning in First Order Logic in acyclic Horn theories in the presence of value invention and in the absence of the Unique Names Assumption. We perform experiments using the Accel benchmark for abductive plan recognition in natural language processing. Results show that our encodings cannot compete with state-of-the-art realizations of First Order Logic abduction, mainly due to large groundings. We analyze reasons for this poor performance and outline potential future improvements.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>i.e., variables that are absent in the rule body, which makes the rule unsafe.</p>
      <p>In ASP rules must be safe, therefore we need to encode value invention.
Note that value invention makes FOL abduction undecidable in general. The
Accel knowledge base is an acyclic theory,1 therefore undecidability is not an
issue and we can realize value invention in ASP using Skolemization.</p>
      <p>Partial UNA has the effect that we must encode equivalence classes between
terms, and the effects of these equivalences, in ASP. Sort names make the problem
of finding the smallest set of abducibles that explains a goal nontrivial (without
sort names the smallest explanation is the one where all terms are equivalent).</p>
      <p>Both value invention and partial UNA make an ASP solution tricky; in
particular, the size of the instantiated program can easily become problematic.</p>
      <p>In this study we approach these challenges and analyze problems that become
apparent. We describe three encodings which use different rewritings into ASP of the
Accel knowledge base and of the observation that should be explained:
– BackCh models a backward-chaining algorithm and the notion of ‘factoring’
(to deal with equivalence) similar to the algorithm proposed by Ng [28],
– BwFw defines potential domains of predicates (as in Magic Sets [33]),
guesses all potential atoms as abducibles, infers truth of atoms using the original
axioms from the knowledge base, and checks if this explains the observation,
– Simpl realizes a simplified abduction with closed domain and UNA.
BackCh and BwFw realize existential quantification using uninterpreted
functions to build Skolem terms. Simpl serves as a performance baseline.</p>
      <p>Our experiments with Accel show that only small instances can be solved
within reasonable resource limits, and that memory consumption as well as proving
optimality of solutions are problematic issues. We analyze the main reasons for these
observations using solver statistics, and we outline possibilities for achieving better
results in future work.</p>
      <p>Section 2 gives preliminaries of abduction and ASP, Section 3 describes
rewritings and ASP encodings, Section 4 reports on experiments, and Section 5
concludes with related work and an outlook on future work.
</p>
    </sec>
    <sec id="sec-2">
      <title>Preliminaries</title>
      <p>We give a brief introduction to abduction in general and First Order Horn
abduction in particular; moreover we give brief preliminaries of ASP.</p>
      <p>Notation: variables start with capital letters and constants with lowercase letters.</p>
      <sec id="sec-2-1">
        <title>Abduction</title>
        <p>
          Abduction, originally described in [29], can be defined logically as follows. Given
a set T of background knowledge axioms and an observation O, find a set H
of hypothesis atoms such that T and H are consistent and together reproduce the
observation, i.e., T ∪ H ⊭ ⊥ and T ∪ H ⊧ O. In this work we formalize axioms and
observations in First Order Logic (FOL) as done by Ng [28]: the observation (in
the following called ‘goal’) O is an existentially quantified conjunction of atoms
∃V1, . . . , Vk ∶ o1(V1, . . . , Vk) ∧ ⋯ ∧ om(V1, . . . , Vk) (
          <xref ref-type="bibr" rid="ref1">1</xref>
          )
and an axiom in T is a universally quantified Horn clause of the form
c(V1, . . . , Vk) ⇐ p1(V1, . . . , Vk) ∧ ⋯ ∧ pr(V1, . . . , Vk) (
          <xref ref-type="bibr" rid="ref2">2</xref>
          )
or an integrity constraint (like (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) but without a head).
        </p>
        <p>Footnote 1: This should be true according to [28]; actually we had to deactivate one
cyclic axiom in the knowledge base to make this property true.</p>
        <p>The set H of hypotheses can contain any predicate from the theory T and
the goal O, hence existence of a solution is trivial. Explanations with minimum
cardinality are considered optimal. A subset of constants is declared as sort
names; sort names cannot be abduced as equivalent with other constants.</p>
        <p>Example 1 (Running Example). Consider the following text</p>
        <p>
          ‘Mary lost her father. She is depressed.’
which can be encoded as the following FOL goal of form (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ), to be explained by abduction:
name(m, mary ) ∧ lost (m, f ) ∧ fatherof (f, m) ∧ inst (s, female) ∧ is(s, depressed ) (
          <xref ref-type="bibr" rid="ref10">10</xref>
          )
Given the following set of axioms of form (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ):
inst (X, male) ⇐ fatherof (X, Y ) (
          <xref ref-type="bibr" rid="ref3">3</xref>
          )
inst (X, female) ⇐ name(X, mary ) (
          <xref ref-type="bibr" rid="ref4">4</xref>
          )
importantfor (Y, X) ⇐ fatherof (Y, X) (
          <xref ref-type="bibr" rid="ref5">5</xref>
          )
inst (X, person) ⇐ inst (X, male) (
          <xref ref-type="bibr" rid="ref6">6</xref>
          )
is(X, depressed ) ⇐ inst (X, pessimist ) (
          <xref ref-type="bibr" rid="ref7">7</xref>
          )
is(X, depressed ) ⇐ is(Y, dead ) ∧ importantfor (Y, X) (
          <xref ref-type="bibr" rid="ref8">8</xref>
          )
lost (X, Y ) ⇐ is(Y, dead ) ∧ importantfor (Y, X) ∧ inst (Y, person) (
          <xref ref-type="bibr" rid="ref9">9</xref>
          )
and sort names person, male, female, dead, and depressed,
we can use abduction to infer the following: (a) loss of a person here should be
interpreted as death, (b) ‘she’ refers to Mary, and (c) her depression is because
of her father’s death because her father was important for her.
        </p>
        <p>
          This is reached by abducing the following atoms and equivalence:
name(m, mary )   fatherof (f, m)   is(f, dead )   m = s (
          <xref ref-type="bibr" rid="ref11">11</xref>
          )
The first two abduced atoms directly explain goal atoms. We can explain the remaining
goal atoms by inferring truth of the following atoms:
inst (f, male) [inferred via (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ) using (
          <xref ref-type="bibr" rid="ref11">11</xref>
          )] (
          <xref ref-type="bibr" rid="ref12">12</xref>
          )
inst (m, female) [inferred via (
          <xref ref-type="bibr" rid="ref4">4</xref>
          ) using (
          <xref ref-type="bibr" rid="ref11">11</xref>
          )] (
          <xref ref-type="bibr" rid="ref13">13</xref>
          )
inst (s, female) [goal, factored from (
          <xref ref-type="bibr" rid="ref13">13</xref>
          ) using (
          <xref ref-type="bibr" rid="ref11">11</xref>
          )]
importantfor (f, m) [inferred via (
          <xref ref-type="bibr" rid="ref5">5</xref>
          ) using (
          <xref ref-type="bibr" rid="ref11">11</xref>
          )] (
          <xref ref-type="bibr" rid="ref14">14</xref>
          )
inst (f, person) [inferred via (
          <xref ref-type="bibr" rid="ref6">6</xref>
          ) using (
          <xref ref-type="bibr" rid="ref12">12</xref>
          )] (
          <xref ref-type="bibr" rid="ref15">15</xref>
          )
is(m, depressed ) [inferred via (
          <xref ref-type="bibr" rid="ref8">8</xref>
          ) using (
          <xref ref-type="bibr" rid="ref11">11</xref>
          ) and (
          <xref ref-type="bibr" rid="ref14">14</xref>
          )] (
          <xref ref-type="bibr" rid="ref16">16</xref>
          )
is(s, depressed ) [goal, factored from (
          <xref ref-type="bibr" rid="ref16">16</xref>
          ) using (
          <xref ref-type="bibr" rid="ref11">11</xref>
          )]
lost (m, f ) [goal, inferred via (
          <xref ref-type="bibr" rid="ref9">9</xref>
          ) using (
          <xref ref-type="bibr" rid="ref11">11</xref>
          ), (
          <xref ref-type="bibr" rid="ref14">14</xref>
          ), and (
          <xref ref-type="bibr" rid="ref15">15</xref>
          )]
Note that there are additional possible inferences but they are not necessary
to explain the goal atoms. Moreover note that other abductive explanations
(with a higher number of abduced atoms and therefore higher cost) would be to
abduce all goal atoms, or to abduce inst (m, pessimist ) and lost (m, f ) instead of
abducing is(f, dead ). ◻
        </p>
        <p>Complexity-wise we think that deciding optimality of a solution is harder than
for cardinality-minimal abduction on (non-ground) logic programs under
well-founded semantics [11, Sec. 4.1.3] because value invention provides a ground
term inventory of polynomial size in the number of constants in the goal. The
propositional case has been analyzed in [3, 10]. See also Section 5.1.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Answer Set Programming</title>
        <p>We assume familiarity with ASP [16, 25, 12, 14] and give only brief preliminaries
of those parts of the ASP-Core-2 standard [5] that we will use: logic programs with
(uninterpreted) function symbols, aggregates, choices, and weak constraints. A
logic program consists of rules of the form</p>
        <p>α ← β1, . . . , βn, not βn+1, . . . , not βm.
where α and βi are head and body atoms, respectively, and not denotes negation
as failure. We say that a rule is a fact if m = 0, and it is a constraint if there
is no head α. Atoms can contain constants, variables, and function terms, and
programs must obey safety restrictions (see [5]) to ensure a finite instantiation.
Anonymous variables of form ‘_’ are replaced by new variable symbols.</p>
        <p>An aggregate literal in the body of a rule accumulates truth values from a
set of atoms, e.g., 2 = #count{X ∶ p(X)} is true iff the extension of p (in the
answer set candidate) contains exactly 2 elements.</p>
        <p>Choice constructions can occur instead of rule heads; they generate a solution
space if the rule body is satisfied. E.g., 1 ≤ {p(a); p(b); p(c)} ≤ 2 in the rule head
generates all solution candidates where at least 1 and at most 2 atoms of the
set are true. The bounds can be omitted. The colon symbol ‘:’ can be used
to define conditions for including atoms in a choice, for example the choice
{p(X) ∶ q(X), not r(X)} encodes a guess over all p(X) such that q(X) is true
and r(X) is not true.</p>
        <p>A weak constraint of form
↜ p(X). [1@1, X]
denotes that an answer set I has cost equal to the size of the extension of p
in I. Answer sets of the lowest cost are considered optimal. Note that the syntax
[A@B, X] denotes that cost A is incurred on level B, and costs are summed over
all distinct X (i.e., X ensures each atom p(X) ∈ I is counted once).</p>
        <p>Semantics of an answer set program P are defined using its Herbrand
universe, ground instantiation, and the GL-reduct [16], which intuitively reduces the
program using assumptions in an answer set candidate.</p>
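        <p>As a small illustration (our own, not taken from the paper), the following clingo program combines a choice rule, a #count aggregate, and a weak constraint in concrete ASP-Core-2 syntax; ‘:~’ is the ASCII form of the ↜ symbol used above:

```prolog
% guess between 1 and 2 atoms of {p(a), p(b), p(c)}
1 { p(a); p(b); p(c) } 2.
% forbid candidates where exactly 2 atoms of p are true
:- 2 = #count { X : p(X) }.
% each true p(X) incurs cost 1 at level 1
:~ p(X). [1@1, X]
```

Optimal answer sets of this program contain exactly one p atom and have cost 1.</p>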
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Modeling Abduction in ASP</title>
      <p>We next describe two encodings for modeling abduction with partial UNA and
with value invention in ASP (Sections 3.1 and 3.2) and one simpler encoding
with UNA for all constants and without value invention (Section 3.3).</p>
      <p>All atoms in Accel have arity 2. For practical reasons we represent an atom
of the form pred(arg1, arg2) as c(pred, arg1, arg2).</p>
      <p>Goals in Accel have no variables, hence we can represent them as facts. For
our example we represent the goal as the following set of ASP facts.
goal (c(name, m, mary )). goal (c(lost , m, f )). goal (c(fatherof , f, m)).</p>
      <p>goal (c(inst , s, female)). goal (c(is, s, depressed )).</p>
      <p>Sorts are encoded as facts as follows.</p>
      <p>sortname(person). sortname(male). sortname(female).</p>
      <p>sortname(dead ). sortname(depressed ).</p>
      <p>Our first encoding, BackCh, represents the abduction algorithm by Ng [28] in the ASP
grounder: (i) backward-chain from the goal and invent new constants on the
way, (ii) factor atoms with other atoms in the goal or with atoms discovered in (i)
and abduce equivalences between constant terms, (iii) mark atoms that have been
neither inferred nor factored as abduced, and (iv) search for the solution with the
minimum number of abduced atoms while obeying integrity constraints. Note
that this is the most straightforward modeling idea, although we consider it the
least declarative one because it is oriented towards realizing an algorithm and
does not realize abduction as usually done in ASP (for that see the next section).</p>
      <p>First, everything that is a goal is defined to be true, and everything true
must either be abduced, inferred, or factored.
true(P ) ← goal (P ).
1 ≤ {abduce(P ); infer (P ); factor (P )} ≤ 1 ← true(P ). (
        <xref ref-type="bibr" rid="ref17">17</xref>
        )
      </p>
      <p>We prefer solutions with a minimum number of abduced atoms.
↜ abduce(P ). [1@1, P ]</p>
      <p>
        Backward chaining is realized by rewriting each axiom of form (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) into two
parts: a guess whether the head is inferred via this rule, and, if the head was
inferred, a requirement that each body atom must in turn be inferred, abduced, or factored.
      </p>
      <p>
        For example axiom (
        <xref ref-type="bibr" rid="ref8">8</xref>
        ) is translated into
0 ≤ {inferVia(r1, c(is, X, depressed ))} ≤ 1 ← infer (c(is, X, depressed )). (
        <xref ref-type="bibr" rid="ref19">19</xref>
        )
inferenceNeeds (c(is, X, depressed ), r1, c(importantfor , s1(X), X)) ←
inferVia(r1, c(is, X, depressed )). (
        <xref ref-type="bibr" rid="ref20">20</xref>
        )
inferenceNeeds (c(is, X, depressed ), r1, c(is, s1(X), dead )) ←
inferVia(r1, c(is, X, depressed )). (
        <xref ref-type="bibr" rid="ref21">21</xref>
        )
where r1 is a unique identifier for axiom (
        <xref ref-type="bibr" rid="ref8">8</xref>
        ); rule (
        <xref ref-type="bibr" rid="ref19">19</xref>
        ) guesses whether inferring
is(X, depressed ) happens via r1; and rules (
        <xref ref-type="bibr" rid="ref20">20</xref>
        ) and (
        <xref ref-type="bibr" rid="ref21">21</xref>
        ) define that this
inference requires justifying the atoms importantfor (s1(X), X) and is(s1(X), dead ).
Note that s1(⋅) is a Skolem function unique to axiom (
        <xref ref-type="bibr" rid="ref8">8</xref>
        ).
      </p>
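      <p>In concrete clingo syntax, this rewriting of the example axiom can be sketched as follows (a hedged transliteration of rules (19)–(21); names follow the paper’s c/3 representation, and s1/1 is the Skolem function):

```prolog
% (19) guess whether is(X,depressed) is inferred via axiom r1
0 { inferVia(r1, c(is, X, depressed)) } 1 :- infer(c(is, X, depressed)).
% (20),(21) if so, both body atoms of r1 must be justified
inferenceNeeds(c(is, X, depressed), r1, c(importantfor, s1(X), X)) :-
    inferVia(r1, c(is, X, depressed)).
inferenceNeeds(c(is, X, depressed), r1, c(is, s1(X), dead)) :-
    inferVia(r1, c(is, X, depressed)).
```
</p>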
      <p>
        For each atom that is inferred, we add a constraint that it must be inferred
via at least one rule. Moreover each atom required by such inference is defined
as true and hence must be justified according to (
        <xref ref-type="bibr" rid="ref17">17</xref>
        ).
      </p>
      <p>← infer (P ), 0 ≤ #count{A, P ∶ inferVia(A, P )} ≤ 0.</p>
      <p>true(Body) ← inferenceNeeds (Head , Axiom, Body).</p>
      <p>
        For factoring we need to handle the absence of the UNA, so we obtain the Herbrand
Universe (HU) and from it the ‘User Herbrand Universe’ (UHU) of constants
that are not sort names and can be equivalent to other constants.
hu(C) ← true(c(_, C, _)).
      </p>
      <p>hu(C) ← true(c(_, _, C)).</p>
      <p>uhu(C) ← hu(C), not sortname(C).</p>
      <p>
        We guess if a member of UHU is represented by another member of UHU.
0 ≤ {rep(X, Y ) ∶ uhu(X), X &lt; Y } ≤ 1 ← uhu(Y ).
(
        <xref ref-type="bibr" rid="ref22">22</xref>
        )
This guesses at most one representative for each member of UHU. Note that
operator ‘&lt;’ in (
        <xref ref-type="bibr" rid="ref22">22</xref>
        ) realizes lexicographic comparison of ground terms; this means
the representative of an equivalence class of UHU terms is always the smallest
term in that class (this reduces symmetries in the search space).
      </p>
      <p>The following rules ensure that no ground term is both represented and
representing another one using rep.</p>
      <p>
        representative(X) ← rep(X, Y ).
(
        <xref ref-type="bibr" rid="ref23">23</xref>
        )
represented(Y ) ← rep(X, Y ). (
        <xref ref-type="bibr" rid="ref24">24</xref>
        )
← representative(X), represented(X). (
        <xref ref-type="bibr" rid="ref25">25</xref>
        )
Rules (
        <xref ref-type="bibr" rid="ref22">22</xref>
        )–(
        <xref ref-type="bibr" rid="ref25">25</xref>
        ) generate all possible equivalence relations over UHU terms,
encoded as mappings to representative terms. If a term has neither a representative
nor is representing another term, it is a singleton equivalence class.
      </p>
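      <p>In concrete clingo syntax, the representative guess and the accompanying rules can be sketched as follows (a hedged transliteration of (22)–(25); uhu/1 as defined above, and ‘&lt;’ is the solver’s built-in order on ground terms):

```prolog
% (22) guess at most one smaller representative for each UHU element
0 { rep(X, Y) : uhu(X), X < Y } 1 :- uhu(Y).
% (23),(24) project rep onto its two roles
representative(X) :- rep(X, Y).
represented(Y)    :- rep(X, Y).
% (25) no term may both represent and be represented
:- representative(X), represented(X).
```
</p>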
      <p>Given rep, we define a mapping map for all HU terms to their representative,
where singletons and sort names are their own representatives.</p>
      <p>map(X, Y ) ← rep(X, Y ). (
        <xref ref-type="bibr" rid="ref26">26</xref>
        )</p>
      <p>map(X, X) ← hu(X), not represented (X). (
        <xref ref-type="bibr" rid="ref27">27</xref>
        )</p>
      <p>
        Now that we can generate and represent equivalence classes, we can perform
factoring. We experiment with three formulations: all of them define a predicate
factorOk (P ) if factoring is allowed, and they share the following three rules:
(
        <xref ref-type="bibr" rid="ref28">28</xref>
        ) requires factorOk to be true for factored atoms, while (
        <xref ref-type="bibr" rid="ref29">29</xref>
        ) and (
        <xref ref-type="bibr" rid="ref30">30</xref>
        ) define
the factoring base, which is the set of inferred or abduced atoms, i.e., the set of
atoms that other atoms can be factored with.
      </p>
      <p>
        ← factor (P ), not factorOk (P ). (
        <xref ref-type="bibr" rid="ref28">28</xref>
        )
factoringbase(A) ← infer (A). (
        <xref ref-type="bibr" rid="ref29">29</xref>
        )
factoringbase(A) ← abduce(A). (
        <xref ref-type="bibr" rid="ref30">30</xref>
        )
The three factoring variations are as follows.
● Factoring (a) uses the map predicate to match factored atoms to the factoring
base and is realized as follows.
factorOk (c(P, A1, B1)) ← factor (c(P, A1, B1)), factoringbase(c(P, A2, B2)),
map(A, A1), map(A, A2), map(B, B1), map(B, B2).
● Factoring (b) defines a canonical factoring base using the map predicate and
matches factored elements with that canonical base using the following rules.
cfactoringbase(c(P, A, B)) ← factoringbase(c(P, A1, B1)),
map(A, A1), map(B, B1).
factorOk (c(P, A1, B1)) ← cfactoringbase(c(P, A, B)),
factor (c(P, A1, B1)), map(A, A1), map(B, B1).
● Factoring (c) defines a relation for canonicalizing atoms using the map
predicate and defines factorOk using that relation.
noncanonical (P ) ← factor (P ).
noncanonical (Q) ← factoringbase(Q).
canonical (c(P, A, B), c(P, A1, B1)) ← noncanonical (c(P, A1, B1)),
map(A, A1), map(B, B1).
factorOk (P1) ← factor (P1), factoringbase(P2),
canonical (P, P1), canonical (P, P2).
      </p>
      <sec id="sec-3-1">
        <title>Skolemized Domain and Standard ASP Abduction (BwFw)</title>
        <p>This encoding follows an idea similar to Magic Sets [33]: starting from the goal
(in Magic Sets the query) we represent the domain of each argument of each
predicate. Subsequently we define domains of predicates in the body of an axiom
based on the domain of the predicate in heads of that axiom, deterministically
expanding the domain with Skolem terms whenever there are variables in the
axiom body that are not in the axiom head. Once we have the domains, we
can follow the usual ASP approach for abduction: (i) guess (using the domain)
which atoms are our abduction hypothesis, (ii) use axioms to infer what becomes
true due to the abduction hypothesis; and (iii) require that the observation is
reproduced. In BackCh, equivalence was used for factoring (which reduces the
number of abducibles required to derive the goal). In BwFw, we instantiate the
whole potential proof tree, so we do not need factoring between atoms in rules;
it is sufficient to factor abducibles with one another. Hence in this encoding we
handle equivalences by defining an atom to be true if its terms are equivalent
with the terms of an abduced atom.</p>
        <p>We next give the encoding in detail.</p>
        <p>Predicates in Accel have arity 2, so we represent the domain as dom(P, S, O),
i.e., predicate P has ⟨S, O⟩ as potential extension. We seed dom from the goal.</p>
        <p>dom(P, S, O) ← goal(c(P, S, O)).</p>
        <p>
          Domains are propagated by rewriting axioms of form (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) as indicated above. For
example (
          <xref ref-type="bibr" rid="ref8">8</xref>
          ) is translated into the following rules.
        </p>
        <p>dom(importantfor , s1(X), X) ← dom(is, X, depressed ). (
          <xref ref-type="bibr" rid="ref31">31</xref>
          )</p>
        <p>dom(is, s1(X), dead ) ← dom(is, X, depressed ). (
          <xref ref-type="bibr" rid="ref32">32</xref>
          )</p>
        <p>
          Additionally we rewrite each axiom into an equivalent rule in the ASP
representation; for example we rewrite (
          <xref ref-type="bibr" rid="ref8">8</xref>
          ) into the following rule.
        </p>
        <p>
          true(c(is, X, depressed )) ← true(c(importantfor , Y, X)),
true(c(is, Y, dead )). (
          <xref ref-type="bibr" rid="ref33">33</xref>
          )
        </p>
        <p>For guessing representatives we first define UHU for first and second
arguments of each predicate and we define HU over all arguments.</p>
        <p>dom1(P, X) ← dom(P, X, _).
dom2(P, Y ) ← dom(P, _, Y ).
uhu1(P, X) ← dom1(P, X), not sortname(X).
uhu2(P, Y ) ← dom2(P, Y ), not sortname(Y ).</p>
        <p>hu(X) ← dom1(_, X).</p>
        <p>hu(Y ) ← dom2(_, Y ).</p>
        <p>We guess representatives, i.e., equivalence classes, only among those UHU
elements that can potentially be unified because they are arguments of the same
predicate at the same argument position.2
0 ≤ {rep(X1, X2) ∶ uhu1(P, X2), X1 &lt; X2} ≤ 1 ← uhu1(P, X1).</p>
        <p>0 ≤ {rep(Y1, Y2) ∶ uhu2(P, Y2), Y1 &lt; Y2} ≤ 1 ← uhu2(P, Y1).</p>
        <p>
          We reuse (
          <xref ref-type="bibr" rid="ref23">23</xref>
          )–(
          <xref ref-type="bibr" rid="ref27">27</xref>
          ) from the BackCh encoding to ensure that rep encodes an
equivalence relation and to define map. (Note also that hu is used in (
          <xref ref-type="bibr" rid="ref27">27</xref>
          ).)
        </p>
        <p>We abduce atoms using the domain under the condition that the domain
elements are representatives of their equivalence class (symmetry breaking).</p>
        <p>{abduce(c(P, S, O)) ∶ dom(P, S, O), not represented (S), not represented (O)}.
We define that abduced atoms are true, and we use map to define that atoms
equivalent with abduced atoms are true.</p>
        <p>true(A) ← abduce(A).
true(c(P, A, B)) ← abduce(c(P, RA, RB)), map(RA, A), map(RB, B). (34)
This makes all consequences of the abduction hypothesis in the axiom theory
true while taking into account equivalences.</p>
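        <p>The generate-and-test core of BwFw can be sketched in concrete clingo syntax as follows (hedged; dom/3, represented/1, map/2, and goal/1 as defined above):

```prolog
% guess the abduction hypothesis over the domain, representatives only
{ abduce(c(P, S, O)) : dom(P, S, O), not represented(S), not represented(O) }.
% abduced atoms are true, also modulo term equivalence, cf. (34)
true(A) :- abduce(A).
true(c(P, A, B)) :- abduce(c(P, RA, RB)), map(RA, A), map(RB, B).
% the observation must be reproduced
:- goal(A), not true(A).
% minimize the number of abduced atoms
:~ abduce(A). [1@1, A]
```
</p>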
        <p>Finally we again require that the observation is reproduced, and we again
minimize the number of abduced atoms. (Note that term equivalence has been
taken care of in (34), hence we can ignore it for checking the goal.)
← goal (A), not true(A).</p>
        <p>The Simpl encoding propagates domains differently from the previous ones: it does
not introduce Skolem terms but always the same null term for variables in
axiom bodies that are not contained in the axiom head. Abduction substitutes
all possible constants for these null terms, hence this encoding does not realize
an open domain. Moreover, this encoding uses the UNA for all terms.</p>
        <p>This encoding is less expressive than the previous ones; it is incomplete but
sound: Simpl finds a subset of the solutions that the other encodings find, and for this
subset each solution has the same cost as in the other encodings. We use Simpl
primarily for comparing memory and time usage with other encodings and other
approaches. For readability we repeat some rules from previous encodings.</p>
        <p>As in BwFw, the domain is seeded from the goal.</p>
        <p>dom(P, S, O) ← goal (c(P, S, O)).</p>
        <p>
          The rewriting is different, however: instead of rewriting the example axiom (
          <xref ref-type="bibr" rid="ref8">8</xref>
          )
into (
          <xref ref-type="bibr" rid="ref31">31</xref>
          ) and (
          <xref ref-type="bibr" rid="ref32">32</xref>
          ) we create the following rules.
        </p>
        <p>dom(importantfor , null , X) ← dom(is, X, depressed ).</p>
        <p>dom(is, null , dead ) ← dom(is, X, depressed ).</p>
        <p>Footnote 2: A more naive encoding showed significantly higher memory requirements.</p>
        <p>[Table 1, displaced header: Encoding; Status: OPT, SAT, OOT, OOM (sums); Resources: Space (osp) in MB/avg, Time (otm/grd/slv) in s/avg.]</p>
        <p>Now we define a mapping that maps null to each element of the (implicit) UHU and
contains the identity mapping for HU.</p>
        <p>nullrepl (X, X) ← hu(X).
nullrepl (null , X) ← hu(X), not sortname(X). (37)</p>
        <p>We abduce atoms using this mapping and define that abduced atoms are true.</p>
        <p>
          {abduce(c(P, S′, O′))} ← dom(P, S, O), nullrepl (S, S′), nullrepl (O, O′).
true(c(P, A, B)) ← abduce(c(P, A, B)).
We propagate truth as in BwFw using the rewriting exemplified in (
          <xref ref-type="bibr" rid="ref33">33</xref>
          ).
        </p>
        <p>We again require goals to be true and minimize the number of abduced atoms.
← goal (A), not true(A).</p>
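        <p>Putting the pieces together, a toy Simpl-style program for a two-atom fragment of the running example might look as follows (our own illustrative instantiation, not taken verbatim from the paper; it covers only axiom (4)):

```prolog
% fragment of the goal and sort declarations
goal(c(name, m, mary)).  goal(c(inst, s, female)).
sortname(female).
% seed domains from the goal
dom(P, S, O) :- goal(c(P, S, O)).
% domain propagation for axiom (4): inst(X,female) <= name(X,mary)
dom(name, X, mary) :- dom(inst, X, female).
% Herbrand universe over all domain arguments
hu(X) :- dom(_, X, _).  hu(Y) :- dom(_, _, Y).
% null replacement: identity, plus null -> any non-sortname constant
nullrepl(X, X) :- hu(X).
nullrepl(null, X) :- hu(X), not sortname(X).
% guess hypothesis, make it true, apply axiom (4)
{ abduce(c(P, S2, O2)) } :- dom(P, S, O), nullrepl(S, S2), nullrepl(O, O2).
true(A) :- abduce(A).
true(c(inst, X, female)) :- true(c(name, X, mary)).
% goals must be reproduced; minimize abduced atoms
:- goal(A), not true(A).
:~ abduce(A). [1@1, A]
```
</p>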
        <p>Note that, although this encoding solves an easier problem than the other
encodings, we will see that it does not necessarily perform better.
</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Experimental Results</title>
      <p>To test the above encodings, we performed experiments using the Accel
benchmark [28] (available at ftp://ftp.cs.utexas.edu/pub/mooney/accel). This benchmark
consists of 50 instances (i.e., goals) with between 5
and 26 atoms in a goal (12.6 atoms on average), and a knowledge base consisting
of 190 first order Horn rules with bodies of size between 1 and 11 atoms (2.2 on
average). Our encodings and instances used in experiments are available online.4</p>
      <p>The benchmarks were performed on a computer with 48 GB RAM and two
Intel E5-2630 CPUs (total 16 cores) using Ubuntu 14.04 and Clingo 4.5.0 [15].
Each run was limited to 10 minutes and 5 GB RAM, HTCondor was used as a job
scheduling system, each run was repeated 10 times and no more than 8 jobs were
running simultaneously. For Clingo we used the setting --configuration=handy.</p>
      <p>Table 1 shows results of the experiments.</p>
      <p>Exceeding the memory limit of 5 GB turns out to be a significant problem in
all encodings, even for Simpl, where (only) 10 instances exceed the memory limit.
Memory was always exceeded during grounding, never during solving.</p>
      <p>Our encodings perform much worse than the state-of-the-art for solving Accel,
which requires less than 10 s per instance on average [19].</p>
      <p>Encoding Simpl is able to ground more instances than the other encodings,
however it only finds the optimal solution for 132 runs while BackCh (b) finds
optimal solutions for 238 runs, although Simpl encodes a sound approximation
of the problem encoded in BackCh and BwFw.</p>
      <p>Encoding BwFw, which is most similar to classical abduction encodings in
ASP, performs worst due to memory exhaustion: only one instance can be solved.
In BwFw the high space complexity comes from the full forward-instantiation
of the whole knowledge base after guessing representatives for the domain of
each abduced atom. Although BwFw first backward-chains the domain using
a deterministic set of rules, it seems to instantiate a much larger part of the
rules necessary for solving the problem, in particular in comparison with the
BackCh encoding. Observe that we split the representation for UHU which
might seem unintuitive as we do not do this in other encodings. However, when
we experimented with a more naive encoding for BwFw (replacing uhu1 and
uhu2 by a single uhu) this encoding could not even solve a single instance within
the 5 GB memory limit.</p>
      <p>The BackCh encodings perform best and show big differences among the
factoring variations. Factoring (b) is clearly superior to (a) and (c): it uses
significantly less memory (only 260 out-of-memory runs) and once grounding is
successful it solves many instances (238) within the timeout. The reason for this
can be found in the rules defining factorOk : factoring (a) contains 4 map atoms
joining 6 variables, hence its instantiation contains (in the worst case)
O(n^6) rules, where n is the size of HU; moreover (c) defines canonical from map
by defining O(n^3) distinct atoms and joins these atoms to define factorOk ,
again resulting in O(n^6) rules. Encoding (b) on the other hand defines O(n^3)
distinct atoms with predicate cfactoringbase and joins them to 2 instances of
map, resulting in O(n^5) rules defining factorOk .</p>
      <p>Single Instance Analysis. To analyze performance differences between
encodings, we looked at instance #8, which is the smallest instance and the single
instance where all encodings found an optimal solution.
(Footnote 4: https://bitbucket.org/knowlp/supplement-rcra2015-asp-fo-abduction/)</p>
      <p>Encodings BackCh (a) and BackCh (c) require 117 s and 244 s grounding
time, respectively, and both groundings contain ∼150 k ASP rules. Although
BackCh (b), BwFw, and Simpl stay below 10 s grounding time, BwFw
produces an amazing 803 k ASP rules. The reason is that (34) creates many
potentially true atoms, which instantiates many axioms. As a consequence BwFw
grounds comparatively fast but at the same time produces the biggest grounding.</p>
      <p>The smallest grounding is produced by Simpl, which also has fewest OOM
results. Yet Simpl finds the optimal solution for fewer instances than BackCh
(b). Solver statistics show that Simpl requires 463 k choice points in the solver
and creates 4 k conflict lemmas for instance 8, while BackCh (b) proves
optimality using 737 choice points and 150 lemmas. We conjecture that these choice
points originate in the naive substitution of null with UHU elements in (37),
which produces a big grounding and a big hypothesis space with many
symmetric solutions of equal cost. Such symmetries make it difficult to prove optimality.
Additional Observations. As suggested by a reviewer, we ran our encodings
with and without projection to abduced atoms. This did not change run times
significantly; however, it greatly reduced log file size.</p>
    </sec>
    <sec id="sec-5">
      <title>Discussion and Conclusion</title>
      <p>We presented encodings for realizing abduction with an open domain in the
absence of the UNA and performed experiments using the Accel benchmark.</p>
      <p>Experiments show clear differences between our encodings, and they all
perform much worse than the state-of-the-art for Accel [19]. Hence we consider this
work a negative result. Nevertheless we observed huge performance variation
between our encodings, and we think there is still potential for finding better
encodings (and solver parameters).</p>
      <sec id="sec-5-1">
        <title>Related Work</title>
        <p>The idea of abduction goes back to Peirce [29] and was later formalized in logic.</p>
        <p>Abductive Logic Programming (ALP) is an extension of logic programs with
abduction and integrity constraints. Kakas et al. [20] discuss ALP and its
applications; in particular, they relate Answer Set Programming and abduction. Fung
and Kowalski describe the IFF proof procedure [13], which is a FOL rewriting that is
sound and complete for performing abduction in a fragment of ALP with only
classical negation and specific safety constraints. Denecker and De Schreye [8] describe
SLDNFA-resolution, which is an extension of SLDNF resolution for performing
abduction in ALP in the presence of negation as failure. They also describe a
way to ‘avoid Skolemization by variable renaming’ which is a strategy that can
be mimicked in ASP by using HEX for value invention (instead of uninterpreted
function terms). Kakas et al. describe the A-System for evaluating ALP using
an algorithm that interleaves instantiation of variables and constraint solving
[21]. The CIFF framework [26] is conceptually similar to the A-System but it
allows a more relaxed use of negation. The SCIFF framework [1] relaxes some
restrictions of CIFF and provides facilities for modeling agent interactions.</p>
        <p>Implementations of ALP have in common that they are based on evaluation
strategies similar to Prolog [26]. CIFF is compared with ASP on the example
of n-queens, and the authors emphasize that CIFF has more power due to its
partial non-ground evaluation [26]. However, they also show a different CIFF
encoding that performs much worse than ASP. Unlike CIFF and earlier
ALP implementations, our ASP encodings are first instantiated and then solved.</p>
        <p>Weighted abduction (called cost-based abduction by some authors) [18] is
FOL abduction with costs that are propagated along axioms in a proof tree.
The purpose of costs is to find the most suitable abductive explanations for
interpretation of natural language. The solver Henry-n700 [19] realizes weighted
abduction by first instantiating an ILP instance with Java and then finding
optimal solutions for the ILP instance. Our approach is similar and therefore
our work is related more to weighted abduction than to ALP.</p>
        <p>Probabilistic abduction was realized in Markov Logic [30] in the Alchemy
system [23] although without an open domain [2, 32], which makes it similar to
the Simpl encoding. (Alchemy naively instantiates existential variables in rule
heads with all known ground terms, just as Simpl does.)</p>
      </sec>
      <sec id="sec-5-2">
        <title>Future Work</title>
        <p>As the Accel benchmark problems can be solved much faster (i.e., within a few
seconds on average) with a classical backward chaining implementation in Java
and an ILP solver [19], we conjecture that a solution in ASP has the potential
to be similarly fast. In the future we want to develop encodings that perform
nearer to the state-of-the-art. We believe that we can gain increased flexibility if
we model this problem in ASP as opposed to creating an ILP theory using Java.</p>
        <p>Our experiments show that small changes in encodings can lead to big
performance changes regarding grounding size, grounding speed, finding an initial
solution, and proving optimality. We are currently investigating alternative
encodings, in particular to replace guessing of representatives by a more direct
guessing of the equivalence relation.</p>
        <p>As large groundings are a major issue in our work, interesting future work
includes using dlv [24] which is known to be strong in grounding, or solvers that
perform lazy grounding such as idp [7] or OMiGA [6]. Another possible
improvement of grounding is to replace grounding-intensive constraints by sound
approximations with lower space complexity, and create missing constraints on demand
when they are violated. An equivalent strategy is used in Henry-n700, where
it is called Cutting Plane Inference and described as essential for the performance
of the ILP solution for weighted abduction [19]. In ASP such strategies are used
by state-of-the-art solvers for on-demand generation of loop-formula nogoods.
User-defined on-demand constraints can be realized using a custom propagator
in Clingo [15] or on-demand constraints in HEX [9, Sec. 2.3.1].</p>
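      <p>The on-demand constraint strategy can be outlined in a solver-agnostic way. The following sketch uses a toy hitting-set problem with a brute-force stand-in for the actual solver; all names are illustrative and not taken from our encodings:</p>

```python
from itertools import combinations

def solve(universe, constraints):
    """Brute-force stand-in for an ASP/ILP solver: return a smallest
    subset of `universe` that intersects every known constraint."""
    for r in range(len(universe) + 1):
        for cand in combinations(sorted(universe), r):
            s = set(cand)
            if all(s & c for c in constraints):
                return s

def lazy_solve(universe, all_constraints):
    """Cutting-plane style loop: start with no constraints, add only
    those violated by the current solution, and re-solve."""
    active = []
    while True:
        sol = solve(universe, active)
        violated = [c for c in all_constraints if not sol & c]
        if not violated:
            return sol
        active.extend(violated)

universe = {1, 2, 3, 4}
constraints = [{1, 2}, {2, 3}, {3, 4}]
solution = lazy_solve(universe, constraints)
assert len(solution) == 2  # a minimum hitting set, e.g. {2, 3}
```

Only constraints that are actually violated ever reach the solver, so the instantiation stays small; in the first iteration above the solver sees no constraints at all.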
        <p>Potentially beneficial for future improvements of this work are Open Answer
Set Programming (OASP) [17] which inherently supports open domains, and
Datalog± [4] which supports existential constructions in rule heads. Both
formalisms have been used for representing Description Logics in ASP and could
provide insights for future work about modeling FOL abduction in ASP.</p>
        <p>So far we minimize the number of abduced atoms to find optimal
solutions. In the future we want to consider more sophisticated preferences based on
Coherence [27] and on Relevance Theory [31].</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>We thank the anonymous reviewers for their constructive and useful feedback.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>M.</given-names>
            <surname>Alberti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Chesani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gavanelli</surname>
          </string-name>
          , E. Lamma,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mello</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Torroni</surname>
          </string-name>
          .
          <article-title>Verifiable agent interaction in abductive logic programming: The SCIFF framework</article-title>
          .
          <source>ACM Transactions on Computational Logic</source>
          ,
          <volume>9</volume>
          (
          <issue>4</issue>
          ): Article No. 29
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>J.</given-names>
            <surname>Blythe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Hobbs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Domingos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Kate</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Mooney</surname>
          </string-name>
          .
          <article-title>Implementing Weighted Abduction in Markov Logic</article-title>
          .
          <source>In International Conference on Computational Semantics (IWCS)</source>
          , pages
          <fpage>55</fpage>
          -
          <lpage>64</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>T.</given-names>
            <surname>Bylander</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Allemang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Tanner</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Josephson</surname>
          </string-name>
          .
          <article-title>The computational complexity of abduction</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>49</volume>
          (
          <issue>1-3</issue>
          ):
          <fpage>25</fpage>
          -
          <lpage>60</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>A.</given-names>
            <surname>Calì</surname>
          </string-name>
          , G. Gottlob, and
          <string-name>
            <given-names>T.</given-names>
            <surname>Lukasiewicz</surname>
          </string-name>
          .
          <article-title>Datalog±: A Unified Approach to Ontologies and Integrity Constraints</article-title>
          .
          <source>In International Conference on Database Theory</source>
          , pages
          <fpage>14</fpage>
          -
          <lpage>30</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>F.</given-names>
            <surname>Calimeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Faber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gebser</surname>
          </string-name>
          , G. Ianni,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kaminski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Krennwallner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Leone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ricca</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Schaub</surname>
          </string-name>
          .
          <article-title>ASP-Core-2 Input language format</article-title>
          .
          <source>Technical report</source>
          , ASP Standardization Working Group,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. M. Dao-Tran, T. Eiter,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Weidinger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Weinzierl</surname>
          </string-name>
          .
          <article-title>OMiGA: An Open Minded Grounding On-The-Fly Answer Set Solver</article-title>
          .
          <source>In Logics in Artificial Intelligence (JELIA)</source>
          , pages
          <fpage>480</fpage>
          -
          <lpage>483</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>B.</given-names>
            <surname>De Cat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Denecker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Stuckey</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Bruynooghe</surname>
          </string-name>
          .
          <article-title>Lazy model expansion: Interleaving grounding with search</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          ,
          <volume>52</volume>
          :
          <fpage>235</fpage>
          -
          <lpage>286</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>M.</given-names>
            <surname>Denecker</surname>
          </string-name>
          and D. de Schreye.
          <article-title>SLDNFA: An abductive procedure for abductive logic programs</article-title>
          .
          <source>The Journal of Logic Programming</source>
          ,
          <volume>34</volume>
          (
          <issue>2</issue>
          ):
          <fpage>111</fpage>
          -
          <lpage>167</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>T.</given-names>
            <surname>Eiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fink</surname>
          </string-name>
          , G. Ianni,
          <string-name>
            <given-names>T.</given-names>
            <surname>Krennwallner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Redl</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Schüller</surname>
          </string-name>
          .
          <article-title>A model building framework for answer set programming with external computations</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          ,
          <year>2015</year>
          . arXiv:1507.01451 [cs.AI], to appear.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>T.</given-names>
            <surname>Eiter</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Gottlob</surname>
          </string-name>
          .
          <article-title>The Complexity of Logic-Based Abduction</article-title>
          .
          <source>Journal of the ACM</source>
          ,
          <volume>42</volume>
          (
          <issue>1</issue>
          ):
          <fpage>3</fpage>
          -
          <lpage>42</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. T. Eiter, G. Gottlob, and
          <string-name>
            <given-names>N.</given-names>
            <surname>Leone</surname>
          </string-name>
          .
          <article-title>Abduction from logic programs: Semantics and complexity</article-title>
          .
          <source>Theoretical Computer Science</source>
          ,
          <volume>189</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>129</fpage>
          -
          <lpage>177</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12. T. Eiter, G. Ianni, and
          <string-name>
            <given-names>T.</given-names>
            <surname>Krennwallner</surname>
          </string-name>
          .
          <article-title>Answer Set Programming: A Primer</article-title>
          .
          <source>In Reasoning Web Summer School, Lecture Notes in Computer Science</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Fung</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          .
          <article-title>The IFF proof procedure for abductive logic programming</article-title>
          .
          <source>The Journal of Logic Programming</source>
          ,
          <volume>33</volume>
          (
          <issue>2</issue>
          ):
          <fpage>151</fpage>
          -
          <lpage>165</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>M.</given-names>
            <surname>Gebser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kaminski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kaufmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Schaub</surname>
          </string-name>
          . Answer Set Solving in Practice. Morgan &amp; Claypool,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>M.</given-names>
            <surname>Gebser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kaufmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kaminski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ostrowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Schaub</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Schneider</surname>
          </string-name>
          .
          <article-title>Potassco: The Potsdam Answer Set Solving Collection</article-title>
          .
          <source>AI Communications</source>
          ,
          <volume>24</volume>
          (
          <issue>2</issue>
          ):
          <fpage>107</fpage>
          -
          <lpage>124</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>M.</given-names>
            <surname>Gelfond</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Lifschitz</surname>
          </string-name>
          .
          <article-title>The Stable Model Semantics for Logic Programming</article-title>
          .
          <source>In International Conference and Symposium on Logic Programming (ICLP/SLP)</source>
          , pages
          <fpage>1070</fpage>
          -
          <lpage>1080</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>S.</given-names>
            <surname>Heymans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Van Nieuwenborgh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Vermeir</surname>
          </string-name>
          .
          <article-title>Open Answer Set Programming with Guarded Programs</article-title>
          .
          <source>ACM Transactions on Computational Logic</source>
          ,
          <volume>9</volume>
          (
          <issue>4</issue>
          ):
          <fpage>26</fpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Hobbs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stickel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Martin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Edwards</surname>
          </string-name>
          .
          <article-title>Interpretation as abduction</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>63</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>69</fpage>
          -
          <lpage>142</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>N.</given-names>
            <surname>Inoue</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Inui</surname>
          </string-name>
          .
          <article-title>ILP-based Inference for Cost-based Abduction on First-order Predicate Logic</article-title>
          .
          <source>Information and Media Technologies</source>
          ,
          <volume>9</volume>
          :
          <fpage>83</fpage>
          -
          <lpage>110</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          .
          <article-title>Abductive Logic Programming</article-title>
          .
          <source>Journal of Logic and Computation</source>
          ,
          <volume>2</volume>
          (
          <issue>6</issue>
          ):
          <fpage>719</fpage>
          -
          <lpage>770</lpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Van Nuffelen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Denecker</surname>
          </string-name>
          .
          <article-title>A-System: Problem solving through abduction</article-title>
          .
          <source>In International Joint Conference on Artificial Intelligence (IJCAI)</source>
          , pages
          <fpage>591</fpage>
          -
          <lpage>596</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Kate</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Mooney</surname>
          </string-name>
          .
          <article-title>Probabilistic Abduction using Markov Logic Networks</article-title>
          .
          <source>In IJCAI Workshop on Plan, Activity, and Intent Recognition</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>S.</given-names>
            <surname>Kok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sumner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Richardson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Singla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Poon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lowd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nath</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Domingos</surname>
          </string-name>
          .
          <article-title>The Alchemy system for statistical relational AI</article-title>
          .
          <source>Technical report</source>
          , Department of Computer Science and Engineering, University of Washington,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <given-names>N.</given-names>
            <surname>Leone</surname>
          </string-name>
          , G. Pfeifer,
          <string-name>
            <given-names>W.</given-names>
            <surname>Faber</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Eiter</surname>
          </string-name>
          .
          <article-title>The DLV system for knowledge representation and reasoning</article-title>
          .
          <source>ACM Transactions on Computational Logic</source>
          ,
          <volume>7</volume>
          (
          <issue>3</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>58</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>V.</given-names>
            <surname>Lifschitz</surname>
          </string-name>
          .
          <article-title>What Is Answer Set Programming?</article-title>
          <source>In AAAI Conference on Artificial Intelligence</source>
          , pages
          <fpage>1594</fpage>
          -
          <lpage>1597</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26. P. Mancarella,
          <string-name>
            <given-names>G.</given-names>
            <surname>Terreni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sadri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>U.</given-names>
            <surname>Endriss</surname>
          </string-name>
          .
          <article-title>The CIFF proof procedure for abductive logic programming with constraints: Theory, implementation and experiments</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          ,
          <volume>9</volume>
          (
          <issue>6</issue>
          ):
          <fpage>691</fpage>
          -
          <lpage>750</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <given-names>H.</given-names>
            <surname>Ng</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Mooney</surname>
          </string-name>
          .
          <article-title>Abductive Plan Recognition and Diagnosis: A Comprehensive Empirical Evaluation</article-title>
          .
          <source>In Knowledge Representation and Reasoning (KR)</source>
          , pages
          <fpage>499</fpage>
          -
          <lpage>508</lpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <given-names>H. T.</given-names>
            <surname>Ng</surname>
          </string-name>
          .
          <article-title>A General Abductive System with Applications to Plan Recognition and Diagnosis</article-title>
          .
          <source>PhD thesis</source>
          , University of Texas at Austin,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Peirce</surname>
          </string-name>
          .
          <article-title>Abduction and Induction</article-title>
          . In Philosophical Writings of Peirce, chapter
          <volume>11</volume>
          , pages
          <fpage>150</fpage>
          -
          <lpage>156</lpage>
          . Dover Publications,
          <year>1955</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <given-names>M.</given-names>
            <surname>Richardson</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Domingos</surname>
          </string-name>
          .
          <article-title>Markov logic networks</article-title>
          .
          <source>Machine Learning</source>
          ,
          <volume>62</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>107</fpage>
          -
          <lpage>136</lpage>
          , Jan.
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <given-names>P.</given-names>
            <surname>Schüller</surname>
          </string-name>
          .
          <article-title>Tackling Winograd Schemas by Formalizing Relevance Theory in Knowledge Graphs</article-title>
          .
          <source>In International Conference on Principles of Knowledge Representation and Reasoning (KR)</source>
          . AAAI Press,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <given-names>P.</given-names>
            <surname>Singla</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Mooney</surname>
          </string-name>
          .
          <article-title>Abductive Markov Logic for Plan Recognition</article-title>
          .
          <source>In AAAI Conference on Artificial Intelligence</source>
          , pages
          <fpage>1069</fpage>
          -
          <lpage>1075</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Ullman</surname>
          </string-name>
          .
          <article-title>Principles of Database and Knowledge Base Systems</article-title>
          . Computer Science Press,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>