<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On Minimal Corrections in ASP</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mikoláš Janota</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>João Marques-Silva</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IST/INESC-ID</institution>
          ,
          <addr-line>Lisbon</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>LaSIGE, Faculty of Science, University of Lisbon</institution>
          ,
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <fpage>56</fpage>
      <lpage>70</lpage>
      <abstract>
        <p>As a programming paradigm, answer set programming (ASP) brings about the usual issue of human error. Hence, it is desirable to provide automated techniques that help the programmer find the error. This paper studies the question of computing a subset-minimal correction of a contradictory ASP program. A contradictory ASP program is often undesirable, and we wish to provide an automated way of fixing it. We consider a minimal correction set of a contradictory program to be an irreducible set of rules whose removal makes the program consistent. In contrast to propositional logic, corrections of ASP programs behave non-monotonically. Nevertheless, we show that a variety of algorithms for correction set computation in propositional logic can be ported to ASP. An experimental evaluation was carried out, showing that having a portfolio of such algorithms is indeed of benefit.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Answer set programming (ASP) is a powerful paradigm for modeling and solving
combinatorial and optimization problems in artificial intelligence. An
inconsistent program is a program that does not have a solution. This might be
due to a bug in the program, and for such cases it is desirable to provide the
programmer with tools that help identify the issue.</p>
      <p>The primary motivation for this paper is a program’s input. In
theory, an input to an ASP program is just another ASP program that is
joined to the original one. In practice, however, the conceptual division between
a program and its input plays an important role in the program’s development.
Indeed, a program, as a set of rules, expresses the semantics of the problem
being solved. The input is a set of facts describing a particular instance of the
problem. Such input is typically large.</p>
      <p>This paper asks how to resolve situations where there is an error
in the given input. In particular, we consider scenarios where the input leads to
a contradiction in the program. Consider the following simple program.</p>
      <p>← not move(a). (1)
move(a) ← stone(b), not stone(c). (2)
stone(c). (3)</p>
      <p>By rule (1) the program requires move(a) to be true. To achieve that, however,
stone(b) must be true and stone(c) must be false (by rule (2)). The input is
specified as the fact (3). Altogether, the program is inconsistent since rule (2)
is not applicable. If we wish to modify the input so that the program becomes consistent, the
fact stone(c) must be removed and the fact stone(b) must be added. It is the
aim of this paper to compute such corrections to inconsistent inputs. Further,
we aim at corrections that are irreducible, i.e. corrections that do not perform unnecessary
changes. Note that this program could also be made consistent by modifying
rules (1) and (2). Similarly, one could add the fact move(a). This might be
undesirable, as these represent the rules of the modeled move (in a game, for
instance). In the end, however, it is the programmer who must decide what to
correct. The objective of the proposed tool-support is to pinpoint the source of
the inconsistency.</p>
      <p>We show that the problem of inconsistency corrections is closely related to a
problem of maximal consistency, which we define as identifying a subset-maximal
set of atoms that can be added to the program as facts while preserving
consistency. Maximal consistency lets us provide a solution to inconsistency correction
but it is also an interesting problem to study on its own.</p>
      <p>
        Maximal consistency is closely related to the concept of maximally satisfiable
sets (MSS) in propositional logic. For a formula in conjunctive normal form, an
MSS is a subset of the formula’s clauses that is satisfiable and adding any clause
to it makes it unsatisfiable. There is also a dual, minimal correction subset
(MCS), which is a complement of an MSS [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ].
      </p>
      <p>
        An important difference between propositional logic and ASP is that ASP
is not monotone. This means that algorithms for calculating MSSes cannot
be used for ASP directly. We show, however, that it is possible to port
existing MSS algorithms to ASP. This opens up significant potential for calculating
maximally consistent sets in ASP, since a variety of algorithms for MSS computation exist [
        <xref ref-type="bibr" rid="ref31 ref32 ref4">4,31,32</xref>
        ].
      </p>
      <p>The main contributions of this paper are the following. (1) It devises a
technique for adapting algorithms from MSS computation to maximal consistency
in ASP. (2) Using this technique, a handful of MSS algorithms are adapted to
ASP. (3) It is shown how maximal consistency can be used to calculate minimal
corrections to ASP inputs. (4) The proposed algorithms were implemented and
evaluated on a number of benchmarks.</p>
    </sec>
    <sec id="sec-2">
      <title>Background</title>
      <p>
        We assume the reader’s familiarity with standard ASP syntax and semantics,
e.g. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Here we briefly review the basic notation and concepts. In particular, a
(normal) logic program is a finite set of rules of the following form.
      </p>
      <p>
        a ← b1, . . . , bm, not cm+1, . . . , not cn,
where a, bi, cj are atoms. A literal is an atom or its default negation not a. A rule
is called a fact if it has an empty body; in such a case we do not write the symbol ←.
For a rule r, we write body(r) to denote the literals b1, . . . , bm, not cm+1, . . . , not cn
and we write head(r) to denote the atom a. We write B+(r) for b1, . . . , bm
and B−(r) for cm+1, . . . , cn. Further, we allow choice rules of the form n ≤
{a1, . . . , ak}. (this is a special case of weight constraint rules [
        <xref ref-type="bibr" rid="ref34">34, 43, 44</xref>
        ]).
      </p>
      <p>A program is called ground if it does not contain any variables. A ground
instance of a program P , denoted as grd(P ), is a ground program obtained by
substituting variables of P by all constants from its Herbrand universe.</p>
      <p>
        The semantics of ASP programs can be defined via a reduct [
        <xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>
        ]. Let I
be a set of ground atoms. The set I is a model of a program P if head(r) ∈ I
whenever B+(r) ⊆ I and B−(r) ∩ I = ∅ for every r ∈ grd(P ). The reduct of a
program P w.r.t. the set I is denoted as P I and defined as follows.
      </p>
      <p>P I = { head(r) ← B+(r) | r ∈ grd(P ), I ∩ B−(r) = ∅ }</p>
      <p>
        The set I is an answer set of P if I is a minimal model of P I . Intuitively,
this definition guarantees that an answer set contains only atoms that are
justified by the rules of the program (cf. [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]). A choice rule l ≤ {s1, . . . , sk}. in a
program additionally guarantees that any answer set contains at least l atoms
from s1, . . . , sk, and the rule provides a justification for any of those atoms.
For precise semantics see [
        <xref ref-type="bibr" rid="ref34">34, 44</xref>
        ]. A program is consistent if it has at least one
answer set, and inconsistent otherwise.
      </p>
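      <p>To make the definitions above concrete, the following is a minimal executable sketch (in Python, using our own ad-hoc rule representation rather than any ASP library) of the reduct and of the answer-set test, with minimality checked by brute force:

```python
from itertools import combinations

# A ground normal rule is a triple (head, B+, B-); facts have empty bodies.

def is_model(interp, rules):
    # interp is a model if the head of every applicable rule is in interp
    return all(h in interp
               for h, pos, neg in rules
               if pos <= interp and not (neg & interp))

def reduct(rules, interp):
    # Gelfond-Lifschitz reduct: drop rules blocked by interp, strip the "not" literals
    return [(h, pos, frozenset())
            for h, pos, neg in rules if not (neg & interp)]

def is_answer_set(interp, rules):
    # interp must be a minimal model of the reduct w.r.t. interp
    red = reduct(rules, interp)
    if not is_model(interp, red):
        return False
    atoms = sorted(interp)
    for k in range(len(atoms)):          # every proper subset must fail to be a model
        for sub in combinations(atoms, k):
            if is_model(frozenset(sub), red):
                return False
    return True

# Example: P = { a <- not b. }; {a} is its unique answer set
P = [("a", frozenset(), frozenset({"b"}))]
```

Note that {b} is a model of P but not an answer set: the reduct w.r.t. {b} is empty, so the atom b is not justified.</p>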
    </sec>
    <sec id="sec-3">
      <title>Maximal Consistency in ASP</title>
      <p>This section studies the problem of computing a maximal subset of given atoms
whose addition to the program, as facts, yields a consistent program.
Definition 1 (maximal consistent subset). Let P be a consistent ASP
program and S be a set of atoms. A set L ⊆ S is a maximal consistent subset of S
w.r.t. P if the program P ∪ {s. | s ∈ L} is consistent and for any L′ such that
L ⊊ L′ ⊆ S, the program P ∪ {s. | s ∈ L′} is inconsistent. If P is clear from the
context, we simply say that L is an (in)consistent set of atoms.</p>
      <p>The definition of maximal consistent subset is syntactically similar to
maximal models or MSSes in propositional logic. Semantically, however, there is
an important difference due to the nonmonotonicity of ASP. While in propositional
logic the satisfiability of φ ∧ x ∧ y guarantees the satisfiability of φ ∧ x, in ASP this is not
necessarily the case. Hence, algorithms for MSSes in propositional logic cannot
be readily used for our maximal consistent subsets. We will show, however, that
it is possible to port these algorithms to ASP.</p>
      <p>The following presentation uses these auxiliary functions. The
function choice(S) yields the choice rule 0 ≤ {s1, . . . , sk} over the set S = {s1, . . . , sk}.
The function atleast1({s1, . . . , sk}) yields the choice rule 1 ≤ {s1, . . . , sk}.
The function solve(P ) represents an ASP solver, which returns a pair (res, µ)
where res is true and µ is an answer set of P if one exists. If P is inconsistent,
res is false and µ is undefined.</p>
      <p>A brute-force approach to calculating a maximal consistent subset would be
to enumerate all subsets of S and test each for consistency. As
there are 2^|S| subsets of S, this approach is clearly infeasible.</p>
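      <p>For illustration, the brute-force approach can be sketched as follows; a hypothetical consistency predicate stands in for the ASP solver, and enumerating subsets largest-first makes the first consistent subset found subset-maximal:

```python
from itertools import combinations

def brute_force_mcs(consistent, S):
    # Enumerate all 2^|S| subsets, largest cardinality first; exponential,
    # shown only for contrast with the algorithms below.
    S = sorted(S)
    for k in range(len(S), -1, -1):
        for sub in combinations(S, k):
            if consistent(set(sub)):
                # No consistent subset of larger cardinality exists,
                # so this one is in particular subset-maximal.
                return set(sub)
    return set()
```

</p>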
      <p>Algorithm 1: Atleast-1 algorithm for maximal consistency.
1 L ← ∅
2 while true do
3   P′ ← P ∪ {s. | s ∈ L}
4   P′ ← P′ ∪ {atleast1(S \ L).}
5   (res, µ) ← solve(P′)
6   if ¬res then return L
7   L ← µ ∩ S</p>
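      <p>Algorithm 1 can be sketched in Python against a mocked solver. The function atleast1_solve emulates the single call solve(P ∪ {s. | s ∈ L} ∪ {atleast1(S \ L).}) by searching for any consistent strict extension of L; the consistency predicate is a hypothetical stand-in for the program P and the ASP solver:

```python
from itertools import combinations

def atleast1_solve(consistent, lower, S):
    # Mock of the ASP call: find any consistent L' with lower < L' <= S.
    rest = sorted(S - lower)
    for k in range(1, len(rest) + 1):
        for extra in combinations(rest, k):
            cand = lower | set(extra)
            if consistent(cand):
                return True, cand      # cand plays the role of mu ∩ S
    return False, None

def atleast1_mcs(consistent, S):
    # Algorithm 1: grow the lower bound L until no strict extension is consistent.
    L = set()
    while True:
        res, mu = atleast1_solve(consistent, L, S)
        if not res:
            return L                   # by Observation 2, L is maximal
        L = mu                         # L grows monotonically
```

</p>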
      <p>
        A better approach is to maximize the number of s ∈ S that are set to true.
This ensures finding a maximal consistent subset of maximum cardinality.
This can be done by iteratively calling an ASP solver while imposing increasing
cardinality on the set S. However, modern ASP solvers directly support
minimization constraints through which maximization can be specified by minimizing
negation of the atoms in S. This approach was taken elsewhere [
        <xref ref-type="bibr" rid="ref21">21, 45</xref>
        ].
      </p>
      <p>Finding a maximal consistent subset of maximum cardinality might be
computationally harder than finding some maximal consistent subset. Computing the
latter efficiently is the purpose of the rest of the section.</p>
      <p>We begin with an important observation: it is possible to check whether a
set of atoms L can be extended into a consistent set of atoms by a single call to
an ASP solver.</p>
      <p>Observation 1 Let P be an ASP program, L ⊆ S be sets of atoms, and P′
be defined as P′ = P ∪ {s. | s ∈ L} ∪ {choice(S \ L).}. There exists a consistent
set L′ s.t. L ⊆ L′ ⊆ S iff P′ has an answer set µ such that L′ = S ∩ µ.</p>
      <p>Observation 1 enables us to check whether a set of atoms can be extended into
a consistent set. This is done by letting the solver choose the additional atoms.
Similar reasoning enables us to devise a test for checking that a consistent set L
is already maximal. This is done by enforcing that L is extended by at least one
element.</p>
      <p>Observation 2 Let P be an ASP program and S be a set of atoms, and P′ be
defined as P′ = P ∪ {s. | s ∈ L} ∪ {atleast1(S \ L).}. A set L ⊆ S is a maximal
consistent subset of S iff P ∪ {s. | s ∈ L} is consistent and P′ is inconsistent.</p>
      <p>Combining Observation 1 and Observation 2 gives us Algorithm 1. The
algorithm maintains a lower bound L, which is initialized to the empty set. The set
L grows incrementally until a maximal set is found. In each iteration it is tested
whether L is already maximal, in accord with Observation 2. The invariant of the
loop is that the program P ∪ {s. | s ∈ L} is consistent. The set L grows
monotonically over the iterations because whenever a new answer set µ is obtained,
it must contain all the atoms from the previous L. Hence, the algorithm
terminates in at most |S| calls to an ASP solver. Note that the algorithm does not
necessarily need all of the |S| iterations, since more than one element might be
added to L in one iteration. Similar algorithms have been proposed in different
contexts, including computing the backbone of a propositional formula [48] and
computing a minimal correction set (MCS) of a propositional formula [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ].
      </p>
      <p>
        Algorithm 2: Iterative algorithm for maximal consistency.
1 L ← ∅
2 while S ≠ ∅ do
3   sf ← pick an arbitrary element from S
4   S ← S \ {sf}
5   L ← L ∪ {sf}
6   P′ ← P ∪ {choice(S).}
7   P′ ← P′ ∪ {s. | s ∈ L}
8   (res, µ) ← solve(P′)
9   if ¬res then L ← L \ {sf}
10  else L ← L ∪ (µ ∩ S)
11 return L
      </p>
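      <p>A Python sketch of Algorithm 2, again against a mocked solver; the consistency predicate is a hypothetical stand-in for the program P, and choice_solve emulates one call solve(P ∪ {choice(S).} ∪ {s. | s ∈ L}):

```python
from itertools import combinations

def choice_solve(consistent, lower, S):
    # Mock of the ASP call: find any consistent L' with lower <= L' <= lower ∪ S.
    rest = sorted(S - lower)
    for k in range(len(rest), -1, -1):     # prefer larger extensions
        for extra in combinations(rest, k):
            cand = lower | set(extra)
            if consistent(cand):
                return True, cand
    return False, None

def unit_mcs(consistent, S):
    # Algorithm 2: test the elements of S one by one.
    S, L = set(S), set()
    while S:
        sf = min(S)                        # arbitrary pick (min for determinism)
        S.remove(sf)
        L.add(sf)
        res, mu = choice_solve(consistent, L, S)
        if not res:
            L.discard(sf)                  # sf is never inspected again
        else:
            L |= (mu & S)                  # take extra atoms the solver chose for free
    return L
```

</p>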
      <p>The second algorithm we propose also maintains a lower bound on a maximal
consistent set and tests, one by one, the elements to be added. The pseudo-code
can be found in Algorithm 2. In each iteration it tests whether an element sf can
be added to the current lower bound L. If it is possible, the algorithm continues
with a larger L. If it is not, then sf is removed from L and never inspected again.</p>
      <p>
        In contrast to Algorithm 1, here it is not immediately clear that the returned
set is indeed maximal. Indeed, if it was impossible to add some element sf to
an earlier L1, how do we know that it is still impossible for the final Lf?
This follows from Observation 1 and from the fact that L grows monotonically
throughout the algorithm. More precisely, it holds that L1 ⊆ Lf and our test
checks that sf cannot be added to any superset of L1, i.e. if it was not possible to
extend L1 with sf, then it is also impossible to extend Lf with it. Like Algorithm 1,
Algorithm 2 performs at most |S| calls to an ASP solver. Similar algorithms
have been proposed in a number of contexts, including computing an MCS [
        <xref ref-type="bibr" rid="ref36 ref4">4,36</xref>
        ].
      </p>
      <p>
        The third algorithm we consider is inspired by the progression algorithm [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ].
The algorithm is shown in Algorithm 3. It tries to add more than a single atom
to the lower bound in one iteration—atoms are added in chunks. This is done
progressively: in the very first iteration the chunk contains a single atom. The
chunk size is doubled each time the current chunk is added successfully. If it is
not possible to extend the current lower bound L with the current chunk, the
size is reset to 1. Whenever a chunk of size 1 cannot be added to L, the
atom comprising the chunk is no longer inspected. This guarantees termination.
Note that the algorithm aims at constructing L as quickly as possible by adding
larger chunks of atoms at a time. The use of chunks finds other applications,
including redundancy removal [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
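      <p>Since Algorithm 3's pseudocode did not survive in this version, the following Python sketch reconstructs the progression scheme from the description above; the chunk-handling details are our reading of that description and may differ from the original, and a hypothetical consistency predicate again stands in for the program and solver:

```python
from itertools import combinations

def choice_solve(consistent, lower, universe):
    # Mock of one ASP call: find any consistent L' with lower <= L' <= universe.
    rest = sorted(universe - lower)
    for k in range(len(rest), -1, -1):
        for extra in combinations(rest, k):
            cand = lower | set(extra)
            if consistent(cand):
                return True, cand
    return False, None

def progression_mcs(consistent, S):
    # Progression: add atoms to L in chunks whose size doubles on success.
    todo = sorted(S)
    L, size = set(), 1
    while todo:
        chunk = set(todo[:size])
        res, mu = choice_solve(consistent, L | chunk, L | set(todo))
        if res:
            L = mu                     # mu is consistent and contains L ∪ chunk
            todo = [t for t in todo if t not in L]
            size *= 2                  # success: double the chunk size
        elif size == 1:
            todo = todo[1:]            # a failing singleton is never inspected again
        else:
            size = 1                   # failure: reset the chunk size
    return L
```

</p>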
      <p>Algorithm 3: Progression-based algorithm for maximal consistency.</p>
      <p>In this section we show how maximal consistency is useful for calculating
corrections to an inconsistent program. The objective is to calculate an
irreducible correction to a given inconsistent program so that it becomes consistent.</p>
      <p>
        The concept of (minimal) correction sets commonly appears in propositional
logic [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Correction sets, however, are meant to be removed from the formula
in order to make it consistent. In ASP, by contrast, corrections might consist of
removals or additions. Indeed, unlike in propositional logic, an ASP program may
become consistent after a fact (or rule) is added.
      </p>
      <p>Enabling corrections by addition brings about a substantial difficulty, as the
universe of rules that can potentially be added to the program can easily be
unwieldy or even infinite. Hence, we assume that the user (the programmer)
provides us with a set of rules R that can be removed from the program and a
set of rules A that can be added to the program.</p>
      <p>Definition 2 ((A, R)-correction). Let A and R be sets of rules and P be an
inconsistent logic program. An (A, R)-correction of P is a pair (Mr, Ma) such
that Mr ⊆ R and Ma ⊆ A and the program (P \ Mr) ∪ Ma is consistent.</p>
      <p>An (A, R)-correction (Mr, Ma) is minimal if for any (A, R)-correction (Mr′, Ma′)
such that Mr′ ⊆ Mr and Ma′ ⊆ Ma, it holds that Ma′ = Ma and Mr′ = Mr.</p>
      <p>We refer to the problem of calculating a minimal correction as MinCorrect;
it accepts as input sets A, R, and a program P and outputs a pair (Mr, Ma) in
accord with Definition 2.</p>
      <p>We show how to translate MinCorrect to maximal consistent subset
calculation. Construct a program P′ from P as follows (a similar construction appears elsewhere [45]).
1. Introduce fresh atoms srr for each r ∈ R and sra for each r ∈ A.
2. Replace each rule r ∈ R with head(r) ← srr, body(r).
3. Replace each rule r ∈ A with head(r) ← not sra, body(r).</p>
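      <p>The translation can be sketched as plain text rewriting of rules into clingo-like syntax; the selector predicate names selr/sela are illustrative, not from the paper:

```python
def relax(rules_R, rules_A):
    # Build the relaxed program P' and the selector set S.
    # Rules are (head, body) strings; a fact has an empty body.
    out, S = [], []
    for i, (head, body) in enumerate(rules_R):
        sel = f"selr({i})"                 # fresh atom sr_r for a removable rule
        S.append(sel)
        out.append(f"{head} :- {sel}{', ' + body if body else ''}.")
    for i, (head, body) in enumerate(rules_A):
        sel = f"sela({i})"                 # fresh atom sr_a for an addable rule
        S.append(sel)
        out.append(f"{head} :- not {sel}{', ' + body if body else ''}.")
    return out, S

# Introductory example: the fact stone(c) may be removed, stone(b) may be added.
prog, S = relax([("stone(c)", "")], [("stone(b)", "")])
# prog == ["stone(c) :- selr(0).", "stone(b) :- not sela(0)."]
```

A maximal consistent subset L of S w.r.t. the relaxed program then yields the correction: selectors absent from L mark the rules to remove (from R) or to add (from A).</p>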
      <p>Let S = {srr | r ∈ R} ∪ {sra | r ∈ A}. Then any maximal consistent subset L
of S w.r.t. P′ gives us a solution to MinCorrect by setting Mr = {r | srr ∉ L}
and Ma = {r | sra ∉ L}. It is easy to see why that is the case. From the definition
of maximal consistent subset, there must be an answer set µ of the program
P′ ∪ {s. | s ∈ L}. If srr ∈ µ for r ∈ R, then µ also satisfies the original rule r,
hence it is not necessary to remove r to achieve consistency. Similarly, if sra ∈ µ,
then the body of head(r) ← not sra, body(r) is false, i.e. the rule is ineffective, and
therefore it is not necessary to add the rule r to the original program to achieve
consistency. Conversely, adding any further srr or sra to µ leads to inconsistency,
by the definition of maximal consistent subset. Hence, Mr and Ma defined
as above are irreducible.</p>
      <p>Minimal Corrections for Program Inputs
Here we come back to the problem used to motivate the paper in the
introduction: calculating corrections to a program’s data so that the
program becomes consistent. Hence, we assume that there is a program Pe,
representing the encoding of the problem, and a program Pd, representing data
to the encoding, consisting only of facts. Assuming that Pe ∪ Pd is inconsistent,
we wish to identify a correction to Pd that would make it consistent.</p>
      <p>For this, we use the concept of minimal corrections as described above.
What is needed is to specify the sets R and A. To specify the set R, the user
just needs to identify facts in the given input that can be removed. This can,
for instance, be done by giving predicate names. Specifying the set A is more
challenging because it may comprise facts that do not appear in the program.
Hence, we let the user give expressions E of the form p(A1) : t(A2), where p
and t are predicate symbols and A1, A2 appropriate arguments. The set A is
obtained by an additional call to an ASP solver. This call constructs a program
P′ just as above and adds the rule atleast1({srr | r ∈ R} ∪ {E1, . . . , Eu}). If
this call is successful, the obtained answer set contains facts of the form p(A1)
that constitute the set A.</p>
    </sec>
    <sec id="sec-4">
      <title>Experimental Evaluation</title>
      <p>
        The proposed algorithms were implemented using the state-of-the-art ASP solver
clingo [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], version 4.2.1. Algorithms 1–3 were implemented as presented, i.e. the
ASP solver is invoked repeatedly from a Python script. Computing a
cardinality-maximal consistent subset was done by a single call to clingo, using the
minimization rule syntax (:∼). For readability we refer to the individual approaches by
single letters: Algorithm 1 by a (atleast-1 constraint); Algorithm 2 by u
(unit addition); Algorithm 3 by p (progression); the maximization approach
is denoted by x. The evaluation was carried out on machines with Intel Xeon
5160 3GHz CPUs and 4GB of memory. The time limit was set to 1000 seconds and the
memory limit to 2GB. The experimental evaluation considers several problems
from the 2013 ASP Competition (https://www.mat.unical.it/aspcomp2013/FrontPage). The following classes were considered.
      </p>
      <p>Solitaire. In the game a single player moves and removes stones from the
board according to the rule that a stone can be moved by two squares if that move
jumps over another stone; the stone being jumped over is removed (similarly to
Checkers). The problem to be solved in the context of this game is to perform
T steps given T and an initial configuration. For instance, the problem does not
have a solution if the board contains only one stone and at least one move is
required to be made.</p>
      <p>The size of the board is 7x7 with 2x2-size corners removed, thus comprising
33 squares. The initial configuration of the board is given by the predicates
empty(L) and full(L), where L is a square of the board. The set R was specified
as containing all the input’s facts using empty and full. The set A was specified
by the expressions empty(L) : location(L) and full(L) : location(L).</p>
      <p>To generate unsatisfiable instances of the game, initial board configurations
were generated randomly and the parameter T (number of steps) was fixed. We
considered T = 12 and T = 16; 100 instances were generated for each T and
only the inconsistent ones were considered. This process was repeated for the other
benchmarks.</p>
      <p>Knight tour with holes. The input is a chessboard of size N ×N with H holes.
Following the standard chess rules, a knight chess piece must visit all non-hole
positions of the board exactly once and return to the initial position (which may
be chosen). The objective of correction was to remove as few holes as possible
so that the tour is possible. Random hole positions were generated for fixed N
and H. Instances with N = 8, H = 4 and N = 8, H = 10 were considered.</p>
      <p>Graceful graphs. Given a graph (V, E) the task is to determine whether it is
possible to label its vertices with distinct integers in the range 0..|E| so that each
edge is labeled with the absolute difference between the labels of its vertices and
so that all edge labels are distinct (such graph is called graceful ). The correction,
for a given graph that is not graceful, tries to find a subgraph that maintains
as many edges as possible (a single-edge graph is graceful). The considered instances
were generated with |V| = 10, |E| = 50 and with |V| = 30, |E| = 20.</p>
      <p>Permutation Pattern Matching. The problem’s input consists of two permutations
T and P. The task is to determine whether T contains a subsequence that is
order-isomorphic to P. For the purpose of correction we consider T and P such that
P is not an order-isomorphic subsequence of T, with the objective of adding and
removing elements of P so that P satisfies the condition. The considered instances
were generated with |T| = 16, |P| = 10 and with |T| = 20, |P| = 15.</p>
      <p>
        Table 1. Number of instances solved by each approach; the number of instances per family is given in parentheses.
Family                          a    p    u    x   VBS
knight [8,10] (95)             74   75   78   60   80
knight [8,4] (51)               7   13   13    7   14
patterns [16,10] (100)        100  100  100  100  100
patterns [20,15] (100)        100  100  100  100  100
solitaire [12] (18)            18   18   18   17   18
solitaire [16] (16)            12    9   11    4   13
graceful graphs [10,50] (100)  57   75   63   62   83
graceful graphs [30,20] (57)   56   57   57   55   57
total (537)                   424  447  440  405  465
      </p>
      <p>
        Table 1 shows the number of instances solved by each of the approaches for
the different benchmark families. The last column shows the virtual best solver
(VBS), which is calculated by picking the best solver for each instance.
While the progression-based algorithm (p) is clearly in the lead, it is not a winner
for all the instances (or even classes of instances). Indeed, the virtual best solver
enables us to solve almost 20 more instances than the progression-based
approach. The strength of progression shows on benchmarks with larger target
sets. Such is the case for the class graceful graphs [10,50], where the target set
is the set of edges E with |E| = 50. On the other hand, the approaches a and u
do not aim at minimizing the number of calls to the ASP solver but represent
calls that are likely to be easier. In u, the solver only must make sure that
one fact from the target set is set to true. In a, the solver can in fact choose
which fact should be set to true. These two algorithms are the best ones for
solitaire [16]. Fig. 1 shows a cactus plot across all the considered families (the 300
easiest instances were cut off for better readability). This plot confirms the data in
Table 1. Progression provides the most robust solution but, at the same time,
there is a significant gap between progression and the virtual best solver.
      </p>
      <p>Out of the considered algorithms, cardinality maximization (x) performs the
worst. The scatter plot in Fig. 2 compares progression to cardinality
maximization. There are some instances where maximization performs well, but overall
progression dominates maximization, by orders of magnitude in a number of
cases.</p>
      <p>Fig. 1. Cactus plot of solved instances for the approaches a, p, u, x and the VBS across all considered families.</p>
      <p>While Algorithms 1–3 give us an advantage over the cardinality-maximization
approach, the data so far does not show how the minimal corrections differ in size from
the minimum-cardinality corrections. This is shown in Table 2. The table
shows a distribution of how much the value |Ma| + |Mr| differs from the value
obtained by the approach x. For instance, row p, column 3 shows that in 12
cases the value differs by 3 from the minimum one. Naturally, these data only
come from instances where both x and the given approach finished with success. The
distributions have a characteristic “heavy tail”. In the majority of cases, the
actual minimum is obtained (over 300 for all the approaches). Outliers exist but
they are small in number.</p>
    </sec>
    <sec id="sec-5">
      <title>Related Work</title>
      <p>
        The proposed approach is closely related to the work on debugging of answer set
programs, which has a long history [
        <xref ref-type="bibr" rid="ref1 ref10 ref11 ref12 ref13 ref15 ref2 ref21 ref25 ref30 ref37 ref39 ref7 ref8">1, 2, 7, 8, 10–13, 15, 21, 25, 30, 37, 39, 42, 45–47</xref>
        ].
Most closely related is the approach of Brain et al., which may also produce new facts in a
correction [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Such facts, however, may only be heads of existing rules. Hence,
our approach to data debugging by corrections gives the user a tighter control
over the given set of inputs.
      </p>
      <p>
        Nevertheless, the existing work on ASP debugging (in particular [
        <xref ref-type="bibr" rid="ref21 ref8">8, 21, 45</xref>
        ]),
also needs to deal with removing redundancy. For this, the mentioned works use
cardinality-maximal/minimal sets. Hence, our approach to maximal consistent
subsets could be applied instead. Our experimental evaluation suggests that
this could improve the efficiency of those approaches.
      </p>
      <p>
        Other approaches to maximality in ASP exist. In particular, Gebser et al. [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]
use meta-modeling techniques to optimize given criteria and Nieves and Osorio
propose calculation of maximal models using ASP [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ]. Both approaches,
however, hinge on disjunctive logic programming (DLP). Hence, these approaches
require solving a problem in the second level of the polynomial hierarchy.
Intuitively, this means worst-case exponential calls to an NP oracle. In contrast, all
our approaches require polynomially many calls to an NP oracle. However, we do not
claim that theoretically there does not exist a non-disjunctive approach to the
problem.
      </p>
      <p>
        Some of the algorithms described in the paper are inspired by work on
computing MSS/MCSes of propositional formulas in conjunctive normal form, which
in turn can be traced back to Reiter’s seminal work on model-based
diagnosis [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ]. The well-known grow procedure is described for example in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and more
recently in [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ]. However, it is used in many other settings, including backbone
computation [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] or even prime implicant computation [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. There has been
recent renewed interest in the computation of MCSes [
        <xref ref-type="bibr" rid="ref31 ref32">31, 32</xref>
        ]. Additional
algorithms include the well-known QuickXplain algorithm [
        <xref ref-type="bibr" rid="ref18 ref27">18, 27</xref>
        ] or dichotomic
search [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Clearly, and similarly to the case of ASP, MaxSAT can be used for
computing and enumerating MCSes [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ].
      </p>
      <p>
        Although the paper considers a restricted sample of algorithms for
computing MCSes, adapted from [
        <xref ref-type="bibr" rid="ref31 ref32 ref4">4, 31, 32</xref>
        ], other options would include the
additional approaches summarized above.
      </p>
      <p>
        Our approach to effectively modifying the debugged program is conceptually
akin to belief revision and abductive reasoning in ASP [
        <xref ref-type="bibr" rid="ref16 ref41">16, 41, 49</xref>
        ]. However, our
main focus is to compare the computational efficiency of the different presented
algorithms.
      </p>
      <p>
        Minimization in the context of ASP is addressed in asprin [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and recent
work of Faber et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Exploring the settings presented in these works is the subject of future work.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Summary and Future Work</title>
      <p>Motivated by the debugging of input data to ASP programs, the paper studies the problem
of minimal (irreducible) corrections of inconsistent programs and the problem of
maximal consistency. The two problems are closely related. Indeed, a minimal
correction of an inconsistent program can be calculated by computing maximal
consistency with respect to an only slightly modified program.</p>
      <p>We show that algorithms for calculating maximally satisfiable sets (MSSes) in
propositional logic can be adapted to ASP. This is an interesting result because, unlike
propositional logic, ASP is non-monotonic. For three MSS algorithms we show
how they can be ported to ASP (Algorithms 1–3); a similar approach
could be used for other MSS algorithms.</p>
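      <p>
        To make the porting concrete, the simplest such scheme can be sketched as a linear search that grows a consistent subset one rule at a time, treating the consistency check as a black box. The sketch below is illustrative only: the oracle is a toy stand-in (a set of integers is "inconsistent" iff it contains some r together with -r), whereas an actual port would query an ASP solver on the candidate program.
      </p>

```python
def consistent(rules):
    # Toy stand-in oracle: inconsistent iff some rule r and its
    # "negation" -r are both present. A real port would instead ask
    # an ASP solver whether the candidate program has an answer set.
    return not any(-r in rules for r in rules)

def linear_mss(soft_rules):
    """Grow a maximal consistent subset one rule at a time
    (one oracle call per rule)."""
    mss = set()
    for r in soft_rules:
        if consistent(mss | {r}):
            mss.add(r)
    return mss

def mcs_from_mss(soft_rules, mss):
    """A minimal correction set is the complement of an MSS."""
    return set(soft_rules) - mss
```

      <p>
        For example, on the rule set [1, 2, -1, 3] the search admits {1, 2, 3} and reports {-1} as the correction set.
      </p>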
      <p>
        The algorithms for maximal consistency then let us calculate minimal
corrections. For these corrections we assume a general scheme where rules may be
added or removed in order to restore consistency. For the evaluation we return to the
initial motivation of the paper, namely the calculation of corrections to data
that make a program inconsistent. A number of instances from various problem
classes were considered. The progression-based algorithm (Algorithm 3) turned
out to be the most effective overall. Nevertheless, it was not the winner for each
of the considered classes. In contrast, the maximum-cardinality approach clearly
performed the worst. Here, however, we should point out that we used the
default implementation of minimization in clingo; it is the subject of future
work to evaluate other algorithms. (A number of MaxSAT algorithms are being
adapted to Max-ASP [
        <xref ref-type="bibr" rid="ref3 ref38">3, 38</xref>
        ].) Overall, the evaluation suggests that a portfolio
comprising the different algorithms would provide the best solution.
      </p>
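      <p>
        The progression-based strategy can be sketched in the same black-box style. The sketch below is only in the spirit of Algorithm 3, under the same toy stand-in oracle as above: it tries to admit geometrically growing chunks of rules per oracle call, which can save many solver calls on mostly-consistent inputs; the details of the actual algorithm are elided.
      </p>

```python
def consistent(rules):
    # Toy stand-in oracle: inconsistent iff some rule r and its
    # "negation" -r are both present (a real port queries an ASP solver).
    return not any(-r in rules for r in rules)

def progression_mss(soft_rules):
    """Progression-style search: double the candidate chunk size while
    the chunk keeps the program consistent, then admit the largest
    chunk that succeeded."""
    mss, rest = set(), list(soft_rules)
    while rest:
        k, last_ok = 1, 0
        # grow the chunk geometrically while consistency is preserved
        while k <= 2 * len(rest) and consistent(mss | set(rest[:k])):
            last_ok = min(k, len(rest))
            k *= 2
        if last_ok == 0:
            rest.pop(0)  # first remaining rule joins the correction set
        else:
            mss |= set(rest[:last_ok])
            rest = rest[last_ok:]
    return mss
```

      <p>
        On mostly-consistent inputs the chunking needs far fewer oracle calls than one call per rule, which matters when each call is a full solver invocation.
      </p>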
      <p>
        The paper opens a number of avenues for future work. Irredundancy is needed
in other approaches to debugging, e.g. [
        <xref ref-type="bibr" rid="ref21 ref8 ref45">8, 21, 45</xref>
        ]. It is the subject of future
work to evaluate the proposed algorithms in these contexts as well. The prototype
used for the evaluation uses the ASP solver in a black-box fashion. It is likely
that integrating the algorithms directly into an ASP solver would give a further
performance boost. In this respect, we believe it would be beneficial to make use
of the existing asprin framework [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Acknowledgments. This work was supported by national funds through
Fundação para a Ciência e a Tecnologia (FCT) with reference UID/CEC/50021/2013
and by FCT funding of the LASIGE Research Unit, reference UID/CEC/00408/2013.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Alviano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dodaro</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Anytime answer set optimization via unsatisfiable core shrinking</article-title>
          .
          <source>TPLP</source>
          <volume>16</volume>
          (
          <issue>5-6</issue>
          ),
          <fpage>533</fpage>
          -
          <lpage>551</lpage>
          (
          <year>2016</year>
          ), https://doi.org/10.1017/S147106841600020X
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Alviano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dodaro</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ricca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Anytime computation of cautious consequences in Answer Set Programming</article-title>
          .
          <source>TPLP</source>
          <volume>14</volume>
          (
          <issue>4-5</issue>
          ),
          <fpage>755</fpage>
          -
          <lpage>770</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Andres</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaufmann</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matheis</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaub</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Unsatisfiability-based optimization in clasp</article-title>
          .
          <source>In: ICLP (Technical Communications)</source>
          . pp.
          <fpage>211</fpage>
          -
          <lpage>221</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bailey</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stuckey</surname>
            ,
            <given-names>P.J.:</given-names>
          </string-name>
          <article-title>Discovery of minimal unsatisfiable subsets of constraints using hitting set dualization</article-title>
          .
          <source>In: Practical Aspects of Declarative Languages (PADL)</source>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Baral</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Knowledge representation, reasoning and declarative problem solving</article-title>
          . Cambridge University Press (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Belov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Janota</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lynce</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marques-Silva</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Algorithms for computing minimal equivalent subformulas</article-title>
          .
          <source>Artif. Intell</source>
          .
          <volume>216</volume>
          ,
          <fpage>309</fpage>
          -
          <lpage>326</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Brain</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Vos</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>Debugging logic programs under the answer set semantics</article-title>
          .
          <source>In: ASP Workshop</source>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Brain</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Declarative problem solving using answer set semantics</article-title>
          .
          <source>In: International Conference on Logic Programming (ICLP)</source>
          . vol.
          <volume>4079</volume>
          , pp.
          <fpage>459</fpage>
          -
          <lpage>460</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Brewka</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Delgrande</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Romero</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaub</surname>
          </string-name>
          , T.:
          <article-title>asprin: Customizing Answer Set Preferences without a headache</article-title>
          .
          <source>In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence</source>
          . pp.
          <fpage>1467</fpage>
          -
          <lpage>1474</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Brummayer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , Järvisalo, M.:
          <article-title>Testing and debugging techniques for answer set solver development</article-title>
          .
          <source>TPLP</source>
          <volume>10</volume>
          (
          <issue>4-6</issue>
          ) (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Cabalar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fandinno</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>An algebra of causal chains</article-title>
          .
          <source>In: Workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP)</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Console</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Friedrich</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , Dupr´e, D.T.:
          <article-title>Model-based diagnosis meets error diagnosis in logic programs (extended abstract)</article-title>
          .
          <source>In: First International Workshop on Automated and Algorithmic Debugging</source>
          . pp.
          <fpage>85</fpage>
          -
          <lpage>87</lpage>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13. Damásio,
          <string-name>
            <given-names>C.V.</given-names>
            ,
            <surname>Pires</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.M.</given-names>
            ,
            <surname>Analyti</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.</surname>
          </string-name>
          :
          <article-title>Unifying justifications and debugging for answer-set programs</article-title>
          .
          <source>In: International Conference on Logic Programming (ICLP)</source>
          (
          <year>2015</year>
          ), http://ceur-ws.org/Vol-1433/tc_84.pdf
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. Déharbe,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Fontaine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Berre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.L.</given-names>
            ,
            <surname>Mazure</surname>
          </string-name>
          ,
          <string-name>
            <surname>B.</surname>
          </string-name>
          :
          <article-title>Computing prime implicants</article-title>
          .
          <source>In: Formal Methods in Computer-Aided Design (FMCAD)</source>
          . pp.
          <fpage>46</fpage>
          -
          <lpage>52</lpage>
          . IEEE (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Dodaro</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gasteiger</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Musitsch</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ricca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shchekotykhin</surname>
            ,
            <given-names>K.M.:</given-names>
          </string-name>
          <article-title>Interactive debugging of non-ground ASP programs</article-title>
          .
          <source>In: Logic Programming and Nonmonotonic Reasoning (LPNMR)</source>
          . pp.
          <fpage>279</fpage>
          -
          <lpage>293</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Eiter</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
          </string-name>
          , N.:
          <article-title>Abduction from logic programs: Semantics and complexity</article-title>
          .
          <source>Theoretical computer science 189(1-2)</source>
          ,
          <fpage>129</fpage>
          -
          <lpage>177</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Faber</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vallati</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cerutti</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giacomin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Solving set optimization problems by cardinality optimization with an application to argumentation</article-title>
          .
          <source>In: European Conference on Artificial Intelligence (ECAI)</source>
          . pp.
          <fpage>966</fpage>
          -
          <lpage>973</lpage>
          . IOS Press (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Felfernig</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schubert</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zehentner</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>An efficient diagnosis algorithm for inconsistent constraint sets</article-title>
          .
          <source>AI EDAM</source>
          <volume>26</volume>
          (
          <issue>1</issue>
          ),
          <fpage>53</fpage>
          -
          <lpage>62</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Gebser</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaminski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , Kaufmann,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Ostrowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Schaub</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Wanko</surname>
          </string-name>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          :
          <article-title>Theory solving made easy with Clingo 5</article-title>
          .
          <source>In: Technical Communications of the 32nd International Conference on Logic Programming</source>
          , ICLP. pp.
          <fpage>2:1</fpage>
          -
          <lpage>2:15</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Gebser</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaminski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaub</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Complex optimization in answer set programming</article-title>
          .
          <source>TPLP</source>
          <volume>11</volume>
          (
          <issue>4-5</issue>
          ),
          <fpage>821</fpage>
          -
          <lpage>839</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Gebser</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Pührer, J.,
          <string-name>
            <surname>Schaub</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tompits</surname>
          </string-name>
          , H.:
          <article-title>A meta-programming technique for debugging answer-set programs</article-title>
          .
          <source>In: Conference on Artificial Intelligence (AAAI)</source>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pfeifer</surname>
            ,
            <given-names>G</given-names>
          </string-name>
          . (eds.):
          <source>Logic Programming and Nonmonotonic Reasoning (LPNMR)</source>
          , vol.
          <volume>1730</volume>
          (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lifschitz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>The stable model semantics for logic programming</article-title>
          .
          <source>In: ICLP/SLP</source>
          . pp.
          <fpage>1070</fpage>
          -
          <lpage>1080</lpage>
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lifschitz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Classical negation in logic programs and disjunctive databases</article-title>
          .
          <source>New Generation Comput</source>
          .
          <volume>9</volume>
          (
          <issue>3</issue>
          /4),
          <fpage>365</fpage>
          -
          <lpage>386</lpage>
          (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Greßler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oetsch</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tompits</surname>
          </string-name>
          , H.:
          <article-title>Harvey: A system for random testing in ASP</article-title>
          .
          <source>In: Logic Programming and Nonmonotonic Reasoning (LPNMR)</source>
          . pp.
          <fpage>229</fpage>
          -
          <lpage>235</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Hemery</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lecoutre</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sais</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boussemart</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Extracting MUCs from constraint networks</article-title>
          .
          <source>In: European Conference on Artificial Intelligence (ECAI)</source>
          . pp.
          <fpage>113</fpage>
          -
          <lpage>117</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Junker</surname>
          </string-name>
          , U.:
          <article-title>QuickXplain: Preferred explanations and relaxations for overconstrained problems</article-title>
          . In: AAAI. pp.
          <fpage>167</fpage>
          -
          <lpage>172</lpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>A model-theoretic counterpart of loop formulas</article-title>
          .
          <source>In: International Joint Conference on Artificial Intelligence (IJCAI)</source>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Liffiton</surname>
            ,
            <given-names>M.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sakallah</surname>
            ,
            <given-names>K.A.</given-names>
          </string-name>
          :
          <article-title>Algorithms for computing minimal unsatisfiable subsets of constraints</article-title>
          .
          <source>J. Autom. Reasoning</source>
          <volume>40</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Lloyd</surname>
            ,
            <given-names>J.W.</given-names>
          </string-name>
          :
          <article-title>Declarative error diagnosis</article-title>
          .
          <source>New Generation Computing</source>
          <volume>5</volume>
          (
          <issue>2</issue>
          ),
          <fpage>133</fpage>
          -
          <lpage>154</lpage>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Marques-Silva</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heras</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Janota</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Previti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Belov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>On computing minimal correction subsets</article-title>
          .
          <source>In: International Joint Conference on Artificial Intelligence (IJCAI)</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Marques-Silva</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Janota</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Belov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Minimal sets over monotone predicates in Boolean formulae</article-title>
          . In: Sharygina,
          <string-name>
            <given-names>N.</given-names>
            ,
            <surname>Veith</surname>
          </string-name>
          , H. (eds.)
          <source>Computer Aided Verification (CAV)</source>
          . pp.
          <fpage>592</fpage>
          -
          <lpage>607</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Marques-Silva</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Janota</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lynce</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>On computing backbones of propositional theories</article-title>
          .
          <source>In: European Conference on Artificial Intelligence (ECAI)</source>
          . pp.
          <fpage>15</fpage>
          -
          <lpage>20</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Niemelä</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Simons</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soininen</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Stable model semantics of weight constraint rules</article-title>
          .
          <source>In: Gelfond et al. [22]</source>
          , pp.
          <fpage>317</fpage>
          -
          <lpage>331</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Nieves</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Osorio</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Generating maximal models using the stable model semantics</article-title>
          .
          <source>In: LA-NMR</source>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Nöhrer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Biere</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Egyed</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Managing SAT inconsistencies with HUMUS</article-title>
          .
          <source>In: VaMoS</source>
          . pp.
          <fpage>83</fpage>
          -
          <lpage>91</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Oetsch</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pührer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tompits</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Catching the Ouroboros: On debugging nonground answer-set programs</article-title>
          .
          <source>TPLP</source>
          <volume>10</volume>
          (
          <issue>4-6</issue>
          ) (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Oikarinen</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Järvisalo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Max-ASP: Maximum satisfiability of answer set programs</article-title>
          .
          <source>In: Logic Programming and Nonmonotonic Reasoning (LPNMR)</source>
          . pp.
          <fpage>236</fpage>
          -
          <lpage>249</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Pereira</surname>
            ,
            <given-names>L.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Damásio</surname>
            ,
            <given-names>C.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alferes</surname>
            ,
            <given-names>J.J.</given-names>
          </string-name>
          :
          <article-title>Debugging by diagnosing assumptions</article-title>
          .
          <source>In: Automated and Algorithmic Debugging: First International Workshop (AADEBUG)</source>
          . pp.
          <fpage>58</fpage>
          -
          <lpage>74</lpage>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Reiter</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>A theory of diagnosis from first principles</article-title>
          .
          <source>Artif. Intell</source>
          .
          <volume>32</volume>
          (
          <issue>1</issue>
          ),
          <fpage>57</fpage>
          -
          <lpage>95</lpage>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <surname>Sakama</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Inoue</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>An abductive framework for computing knowledge base updates</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          <volume>3</volume>
          (
          <issue>6</issue>
          ),
          <fpage>671</fpage>
          -
          <lpage>715</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>