<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Robin Arcangioli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nadjib Lazaar</string-name>
          <email>lazaarg@lirmm.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>LIRMM, University of Montpellier</institution>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <abstract>
<p>QUACQ is a constraint acquisition system that assists a non-expert user in modelling her problem as a constraint network by classifying (partial) examples as positive or negative. For each negative example, QUACQ focuses on one constraint of the target network. The drawback is that the user may need to answer a great number of such queries to learn all the constraints. In this paper, we provide a new approach that is able to learn a maximum number of constraints violated by a given negative example. Finally, we give an experimental evaluation showing that our approach improves the state of the art.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Constraint programming is a powerful paradigm for
modeling and solving combinatorial problems. However, it has long
been recognized that expressing a combinatorial problem as a
constraint network requires significant expertise in constraint
programming [Freuder, 1999]. To alleviate this issue, several
techniques have been proposed to acquire a constraint
network. For example, the matchmaker agent [Freuder and
Wallace, 2002] interactively asks the user to provide one of the
constraints of the target problem each time the system
proposes an incorrect (negative) solution. In [Beldiceanu and
Simonis, 2012], Beldiceanu and Simonis have proposed
MODELSEEKER, a system devoted to problems with regular
structures, like matrix models. Based on examples of solutions
and non-solutions provided by the user, CONACQ.1 [Bessiere
et al., 2005] learns a set of constraints that correctly
classifies all the examples given so far. As an active learning
version, CONACQ.2 [Bessiere et al., 2007] proposes examples to
the user to classify (i.e., membership queries). In
[Shchekotykhin and Friedrich, 2009], the approach used in CONACQ.2
has been extended to allow the user to provide arguments as
constraints to converge more rapidly.</p>
      <p>QUACQ is a recent active learning system that is able to
ask the user to classify partial queries [Bessiere et al., 2013].
QUACQ iteratively computes membership queries. If the user
says yes, QUACQ reduces the search space by removing all
constraints violated by the positive example. In the case of
a negative example, QUACQ focuses on one, and only one,
constraint of the target network in a number of queries
logarithmic in the size of the example. This key component of
QUACQ allows it to always converge on the target set of
constraints in a polynomial number of queries. However, even
this good theoretical bound can be hard to achieve in practice.
For instance, QUACQ requires the user to classify more than
9000 examples to learn the complete Sudoku model.</p>
      <p>An example can be classified as negative because of more
than one violated target constraint. In this paper, we extend
the approach used in QUACQ to make constraint acquisition
more efficient in practice by learning, from a negative example,
not only one constraint but a maximum number of the
constraints it violates. Inspired by the work on enumerating
minimal unsatisfiable subsets (MUSes) in SAT [Liffiton and
Sakallah, 2008], we propose an algorithm that computes all
minimal scopes of target constraints violated by a given
negative example.</p>
    </sec>
    <sec id="sec-2">
      <title>Background</title>
      <p>The constraint acquisition process can be seen as an interplay
between the user and the learner. For that, the user and the learner
need to share the same vocabulary to communicate. We
suppose this vocabulary is a set of n variables X and a domain
D = {D(x1), …, D(xn)}, where D(xi) ⊆ ℤ is the finite set
of values for xi. A constraint cY is defined as a pair where
Y is a subset of variables of X, called the constraint scope,
and c is a relation over D of arity |Y|. A constraint network
is a set C of constraints on the vocabulary (X, D). An
assignment eY on a set of variables Y ⊆ X is rejected by a
constraint cS if S ⊆ Y and the projection eS of eY on the
variables in S is not in cS. An assignment on X is a solution
of C if and only if it is not rejected by any constraint in C.
sol(C) represents the set of solutions of C.</p>
      <p>In addition to the vocabulary, the learner owns a language Γ
of relations from which it can build constraints on specified
sets of variables. Adapting terms from machine learning, the
constraint bias, denoted by B, is a set of constraints built
from the constraint language Γ on the vocabulary (X, D),
from which the learner builds the constraint network.
Formally speaking, B = {cS | (S ⊆ X) ∧ (c ∈ Γ)}. A concept
is a Boolean function over DX = ∏xi∈X D(xi), that is, a
map that assigns to each example e ∈ DX a value in {0, 1}.
We call target concept the concept fT that returns 1 for e
if and only if e is a solution of the problem the user has in
mind. In a constraint programming context, the target
concept is represented by a target network, denoted CT.</p>
      <p>A membership query ASK(e) is a classification question
asked to the user, where e is a complete assignment in DX.
The answer to ASK(e) is “yes” if and only if e ∈ sol(CT).
A partial query ASK(eY), with Y ⊆ X, is a classification
question asked to the user, where eY is a partial assignment in
DY = ∏xi∈Y D(xi). A set of constraints C accepts a partial
assignment eY if and only if there does not exist any
constraint c ∈ C rejecting eY. The answer to ASK(eY) is “yes”
if and only if CT accepts eY. A classified assignment eY is
called a positive or negative example depending on whether
ASK(eY) is “yes” or “no”. For any assignment eY on Y,
B(eY) denotes the set of all constraints in B rejecting eY.</p>
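      <p>The notions above can be made concrete with a small sketch (ours, not the paper's): a constraint is represented as a (scope, relation) pair, and B(eY) is computed by collecting the bias constraints whose scope is fully assigned and whose projection falls outside the relation. All names below are illustrative.

```python
# Illustrative sketch of the Background definitions (not from the paper).
# A constraint is a (scope, relation) pair: `scope` is a tuple of
# variables, `relation` the set of allowed tuples over that scope.

def violates(constraint, assignment):
    """A constraint rejects an assignment iff its whole scope is
    assigned and the projection of the assignment is not allowed."""
    scope, relation = constraint
    if not all(x in assignment for x in scope):
        return False  # scope not fully assigned: cannot reject
    return tuple(assignment[x] for x in scope) not in relation

def rejecting(bias, assignment):
    """B(eY): all constraints of the bias rejecting the assignment eY."""
    return [c for c in bias if violates(c, assignment)]

# Toy vocabulary X = {x1, x2} with D(xi) = {1, 2}, and a bias holding
# the two relations x1 != x2 and x1 == x2 on the scope (x1, x2).
neq = (("x1", "x2"), {(1, 2), (2, 1)})
eq = (("x1", "x2"), {(1, 1), (2, 2)})
bias = [neq, eq]

print(rejecting(bias, {"x1": 1, "x2": 1}))  # only x1 != x2 rejects
print(rejecting(bias, {"x1": 1}))           # partial assignment: B(eY) is empty
```
</p>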
      <p>We now define convergence, which is the constraint
acquisition problem we are interested in. We are given a set E of
(partial) examples labelled by the user as positive or negative.
We say that a constraint network C agrees with E if C
accepts all the examples labelled positive in E and does not
accept those labelled negative. The learning process has
converged on the learned network CL ⊆ B if CL agrees with
E and, for every other network C′ ⊆ B agreeing with E, we
have sol(C′) = sol(CL). If there does not exist any CL ⊆ B
such that CL agrees with E, we say that we have collapsed.
This happens when CT is not representable by B.</p>
      <p>Finally, we introduce the notion of Minimal No Scope,
denoted MNS, which is similar to the notion of MUS (Minimal
Unsatisfiable Subset) in SAT [Liffiton and Sakallah, 2008].
Definition 1 (Minimal No Scope). Given a negative example
e, an MNS is a subset of variables U ⊆ X such that ASK(eU) =
no and ∀xi ∈ U : ASK(eU\{xi}) = yes.</p>
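      <p>Definition 1 can be read operationally, as the following sketch shows (illustrative code, not the paper's; ASK is modelled as a Python callable returning the user's classification, and the toy target network contains the single constraint x1 ≠ x2).

```python
def is_mns(U, e, ask):
    """Definition 1: U is an MNS of the negative example e iff e
    restricted to U is classified negative, while dropping any single
    variable of U yields a positive (partial) example."""
    if ask({x: e[x] for x in U}):
        return False  # e_U is positive: U contains no violated scope
    return all(ask({x: e[x] for x in U if x != xi}) for xi in U)

def ask(eY):
    # Toy user oracle: the target network is the single constraint x1 != x2.
    if "x1" in eY and "x2" in eY:
        return eY["x1"] != eY["x2"]
    return True

e = {"x1": 1, "x2": 1, "x3": 2}           # a negative complete example
print(is_mns(["x1", "x2"], e, ask))       # True: a minimal no scope
print(is_mns(["x1", "x2", "x3"], e, ask)) # False: x3 is superfluous
```
</p>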
    </sec>
    <sec id="sec-3">
      <title>Multiple constraint acquisition approach</title>
      <p>In this section, we propose MULTIACQ (Multiple
Acquisition). MULTIACQ takes as input a bias B on a vocabulary
(X, D) and returns a constraint network CL equivalent to the
target network CT by asking (partial) queries. The main
difference between QUACQ [Bessiere et al., 2013] and
MULTIACQ is that QUACQ learns and focuses on a single
constraint each time we have a negative example, whereas
MULTIACQ tries to learn more than one explanation (constraint) of
why the user classifies a given example as negative.</p>
      <sec id="sec-3-1">
        <title>Description of MULTIACQ</title>
        <p>MULTIACQ (see Algorithm 1) starts by initializing the
network CL to the empty set (line 1). If CL is unsatisfiable (line
3), the acquisition process reaches a collapse state. At line
4, we compute a complete assignment e satisfying the
current learned network CL and violating at least one constraint in
B. If no such example exists (line 5), then all the
constraints in B are implied by CL, and we have converged.
Otherwise, we call the function findAllScopes on the
example e (line 8). If e is a positive example, the bias B is
reduced. However, when e is negative, findAllScopes
computes a set of MNSes. Each Y ∈ MNSes represents the scope
of a violated constraint that must be learned. The function
findC is then called for each computed MNS (line 10). It takes
as parameters the negative example e and an MNS, and returns
a constraint from CT with that scope that rejects e. We do not
give the code of the findC function since it is implemented as
it appears in [Bessiere et al., 2013]. If no constraint is
returned (line 11), we have a second condition for collapsing, as
we could not find in the bias B a constraint rejecting the
corresponding negative example. Otherwise, the constraint
returned by findC is added to the learned network CL (line 12).</p>
        <p>Algorithm 1: MULTIACQ - multiple acquisition via partial queries.
1   CL ← ∅
2   while true do
3       if sol(CL) = ∅ then return "collapse"
4       choose e in DX accepted by CL and rejected by B
5       if e = nil then return "convergence on CL"
6       else
7           MNSes ← ∅
8           findAllScopes(e, X)
9           foreach Y ∈ MNSes do
10              cY ← findC(e, Y)
11              if cY = nil then return "collapse"
12              else CL ← CL ∪ {cY}</p>
      </sec>
      <sec id="sec-3-3">
        <title>Description of the function findAllScopes</title>
        <p>Function 1: findAllScopes(e, Y): boolean
Output: the set MNSes contains all MNSes of e
1   if Y ∈ MNSes then return false
2   if B(eY) = ∅ then return true
3   if ∄M ∈ MNSes : M ⊆ Y, and ASK(eY) = yes then
4       B ← B \ B(eY)
5       return true
6   isMNS ← true
7   P ← subSets(Y, |Y| - 1)
8   foreach Pi ∈ P do
9       isMNS ← findAllScopes(e, Pi) ∧ isMNS
10  if isMNS then MNSes ← MNSes ∪ {Y}
11  return false</p>
        <p>The recursive function findAllScopes (see Function 1)
takes as input a complete example e and a subset of variables
Y (X for the first call). The function starts by checking
whether the subset Y has already been reported as an MNS (line 1);
if so, we return false. As we assume that the bias is
expressive enough to learn CT, when B(eY) = ∅ (i.e., there
is no violated constraint left in B to learn on Y), it follows
that ASK(eY) = yes and we return true (line 2). As
a third check (line 3), we verify that no subset of Y has
already been reported as an MNS. If one has, we are sure that
ASK(eY) = no and we go directly to the main part of the
algorithm. If not, at line 3, we ask the user to classify the
(partial) example eY. If it is positive, we remove from B
all the constraints rejecting eY and we return true (lines
4-5). If ASK(eY) = no, there is an MNS in Y and, in what
follows, we try to report it. For that, we compute all the
subsets of Y of size |Y| - 1 (line 7) and recursively call
findAllScopes on each subset. If any sub-call to
findAllScopes returns false, the boolean isMNS is
set to false, which means that Y itself is not an MNS (line
11). Otherwise, when all the sub-calls of findAllScopes on
the subsets of Y return true, Y is an MNS (line 10).
Notice that the returned boolean corresponds to the
classification of the partial example eY (positive or negative).</p>
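        <p>Our reading of Function 1 can be sketched in Python as follows. This is illustrative code, not the authors' implementation: ask stands for the user oracle, B_reject(eY) plays the role of B(eY), the bias-reduction side effect of line 4 is omitted for brevity, and mnses accumulates the reported MNSes.

```python
from itertools import combinations

def find_all_scopes(e, Y, ask, B_reject, mnses):
    """Sketch of Function 1. Returns the classification of e projected
    on Y (True for positive, False for negative) and fills `mnses`."""
    Y = frozenset(Y)
    if Y in mnses:                          # line 1: already reported
        return False
    eY = {x: e[x] for x in Y}
    if not B_reject(eY):                    # line 2: nothing left to learn on Y
        return True
    if not any(M <= Y for M in mnses):      # line 3: no known MNS inside Y
        if ask(eY):                         # lines 4-5 (bias reduction omitted)
            return True
    is_mns = True
    for P in combinations(Y, len(Y) - 1):   # line 7: all subsets of size |Y|-1
        is_mns = find_all_scopes(e, P, ask, B_reject, mnses) and is_mns
    if is_mns:                              # line 10: every subset is positive
        mnses.add(Y)
    return False                            # line 11

# Toy run: the target network is {x1 != x2}; the bias holds candidate
# != constraints on the scopes (x1, x2) and (x2, x3).
def ask(eY):
    return not ("x1" in eY and "x2" in eY and eY["x1"] == eY["x2"])

scopes = [("x1", "x2"), ("x2", "x3")]
def B_reject(eY):
    return [s for s in scopes
            if all(x in eY for x in s) and len({eY[x] for x in s}) == 1]

mnses = set()
find_all_scopes({"x1": 1, "x2": 1, "x3": 2}, ["x1", "x2", "x3"], ask, B_reject, mnses)
print(mnses)  # the only MNS found is the scope {x1, x2}
```
</p>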
      </sec>
      <sec id="sec-3-4">
        <title>Analysis</title>
        <p>We analyze the soundness and completeness of
findAllScopes. We also give a complexity study of
MULTIACQ in terms of the number of queries it can ask the user.</p>
        <p>Lemma 1. If ASK(eY) = yes then for any Y′ ⊆ Y we have
ASK(eY′) = yes.</p>
        <p>Proof. Assume that ASK(eY) = yes. Hence, there exists
no constraint from CT violated by eY. For any subset Y′ of
Y, the projection eY′ does not violate any constraint of CT
either (i.e., ASK(eY′) = yes).</p>
        <p>Lemma 2. If ASK(eY) = no then for any Y′ ⊇ Y we have
ASK(eY′) = no.</p>
        <p>Proof. Assume that ASK(eY) = no. Hence, there exists
at least one constraint from CT violated by eY. For any
superset Y′ of Y, eY′ also violates at least the constraint violated
by eY (i.e., ASK(eY′) = no).</p>
        <sec id="sec-3-4-2">
          <p>Given a bias B and a target network CT, we say that CT
is learnable if it is representable by B. Thus, ASK(eY) =
no ⇒ B(eY) ≠ ∅, which is equivalent to B(eY) = ∅ ⇒
ASK(eY) = yes.</p>
          <p>Theorem 1. Given any bias B and any target network CT
representable by B, the findAllScopes function is sound
and complete.</p>
          <p>Proof. (Sketch.)</p>
          <p>Soundness: Assume that we have a set of variables
X = {x1, …, xn} and a complete assignment eX.
We show that any subset Y = {xj, …, xk} added to
MNSes by findAllScopes is an MNS. If Y is added
to MNSes (line 10), then ASK(eY) = no and
isMNS = true. As isMNS = true, all sub-calls of
findAllScopes (line 9) on the subsets of Y of size
|Y| - 1 returned true. Hence, as we supposed the bias B
to be expressive enough, for every Y′ ⊆ Y of size |Y| - 1
we have ASK(eY′) = yes which, together with
ASK(eY) = no, implies that Y is an MNS (Def. 1).
Completeness: Assume that we have a set of variables
X = {x1, …, xn}:
i) We have a complete negative assignment eX
(ASK(eX) = no) with an MNS M = {xj, …, xk}.
The three conditions at lines 1, 2 and 3 are not
satisfied (ASK(eX) = no and B(eX) ≠ ∅). Then
findAllScopes is called on all the subsets of X of size
|X| - 1. As M ⊆ X is an MNS of e, ASK(eM) = no
and so, for every Y ⊆ X \ M, ASK(eM∪Y) = no (Lemma
2). Hence, any call of findAllScopes on a superset
of M behaves exactly as on X. That is, by
recurrence, there will be a call of findAllScopes on M. By
assumption, M is an MNS, and for any proper subset M′ we
have ASK(eM′) = yes (Def. 1). Thus, M is added to
MNSes by findAllScopes at line 10.
ii) We have a complete positive assignment eX
(ASK(eX) = yes). Here, findAllScopes is called
only once, on X. The third condition at line 3 is
satisfied and findAllScopes returns true with
MNSes = ∅.</p>
          <p>Theorem 2. Given a bias B built from a language Γ of
bounded-arity constraints, and a target network CT,
MULTIACQ uses O(|CT| · (|X| + |Γ|) + |B|) queries to prove
convergence on the target network or to collapse.</p>
          <p>Proof. i) We first show that the maximal number of
queries asked by findAllScopes is bounded above by
|CT| · (|X| - 1) + |B|.</p>
          <p>– Let us start with the number of queries asked
by findAllScopes and classified as negative by
the user. Given an MNS M ⊆ X, we need to ask at
most |X| - 1 queries that are classified as negative
by the user. That is, knowing that findAllScopes
uses a depth-first search, we need to ask queries
at each level from the root (i.e., |X|) down to the node
M. Since all visited sets contain M, the queries
are classified as negative. The worst case
is when |M| = 1, where we will have |X| - 1
negative queries. Then, the use of Lemma 2 at
lines 1 and 3 ensures that we never
ask a query on a superset of M. As the
total number of MNSes is bounded by |CT|, the
total number of negative queries
asked by findAllScopes is bounded above by
|CT| · (|X| - 1).</p>
          <p>– Let us now consider the number of queries asked
by findAllScopes and classified as positive
by the user. This number is bounded above by
|B|. Take two subsets Y and Y′ where
B(eY′) ⊆ B(eY). If a query on Y is classified
as positive, we remove B(eY) from B (line 4),
and therefore eY′ will be classified as positive
without a query, since B(eY′) becomes empty
(line 2). The worst case is when we remove only
one constraint from B each time (|B(eY)| = 1),
in which case findAllScopes asks |B| queries
classified as positive.</p>
          <p>As has been shown, the maximal number of queries
asked by the findAllScopes function is bounded
above by |CT| · (|X| - 1) + |B|.</p>
          <p>ii) The findC function uses O(|Γ|) queries to return a
constraint from CT [Bessiere et al., 2013], and therefore
O(|CT| · |Γ|) queries to return all the constraints.</p>
          <p>From i) and ii), the total number of queries asked by
MULTIACQ to converge to the target network is bounded above by
|CT| · (|X| - 1) + |B| + |CT| · |Γ|.</p>
          <p>Before addressing our experimental evaluation, it is
important to stress that QUACQ converges on a constraint
of the target network in a logarithmic number of queries
[Bessiere et al., 2013], whereas MULTIACQ converges on
a set of constraints of the target network in a number of
queries linear in the size of the example. However, we believe
that MULTIACQ can be more efficient than QUACQ in
practice.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Experimental results</title>
      <p>We conducted experiments to evaluate the performance of
our MULTIACQ algorithm and its findAllScopes function.
Our tests were run on an Intel Core 2 Duo E4400
(2 × 2.0 GHz) with 2.0 GB of RAM. We first present the
benchmark problems used in our experiments.</p>
      <p>
        N-queens.
        <xref ref-type="bibr" rid="ref7">(prob054 in [Gent and Walsh, 1999])</xref>
        The problem
is to place n queens on an n × n chessboard so that no two
queens can attack each other. The target network is encoded with n
variables corresponding to the n queens, and binary constraints.
For our experiments, we selected the 4-queens instance, with
a CT of 18 constraints (6 ≠ constraints on rows and 12
NotInDiagonal constraints, 6 for each diagonal direction) and a bias of
108 constraints.
      </p>
      <p>
        Golomb Rulers.
        <xref ref-type="bibr" rid="ref7">(prob006 in [Gent and Walsh, 1999])</xref>
        The
problem is to find a ruler where the distance between any two
marks is different from that between any other two marks.
The target network is encoded with n variables
corresponding to the n marks, and constraints of varying arity. For our
experiments, we selected the 8-mark ruler instance, with a
CT of 385 constraints and a bias of 798 constraints.
Sudoku. We used the Sudoku logic puzzle with a 9 × 9 grid.
The grid must be filled with numbers from 1 to 9 in such a
way that all the rows, all the columns and the 9
non-overlapping 3 × 3 squares contain the numbers 1 to 9. The target
network of the Sudoku has 81 variables with domains of size
9 and 810 binary ≠ constraints on rows, columns and squares.
We use a bias of 6480 binary constraints.
      </p>
      <p>Table 1: Comparison of QUACQ and MULTIACQ (|CL|: size of the
learned network; #q: total number of queries; q̄: average query size;
the remaining columns of the original table are not recoverable here).

             Algorithm   |CL|     #q      q̄
  Queens     QUACQ        14     104    2.64
             MULTIACQ     12      58    2.41
  Golomb     QUACQ        95     976    4.78
             MULTIACQ    146     666    4.69
  Sudoku     QUACQ       622    9732   37.61
             MULTIACQ    810    3821    3.50</p>
      <p>Table 1 compares the performance of
MULTIACQ and QUACQ. We report the size |CL| of the learned
network, the total number of queries #q, the average size q̄
of all queries, the number of complete (resp. partial) negative
queries #qc (resp. #qp), and the average time Tmns (resp.
Tq) needed to compute an MNS (resp. a query), in seconds.</p>
      <p>If we compare MULTIACQ to QUACQ, the main
observation is that using findAllScopes to find all MNSes
of a negative example significantly reduces the number of
queries required for convergence. For the 4-queens instance,
MULTIACQ reduces the number of queries by 44%.
Here, MULTIACQ needs only 7 complete negative examples
to learn 12 constraints that are equivalent to the target
network, whereas QUACQ needs twice as many negatives (14
examples) to converge to the target network.</p>
      <p>Consider the Golomb ruler instance. Here the gain in terms
of queries is 31%. The same observation can be made on
the number of complete (and partial) negative examples.
However, the time needed to compute a query is twice that
needed by QUACQ (0.54s instead of 0.20s).</p>
      <p>The Sudoku instance is quite interesting: the problem
is expressed on 81 variables and we have to learn a network
equivalent to 810 ≠ constraints. Despite the fact that our
algorithm learns more constraints before converging than the basic
QUACQ (810 instead of 622), MULTIACQ needs 60% fewer
queries than QUACQ to converge. Here, MULTIACQ needs
only one complete negative example to learn the whole
network, where QUACQ needs 622 complete negative examples.
Another result relates to the average size of queries. Here,
q̄ = 3.5 is significantly smaller than the number of
variables (i.e., 81 variables) and than the average size for QUACQ
(i.e., 37.61). This is an interesting result, as short queries are
probably easier for the user to answer. These good results in
terms of queries come somewhat at the expense of time:
using MULTIACQ, we need more than 0.38s to compute a
query (instead of 0.20s using QUACQ).</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>We have proposed a new approach to make constraint
acquisition more efficient in practice by learning a maximum
number of constraints from each negative example. As a constraint
acquisition system, QUACQ focuses on the scope of one
target constraint each time we give it a negative example. We
have proposed MULTIACQ, with a findAllScopes function
that aims to report all minimal no scopes (MNSes) of violated
constraints. We have experimentally evaluated the benefit of our
approach on some benchmark problems. The results show
that i) MULTIACQ dramatically improves on the basic QUACQ
in terms of number of queries, ii) the queries are often much
shorter than membership queries, and so are easier for the user
to handle, and iii) MULTIACQ can be time-consuming. An
interesting direction would be a dichotomic search for
MNSes and the use of cutoffs during search, in order to obtain a
good tradeoff between number of queries and time.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>[Beldiceanu and Simonis</source>
          , 2012]
          <string-name>
            <given-names>Nicolas</given-names>
            <surname>Beldiceanu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Helmut</given-names>
            <surname>Simonis</surname>
          </string-name>
          .
          <article-title>A model seeker: Extracting global constraint models from positive examples</article-title>
          .
          <source>In CP</source>
          , pages
          <fpage>141</fpage>
          -
          <lpage>157</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Bessiere et al.,
          <year>2005</year>
          ]
          <string-name>
            <given-names>Christian</given-names>
            <surname>Bessiere</surname>
          </string-name>
          , Remi Coletta, Fre´de´ric Koriche, and
          <string-name>
            <surname>Barry O'Sullivan</surname>
          </string-name>
          .
          <article-title>A sat-based version space algorithm for acquiring constraint satisfaction problems</article-title>
          .
          <source>In ECML</source>
          , pages
          <fpage>23</fpage>
          -
          <lpage>34</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Bessiere et al.,
          <year>2007</year>
          ]
          <string-name>
            <given-names>Christian</given-names>
            <surname>Bessiere</surname>
          </string-name>
          , Remi Coletta,
          <string-name>
            <surname>Barry O'Sullivan</surname>
            ,
            <given-names>and Mathias</given-names>
          </string-name>
          <string-name>
            <surname>Paulin</surname>
          </string-name>
          .
          <article-title>Query-driven constraint acquisition</article-title>
          .
          <source>In IJCAI</source>
          , pages
          <fpage>50</fpage>
          -
          <lpage>55</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Bessiere et al.,
          <year>2013</year>
          ]
          <string-name>
            <given-names>Christian</given-names>
            <surname>Bessiere</surname>
          </string-name>
          , Remi Coletta, Emmanuel Hebrard, George Katsirelos, Nadjib Lazaar, Nina Narodytska,
          <string-name>
            <surname>Claude-Guy Quimper</surname>
            , and
            <given-names>Toby</given-names>
          </string-name>
          <string-name>
            <surname>Walsh</surname>
          </string-name>
          .
          <article-title>Constraint acquisition via partial queries</article-title>
          .
          <source>In IJCAI 2013, Proceedings of the 23rd International Joint Conference on Artificial Intelligence</source>
          , Beijing, China,
          <source>August 3-9</source>
          ,
          <year>2013</year>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <source>[Freuder and Wallace</source>
          , 2002] Eugene C. Freuder and
          <string-name>
            <given-names>Richard J.</given-names>
            <surname>Wallace</surname>
          </string-name>
          .
          <article-title>Suggestion strategies for constraintbased matchmaker agents</article-title>
          .
          <source>International Journal on Artificial Intelligence Tools</source>
          ,
          <volume>11</volume>
          (
          <issue>1</issue>
          ):
          <fpage>3</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>[Freuder</source>
          , 1999]
          <string-name>
            <given-names>E.</given-names>
            <surname>Freuder</surname>
          </string-name>
          . Modeling:
          <article-title>The final frontier</article-title>
          .
          <source>In 1st International Conference on the Practical Applications of Constraint Technologies and Logic Programming</source>
          , pages
          <fpage>15</fpage>
          -
          <lpage>21</lpage>
          , London, UK,
          <year>1999</year>
          . Invited Talk.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>[Gent and Walsh</source>
          , 1999]
          <string-name>
            <given-names>I.P.</given-names>
            <surname>Gent</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Walsh</surname>
          </string-name>
          .
          <article-title>Csplib: a benchmark library for constraints</article-title>
          . http://www.csplib.org/,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>[Liffiton and Sakallah</source>
          , 2008]
          <string-name>
            <given-names>Mark H.</given-names>
            <surname>Liffiton</surname>
          </string-name>
          and
          <string-name>
            <given-names>Karem A.</given-names>
            <surname>Sakallah</surname>
          </string-name>
          .
          <article-title>Algorithms for computing minimal unsatisfiable subsets of constraints</article-title>
          .
          <source>J. Autom. Reasoning</source>
          ,
          <volume>40</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <source>[Shchekotykhin and Friedrich</source>
          , 2009]
          <string-name>
            <surname>Kostyantyn</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Shchekotykhin</surname>
            and
            <given-names>Gerhard</given-names>
          </string-name>
          <string-name>
            <surname>Friedrich</surname>
          </string-name>
          .
          <article-title>Argumentation based constraint acquisition</article-title>
          .
          <source>In ICDM 2009, The Ninth IEEE International Conference on Data Mining</source>
          , Miami, Florida, USA,
          <fpage>6</fpage>
          -9
          <source>December</source>
          <year>2009</year>
          , pages
          <fpage>476</fpage>
          -
          <lpage>482</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>