<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Learning Techniques for Pseudo-Boolean Solving</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>José Santos</string-name>
          <aff>IST/UTL, INESC-ID, Portugal</aff>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vasco Manquinho</string-name>
          <aff>IST/UTL, INESC-ID, Portugal</aff>
        </contrib>
      </contrib-group>
      <fpage>103</fpage>
      <lpage>112</lpage>
      <abstract>
        <p>The extension of conflict-based learning from Propositional Satisfiability (SAT) solvers to Pseudo-Boolean (PB) solvers comprises several different learning schemes. However, it is not commonly agreed among the research community which learning scheme should be used in PB solvers. Hence, this paper presents a contribution by providing an exhaustive comparative study between several different learning schemes in a common platform. Results for a large set of benchmarks are presented for the different learning schemes, which were implemented on bsolo, a state of the art PB solver.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>Preliminaries</title>
      <p>
        In a propositional formula, a literal l_j denotes either a variable x_j or its complement x̄_j. If a literal l_j = x_j and x_j is assigned value 1, or l_j = x̄_j and x_j is assigned value 0, then the literal is said to be true. Otherwise, the literal is said to be false. A pseudo-Boolean (PB) constraint is defined as a linear inequality over a set of literals in the following normal form:
∑_{j=1..n} a_j l_j ≥ b (1)
such that, for each j ∈ {1, …, n}, a_j ∈ ℤ⁺, l_j is a literal and b ∈ ℤ⁺. It is well-known that any PB constraint (with negative coefficients, equalities or other inequalities) can be converted into the normal form in linear time [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In the remainder of the paper, it is assumed that all PB constraints are in normal form.
      </p>
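<p>The conversion to normal form is mechanical enough to sketch in code. The following Python fragment is a minimal illustration of the negative-coefficient case; the (coefficient, variable, negated) triple representation is our own assumption for illustration, not something taken from the paper.</p>

```python
# Sketch of normal-form conversion for a ">=" PB constraint.
# Assumed representation: a constraint is a pair (terms, b) with
# terms = [(coef, var, negated)], meaning sum(coef * literal) >= b.

def normalize(terms, b):
    """Make every coefficient positive: -a*l >= r becomes a*(not l) >= r + a."""
    out = []
    for coef, var, negated in terms:
        if 0 > coef:
            # -a*l equals a*(1 - l) - a, so flip the literal and move a to the RHS
            out.append((-coef, var, not negated))
            b += -coef
        elif coef > 0:
            out.append((coef, var, negated))
        # a zero coefficient contributes nothing; the term is dropped
    return out, max(b, 0)  # b of 0 or below means a trivially satisfied constraint

# -2*x1 + x2 >= 1 becomes 2*(not x1) + x2 >= 3
print(normalize([(-2, "x1", False), (1, "x2", False)], 1))
```

<p>Equalities and "≤" constraints reduce to this case by splitting into two "≥" inequalities and negating coefficients, respectively.</p>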
      <p>In a given constraint, if all a_j coefficients have the same value k, then it is called a cardinality constraint, since it requires that ⌈b/k⌉ literals be true in order for the constraint to be satisfied. A PB constraint where any literal set to true is enough to satisfy the constraint can be interpreted as a propositional clause. This occurs when all a_j coefficients are greater than or equal to b.</p>
      <p>An instance of the Pseudo-Boolean Satisfiability (PB-SAT) problem can be defined as
finding an assignment to the problem variables such that all PB constraints are satisfied.
</p>
      <sec id="sec-2-1">
        <title>Boolean Constraint Propagation</title>
        <p>
          Boolean Constraint Propagation (BCP) [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] in PB algorithms is a generalization of the application of the unit clause rule [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] used in propositional satisfiability (SAT) algorithms. A constraint w is said to be unit if, given the current partial assignment, w is not currently satisfied and there is at least one literal l_j such that, if it were assigned value 0, w could no longer be satisfied. Hence, l_j must be assigned value 1. Let a_max^u denote the largest coefficient of the unassigned literals in w. Moreover, let the slack s of constraint w be defined as follows, where the sum ranges over the literals of w not assigned value 0:
s = (∑_{l_j ≠ 0} a_j) − b (2)
        </p>
        <p>If a_max^u &gt; s then w is a unit constraint and the literal l_max^u must be assigned value 1. Notice that, given the same partial assignment, more than one literal can be implied by the same PB constraint.</p>
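<p>As a concrete illustration of the slack rule, the following Python sketch computes the slack of a normalized constraint and the literals it implies. The dictionary representation and function names are assumptions made for illustration, not taken from the paper.</p>

```python
# Slack-based unit detection for a normalized PB constraint sum(a_j * l_j) >= b.
# Assumed representation: coeffs maps literal -> positive coefficient;
# value(lit) returns True, False, or None (unassigned).

def slack(coeffs, b, value):
    # s = (sum of a_j over literals not assigned value 0) - b
    return sum(a for lit, a in coeffs.items() if value(lit) is not False) - b

def implied_literals(coeffs, b, value):
    """Unassigned literals l_j with a_j > s must be assigned value 1."""
    s = slack(coeffs, b, value)
    return sorted(lit for lit, a in coeffs.items()
                  if value(lit) is None and a > s)

# 3*x1 + 2*x2 + x3 >= 4 with x3 = 0: slack is (3 + 2) - 4 = 1, so both
# x1 (3 > 1) and x2 (2 > 1) are implied -- two literals implied by the
# same constraint, as remarked in the text.
assignment = {"x3": False}
print(implied_literals({"x1": 3, "x2": 2, "x3": 1}, 4, assignment.get))
```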
        <p>During the search, let x_j = v_x@k denote the assignment of value v_x to variable x_j at decision level k. In the following sections we often need to associate dependencies (or an explanation) with each implied variable assignment. Dependencies represent sufficient conditions for variable assignments to be implied. For example, let x_j = v_x be a truth assignment implied by applying the unit clause rule to w. Then the explanation for this assignment is the set of assignments associated with the false literals of w.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Constraint Operations</title>
        <p>
          It is well-known that the fundamental operation used to infer new constraints from propositional clauses is resolution [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. For PB constraints, instead of resolution, the technique of cutting planes [
          <xref ref-type="bibr" rid="ref5 ref9">5, 9</xref>
          ] can be used. This operation allows the linear combination of a set of constraints in order to generate a new constraint that is a logical consequence of the original constraints. For any two pseudo-Boolean constraints and coefficients c and c' we can combine them as follows:
c (∑_j a_j x_j ≥ b), c' (∑_j a'_j x_j ≥ b')  ⟹  c ∑_j a_j x_j + c' ∑_j a'_j x_j ≥ c·b + c'·b' (3)
        </p>
        <p>If c or c' is non-integer, a new constraint with non-integer coefficients may result after applying the cutting plane operation. In order to obtain a new constraint with integer coefficients, the rounding operation can be applied as follows:
∑_j a_j x_j ≥ b  ⟹  ∑_j ⌈a_j⌉ x_j ≥ ⌈b⌉ (4)
It should be noted that the rounding operation might weaken the constraint such that the number of models of the resulting constraint is larger. For example, suppose we have the constraint w : 2x1 + x2 + x3 ≥ 3. Applying a coefficient c = 0.5 to w we get a new constraint w' : x1 + 0.5x2 + 0.5x3 ≥ 1.5. Applying the rounding operation to w' results in a new constraint w'_r : x1 + x2 + x3 ≥ 2. Clearly, all models of w' are also models of w'_r, but not all models of w'_r are models of w'.</p>
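<p>The two operations can be replayed on this running example in a few lines of Python. This is a sketch under an assumed dictionary representation (literal → coefficient); exact rational arithmetic is used so the non-integer intermediate constraint is represented faithfully.</p>

```python
from fractions import Fraction
from math import ceil

# Cutting-plane combination and rounding on PB constraints represented as
# (coeffs: dict literal -> coefficient, b: right-hand side).

def combine(w1, b1, w2, b2, c1, c2):
    """Linear combination c1*(w1 >= b1) + c2*(w2 >= b2)."""
    coeffs = {}
    for w, c in ((w1, c1), (w2, c2)):
        for lit, a in w.items():
            coeffs[lit] = coeffs.get(lit, 0) + c * a
    return coeffs, c1 * b1 + c2 * b2

def round_up(coeffs, b):
    """Rounding: take the ceiling of every coefficient and of the RHS."""
    return {lit: ceil(a) for lit, a in coeffs.items()}, ceil(b)

# w: 2*x1 + x2 + x3 >= 3 scaled by c = 1/2 gives w': x1 + x2/2 + x3/2 >= 3/2;
# rounding w' yields the weaker x1 + x2 + x3 >= 2 from the text.
w_prime = combine({"x1": 2, "x2": 1, "x3": 1}, 3, {}, 0, Fraction(1, 2), 0)
print(round_up(*w_prime))
```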
        <p>
          Another operation that can be used on PB constraints is reduction [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. This operation allows
the removal of literals by subtracting the value of the coefficient from the right hand side.
        </p>
        <p>Consider the constraint w : x1 + 3x2 + 3x̄3 + 2x̄4 + x5 ≥ 7. If reduction is applied to w in order to remove x1 and x̄3, we get w' : 3x2 + 2x̄4 + x5 ≥ 3. Note that if w is a problem instance constraint, it is not possible to replace w with w'. Constraint w' is a logical consequence of w, but the converse is not true.</p>
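<p>A minimal sketch of the reduction operation, under the same assumed dictionary representation used above (not from the paper): each dropped literal's coefficient is subtracted from the right-hand side.</p>

```python
# Reduction: remove literals from a normalized PB constraint by subtracting
# their coefficients from the right-hand side, yielding a weaker constraint.

def reduce_literals(coeffs, b, drop):
    """Remove each literal in `drop`; the result is implied by the original."""
    remaining = {lit: a for lit, a in coeffs.items() if lit not in drop}
    new_b = b - sum(coeffs[lit] for lit in drop)
    return remaining, max(new_b, 0)

# w: x1 + 3*x2 + 3*(not x3) + 2*(not x4) + x5 >= 7; removing x1 and (not x3)
# gives w': 3*x2 + 2*(not x4) + x5 >= 3, matching the example in the text.
print(reduce_literals({"x1": 1, "x2": 3, "~x3": 3, "~x4": 2, "x5": 1}, 7,
                      {"x1", "~x3"}))
```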
        <p>
          Algorithms to generate cardinality constraints from general pseudo-Boolean constraints can be found in [
          <xref ref-type="bibr" rid="ref2 ref4">2, 4</xref>
          ]. These algorithms find the minimum number of literals that must be set to 1 in order to satisfy the constraint. This is achieved by accumulating the literal coefficients of the constraint, in decreasing order, starting with the highest a_j. Let m(w) denote the minimum number of literals to be set to true in order to satisfy constraint w. Then, cardinality reduction can be defined as follows:
w : ∑_j a_j l_j ≥ b  ⟹  ∑_j l_j ≥ m(w) (6)
One should note that the resulting constraint is weaker than the original general pseudo-Boolean constraint. More details on cardinality reduction can be found in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], which presents a stronger cardinality reduction.
        </p>
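<p>The accumulation procedure for m(w) is short enough to sketch directly. This is an illustration under an assumed representation (literal → coefficient dictionary), not the implementation from [2, 4].</p>

```python
# Cardinality reduction: m(w) is found by summing coefficients in decreasing
# order until the right-hand side b is reached.

def min_true_literals(coeffs, b):
    """Minimum number m(w) of literals that must be true to satisfy w."""
    total, m = 0, 0
    for a in sorted(coeffs.values(), reverse=True):
        if total >= b:
            break
        total += a
        m += 1
    return m

def cardinality_reduction(coeffs, b):
    """Replace sum(a_j * l_j) >= b by the weaker sum(l_j) >= m(w)."""
    return {lit: 1 for lit in coeffs}, min_true_literals(coeffs, b)

# For 3*l1 + 3*l2 + 2*l3 + l4 + l5 >= 7: 3 + 3 = 6 is not enough but
# 3 + 3 + 2 = 8 is, so m(w) = 3 and the result is l1 + ... + l5 >= 3.
print(cardinality_reduction({"l1": 3, "l2": 3, "l3": 2, "l4": 1, "l5": 1}, 7))
```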
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Pseudo-Boolean Algorithms</title>
      <p>Generally, pseudo-Boolean satisfiability (PB-SAT) algorithms follow the same structure as
propositional satisfiability (SAT) backtrack search algorithms. The search for a satisfying
assignment is organized as a decision tree (explored depth-first) where each node specifies an
assignment (also known as decision assignment) to a previously unassigned problem variable.
A decision level is associated with each decision assignment to denote its depth in the decision
tree.</p>
      <p>
        Algorithm 1 shows the basic structure of a PB-SAT algorithm. The decide procedure
corresponds to the selection of the decision assignment thus extending the current partial
assignment. If a complete assignment is reached, then a solution to the problem constraints has been
found. Otherwise, the deduce procedure applies Boolean Constraint Propagation (and
possibly other inference methods). If a conflict arises, i.e., a given constraint cannot be satisfied by extending the current partial assignment, then a conflict analysis [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] procedure is carried out to determine the level to which the search process can safely backtrack. Moreover, a no-good
constraint is also added to the set of problem constraints.
      </p>
      <p>The main goal of the conflict analysis procedure is to determine the correct explanation for a conflict situation and to backtrack (in many cases non-chronologically) to a decision level such that the conflict no longer occurs. Moreover, a no-good constraint results from this process. However, the strategy for no-good generation that results from the conflict analysis differs among several state of the art PB-SAT solvers. These different strategies are reviewed and analysed in section 4.</p>
      <p>
        Another approach to PB-SAT is to encode the problem constraints into a propositional satisfiability (SAT) problem [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and then use a powerful SAT solver on the new formulation. Although this approach is successful for some problem instances [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], in other cases the resulting SAT formulation is much larger than the original PB formulation, so that it imposes a huge overhead on the SAT solver.
      </p>
      <p>Algorithm 1 Generic Structure of Algorithms for PB-SAT Problem
while TRUE do
  if decide() then
    while deduce() = CONFLICT do
      blevel ← analyseConflict()
      if blevel ≤ 0 then
        return UNSATISFIABLE;
      else
        backtrack(blevel);
      end if
    end while
  else
    return SATISFIABLE;
  end if
end while</p>
    </sec>
    <sec id="sec-4">
      <title>Conflict Analysis</title>
      <p>
        Consider that the assignment of a problem variable x_j is inferred by Boolean Constraint Propagation due to a PB constraint w_i. In this case, w_i is referred to as the antecedent constraint [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] of the assignment to x_j. The antecedent assignment of x_j, denoted as A_wi(x_j), is defined as the set of assignments to problem variables corresponding to false literals in w_i. Similarly, when w_i becomes unsatisfied, the antecedent assignment of its corresponding conflict, A_wi(κ), will be the set of all assignments corresponding to false literals in w_i.
      </p>
      <p>
        The implication relationships of variable assignments during the PB-SAT solving process can be expressed as an implication graph [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In the implication graph, each vertex corresponds to a variable assignment or to a conflict vertex. The predecessors of a vertex are the other vertices corresponding to the assignments in A_wi(x_j). Next, several learning schemes are presented that result from analyzing the implication graph in a conflict situation.
      </p>
      <sec id="sec-4-1">
        <title>Propositional Clause Learning</title>
        <p>
          When a logical conflict arises, the implication sequence leading to the conflict is analysed to
determine the variable assignments that are responsible for the conflict. The conjunction of
these assignments represents a sufficient condition for the conflict to arise and, as such, its
negation must be consistent with the PB formula [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. This new constraint (known as the conflict constraint) is then added to the PB formula in order to avoid the repetition of the same conflict situation, thus pruning the search space.
        </p>
        <p>
          When an assignment to a problem variable x_j is implied by a PB constraint w_i, one can see it as the conjunction of A_wi(x_j) implying the assignment to x_j. Moreover, when a constraint
implies a given assignment, the coefficient reduction rule can be used to eliminate all positive
and unassigned literals except for the implied literal. Hence, the conflict analysis used in SAT
solvers by applying a sequence of resolution steps in a backward traversal of the implication
graph can be directly applied to PB formulas [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Additionally, techniques such as the
detection of Unique Implication Points (UIPs) [
          <xref ref-type="bibr" rid="ref11 ref18">11, 18</xref>
          ] can also be directly used in PB-SAT conflict
analysis. As a result, a new propositional clause is generated and added to the original formula.
        </p>
        <p>
          In the last pseudo-Boolean solver evaluation, some PB solvers used this approach, namely
bsolo [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] and PBS4 (an updated version of the original PBS solver [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]). This strategy is simple
to implement in PB solvers, since it is a straightforward generalization of the one used in SAT
solvers. Moreover, considering the use of lazy data structures [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] for clause manipulation,
the overhead of adding a large number of clauses during the search is smaller than with other
types of constraints.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Pseudo-Boolean Constraint Learning</title>
        <p>The use of PB constraint learning is motivated by the fact that PB constraints are more
expressive. It is known that a single PB constraint can represent a large number of propositional
clauses. Therefore, the potential pruning power of PB conflict constraints is much larger than
that of propositional clauses.</p>
        <p>
          The operation on PB constraints which corresponds to clause resolution is the cutting plane
operation (section 2.2). As such, to learn a general PB constraint, the conflict analysis algorithm
must perform a sequence of cutting plane steps instead of a sequence of resolution steps. In
each cutting plane step one implied variable is eliminated. As with the clause learning conflict
analysis, a backward traversal of the implication graph is performed and the implied variables
are considered in reverse order. The procedure stops when the conflict constraint is unit at a
previous decision level (1UIP cut) [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
        <p>After the conflict analysis, the algorithm uses the learned constraint to determine to which
level it must backtrack as well as implying a new assignment after backtracking. Therefore, the
learned constraint must be unsatisfied under the current assignment. Moreover, it must also be
an assertive constraint (must become unit after backtracking).</p>
        <p>
          Consider the application of a cutting plane step to two arbitrary constraints w1 with slack s1 and w2 with slack s2. Moreover, consider that α and β are used as the multiplying factors. In this situation, the slack of the resulting constraint, here denoted by s_r, is given by linearly combining the slacks of w1 and w2: s_r = (α · s1) + (β · s2). As such, before the application of each cutting plane step, the learning algorithm verifies if the resulting constraint is still unsatisfied under the current assignment. If it is not, the implied constraint must be reduced to lower its slack [
          <xref ref-type="bibr" rid="ref17 ref4">4, 17</xref>
          ]. This process is guaranteed to work since the repeated reduction of constraints will eventually lead to a simple clause with slack 0.
        </p>
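<p>Because slack is linear under the cutting plane combination, this check can be performed on the two scalar slacks before the (potentially large) combined constraint is materialized. The following sketch illustrates the idea; names and the set-based representation are our own assumptions.</p>

```python
# If w1 has slack s1 and w2 has slack s2, the combination alpha*w1 + beta*w2
# has slack alpha*s1 + beta*s2, so the learning algorithm can test that the
# resolvent stays unsatisfied (negative slack) without building it first.

def slack(coeffs, b, false_lits):
    return sum(a for lit, a in coeffs.items() if lit not in false_lits) - b

def resolvent_stays_conflicting(s1, s2, alpha, beta):
    return 0 > alpha * s1 + beta * s2

# w1: 3*x1 + x2 >= 3 with x1 false has slack 1 - 3 = -2. Combining it with a
# constraint of slack 1 keeps the conflict for 1*w1 + 1*w2 but not 1*w1 + 3*w2.
s1 = slack({"x1": 3, "x2": 1}, 3, {"x1"})
print(resolvent_stays_conflicting(s1, 1, 1, 1),
      resolvent_stays_conflicting(s1, 1, 1, 3))
```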
        <p>Algorithm 2 presents the pseudo-code for computing the conflict-induced PB constraint
w(k). This algorithm performs a sequence of cutting plane steps, starting from the unsatisfied
constraint w(c). Notice that this algorithm can also implement a clause learning scheme. In this
case, functions reduce1 and reduce2 remove all non-negative literals in w(k) and wi,
respectively, except for the implied literal. Next, these procedures can trivially reduce the obtained
constraint to a clause. In order to implement a general PB learning scheme, function reduce2
must eliminate only enough non-negative literals in wi to guarantee that after the cutting plane
step, the resulting constraint remains unsatisfied at the current decision level.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Other Learning Schemes</title>
        <p>
          Learning general PB constraints slows down the deduction procedure because the watched
literal strategy is not as efficient with general PB constraints as it is with clauses or cardinality
constraints [
          <xref ref-type="bibr" rid="ref16 ref4">4, 16</xref>
          ]. Note that in a clause, as well as in a cardinality constraint, it is only necessary
to watch a fixed number of literals, whereas in a general PB constraint the number of watched
literals varies during the execution of the algorithm.
        </p>
        <p>
          The approach for cardinality constraint learning used in Galena [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] is based on the approach described for the general pseudo-Boolean learning scheme. The difference is that a post-reduction procedure is carried out so that the learned constraint is reduced into a weaker cardinality constraint. In Algorithm 2 this would be done in function reduce3.
        </p>
        <p>Algorithm 2 Generic Pseudo-Boolean Learning Algorithm
// W corresponds to the set of constraints in the PB formula and w(c) to the conflicting constraint
V ← {x_j | x_j corresponds to a false literal in w(c) at current decision level};
w(k) ← reduce1(w(c));
while TRUE do
  x_j ← removeNext(V);
  w_i ← implyingConstraint(x_j);
  if (w_i ≠ NULL ∧ |V| &gt; 1) then
    w_i' ← reduce2(w_i, w(k));
    w(k) ← cutResolve(w(k), w_i', x_j);
    V ← (V \ {x_j}) ∪ {x_k | x_k corresponds to a false literal in w_i' at current decision level};
  else
    w(k) ← reduce3(w(k));
    Add w(k) to W;
    btLevel ← assertingLevel(w(k));
    if btLevel &lt; 0 then
      return CONFLICT;
    else
      backtrack(btLevel);
      return NO CONFLICT;
    end if
  end if
end while</p>
        <p>
          Finally, a hybrid learning scheme was already proposed and used in Pueblo [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. The
authors noted that any solver which performs PB learning can be modified to additionally
perform clause learning with no significant extra overhead. Moreover, despite the greater pruning
power of PB learning, clause learning has its own advantages: it always produces an assertive
constraint and it does not compromise as heavily the propagation procedure as general PB
learning. As such, in their solver Pueblo, they implement a hybrid learning method [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Implementation Issues</title>
      <p>In implementing a pseudo-Boolean solver, several technical issues must be addressed. In this section the focus is on generating the implication graph and on the use of cutting planes for PB constraint learning.</p>
      <sec id="sec-5-1">
        <title>Generating the Implication Graph</title>
        <p>
          It is widely known that lazy data structures [
          <xref ref-type="bibr" rid="ref12 ref17">12, 17</xref>
          ] for constraint manipulation are essential
for the solver’s performance. Nevertheless, the order of propagation of variable assignments in
BCP has not been thoroughly studied. In the version of bsolo submitted to the last PB solver
evaluation (PB’07) [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], the order of propagation in the implication graph was depth-first. In
this paper it is shown that by changing it to a breadth-first propagation, bsolo was able to
solve a larger number of instances (see section 6).
        </p>
        <p>When generating the implication graph in a breadth-first way, one can guarantee that there
is no other possible implication graph such that the length of the longest path between the
decision assignment vertex and the conflict vertex is lower than the one considered. Therefore,
the learned constraint is probably determined using a smaller number of constraints. Moreover,
considering that in PB formulations the same constraint can imply more than one variable
assignment, the motivation for a breadth-first propagation is larger than in SAT formulations.
</p>
      </sec>
      <sec id="sec-5-2">
        <title>Dealing with Large Coefficients</title>
        <p>When performing general PB learning, or any learning scheme that requires performing a sequence of cutting plane steps each time a conflict occurs, the coefficients of the learned constraints may grow very fast. Note that in each cutting plane step two PB constraints are linearly combined. Given two constraints ∑_j a_j l_j ≥ b and ∑_j c_j l_j ≥ d, the size of the largest coefficient of the resulting constraint may be max{b · d, max_j{a_j} · max_j{c_j}} in the worst case. Therefore, it is easy to see that during a sequence of cutting plane steps the size of the coefficients of the accumulator constraint may, in the worst case, grow exponentially in the number of cutting plane steps (which is of the same order as the number of literals assigned at the current level).</p>
        <p>One problem that may occur in the cutting plane operation is integer overflow. To avoid this problem, a maximum coefficient size was established (we used 10^6). Therefore, every time the solver performs a cutting plane step, the coefficients of the resulting constraint are checked to determine whether any of them is larger than the established limit. If one is, the solver repeatedly divides the constraint by 2 (followed by rounding) until its largest coefficient is lower than a second maximum coefficient size (we used 10^5).</p>
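<p>This coefficient-size control can be sketched in a few lines. The limits 10^6 and 10^5 are the values reported above; the representation and helper names are assumptions for illustration.</p>

```python
# Coefficient-size control: if any coefficient exceeds LIMIT after a cutting
# plane step, divide the constraint by 2 (with rounding) until the largest
# coefficient drops below TARGET.

LIMIT, TARGET = 10**6, 10**5

def ceil_div2(a):
    return (a + 1) // 2  # integer ceiling of a / 2 for a >= 0

def shrink(coeffs, b):
    if coeffs and max(coeffs.values()) > LIMIT:
        while max(coeffs.values()) >= TARGET:
            coeffs = {lit: ceil_div2(a) for lit, a in coeffs.items()}
            b = ceil_div2(b)
    return coeffs, b

# Five halvings bring 3,000,000 down to 93,750, which is below TARGET.
print(shrink({"x1": 3_000_000, "x2": 7}, 2_000_000))
```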
        <p>During the conflict analysis the accumulator constraint must always have negative slack. However, the division rule does not preserve the slack of the resulting constraint, since it does not guarantee that the slack of the resulting constraint is equal to the slack of the original one. This can be verified in the next example, where the constraint is divided by 2, followed by rounding:
3x1(0@1) + 3x2(0@1) + 3x3(1@1) + x4(1@1) ≥ 5, slack = −1
2x1(0@1) + 2x2(0@1) + 2x3(1@1) + x4(1@1) ≥ 3, slack = 0
As such, before dividing by 2 a coefficient associated with a slack contributing literal, the solver must check if it is odd. In this case it must perform a coefficient reduction step before the division (note that the coefficient reduction rule, when applied to slack contributing literals, preserves the slack):</p>
        <p>3x1(0@1) + 3x2(0@1) + 3x3(1@1) + x4(1@1) ≥ 5, slack = −1
3x1(0@1) + 3x2(0@1) + 2x3(1@1) ≥ 3, slack = −1
2x1(0@1) + 2x2(0@1) + x3(1@1) ≥ 2, slack = −1</p>
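<p>This odd-coefficient fix can be replayed in code. The sketch below (representation assumed, not from the paper) first applies coefficient reduction to slack-contributing literals with odd coefficients, then divides by 2 with rounding; on the example above the slack stays at −1 throughout.</p>

```python
# Division by 2 with rounding, preceded by coefficient reduction on
# slack-contributing (non-false) literals with odd coefficients, which
# preserves the slack. False literals may still round up, which only
# tightens the constraint.

def slack(coeffs, b, false_lits):
    return sum(a for lit, a in coeffs.items() if lit not in false_lits) - b

def halve_preserving_slack(coeffs, b, false_lits):
    reduced = dict(coeffs)
    for lit, a in coeffs.items():
        if lit not in false_lits and a % 2 == 1:
            reduced[lit] = a - 1  # coefficient reduction step
            b -= 1
    halved = {lit: (a + 1) // 2 for lit, a in reduced.items() if a > 0}
    return halved, (b + 1) // 2

# 3*x1 + 3*x2 + 3*x3 + x4 >= 5 with x1, x2 false (slack -1) becomes
# 2*x1 + 2*x2 + x3 >= 2, still with slack -1, as in the example.
w, b = halve_preserving_slack({"x1": 3, "x2": 3, "x3": 3, "x4": 1}, 5,
                              {"x1", "x2"})
print(w, b, slack(w, b, {"x1", "x2"}))
```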
      </sec>
      <sec id="sec-5-3">
        <title>An Initial Classification Step</title>
        <p>After implementing different learning schemes, it was observed that each of the versions proved
to be more effective than the others for some group of instances. One can easily conclude that
different learning schemes behave better (or worse) depending on some characteristics of the
instances given as input. For instance, a cardinality constraint learning scheme is more
appropriate to deal with an instance with a large number of cardinality constraints than a clause
learning scheme.</p>
        <p>
          Our goal was then to try to define an initial classification step that could choose the best
fitting learning scheme to solve a given problem instance. Therefore, in a very preliminary
work, algorithm C4.5 [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] was used to generate a decision tree that, given a problem instance, determines which learning scheme is more appropriate for it. The classification of each instance
was done according to structural attributes of the formula, namely number of variables, literals,
types of constraints, among others.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Experimental Results</title>
      <p>
        This section presents the experimental results of applying different learning schemes to the
small integer non-optimization benchmarks from the PB’07 evaluation [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. All the learning
schemes were implemented on top of bsolo, a state of the art PB solver. Experimental results
were obtained on an Intel Xeon 5160 server (3.0 GHz, 4 GB memory) running Red Hat Enterprise Linux WS 4. The CPU time limit for each instance was set to 1800 seconds.
      </p>
      <p>In Table 1, results for several learning schemes are presented. Each line of the table
represents a set of instances depending on their origin or encoded problem. Each column represents
a version of bsolo for a different learning scheme. Finally, each cell denotes the number of
benchmark instances that were found to be satisfiable/unsatisfiable/unknown.</p>
      <p>The basis for our work was the bsolo version submitted to PB’07 solver evaluation. This
version implements a clause learning scheme and is identified as CL1. Version CL2 corresponds to the previous version, but with a breadth-first approach for generating the implication graph. Next, results are presented for the general PB learning scheme PB and the cardinality learning scheme CARD. One should note that both the PB and CARD learning schemes are hybrid (as in Pueblo) and always learn an assertive clause. Preliminary results showed that pure PB and cardinality learning schemes were not as effective as a hybrid learning scheme. In version COMB,
an initial classification step was used in order to select the best fitting learning scheme for each
instance (see section 5.3). The training set for the COMB version was composed of 100 instances
(out of the total of 371 instances).</p>
      <p>It can be observed from Table 1 that the original solver was greatly improved just by
changing the propagation order. Version CL2 was able to solve 17 more instances, most of them in the tsp benchmark set. Both the PB and CARD learning schemes improve on the original version of the solver. However, in the PB version, the overhead associated with maintaining the additional no-good PB constraints is a drawback, in particular on the FPGA and pigeon hole instances. Overall, the CARD learning scheme performs much better, proving to be a nice compromise between the pruning power of the generated constraints and the underlying overhead of constraint manipulation. The COMB version shows that a combination of all these learning schemes allows an even better performance. Finally, one should note that improvements are essentially on unsatisfiable instances; the gain on satisfiable instances is smaller.</p>
      <p>In Table 2, the results of the best known solvers can be checked and compared with bsolo.
Pueblo is able to solve 6 more instances than the current version of bsolo. By using a hybrid
learning scheme, Pueblo seems to have found a nice balance between the additional overhead
of new no-good constraints (mostly clauses) and the pruning power of new no-good PB
constraints. Nevertheless, the work presented in this paper shows that bsolo can be competitive
in a large set of problem instances.</p>
      <p>
        In the Pseudo-Boolean Optimization (PBO) problem, the objective is to find an assignment
such that all constraints are satisfied and a linear cost function is optimized. PB solvers can
be easily modified [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] in order to also tackle this problem. Table 3 presents the results of the
new version of bsolo in comparison with other solvers. Each cell contains the following
information: the number of instances for which the optimum value was found, the number of
satisfiable but non-optimal solutions, the number of unsatisfiable instances and the number of
unknown instances. Clearly, bsolo is able to prove optimality for a larger number of instances
than other solvers. Additionally, due to the new learning schemes, bsolo is also able to find a
larger number of non-optimal solutions.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Conclusions</title>
      <p>Considering the disparity of results concerning the application of learning schemes in several state of the art PB solvers, the main goal of this work was to provide a contribution by implementing them in the same platform. Our results confirm that hybrid learning schemes perform better on a large set of instances. Moreover, our results show that cardinality constraint learning is a more effective and robust learning scheme than the others. It obtained much better results than the original clause learning scheme and also than our implementation of the PB hybrid learning scheme included in Pueblo. In our opinion, cardinality constraints are easier to propagate than general PB constraints and are also more expressive than clauses. Therefore, this learning scheme seems a reasonable compromise between PB learning and pure clause learning.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Aloul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Markov</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Sakallah</surname>
          </string-name>
          .
          <article-title>Generic ILP versus specialized 0-1 ILP: An update</article-title>
          .
          <source>In Proceedings of the International Conference on Computer Aided Design</source>
          , pages
          <fpage>450</fpage>
          -
          <lpage>457</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Barth</surname>
          </string-name>
          .
          <article-title>Logic-Based 0-1 Constraint Programming</article-title>
          . Kluwer Academic Publishers,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D. Le</given-names>
            <surname>Berre</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Parrain</surname>
          </string-name>
          .
<article-title>On Extending SAT-solvers for PB Problems</article-title>
.
<source>14th RCRA Workshop on Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion</source>
          ,
          <year>July 2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Chai</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuehlmann</surname>
          </string-name>
          .
          <article-title>A Fast Pseudo-Boolean Constraint Solver</article-title>
          .
          <source>In Proceedings of the Design Automation Conference</source>
          , pages
          <fpage>830</fpage>
          -
          <lpage>835</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
<string-name>
  <given-names>V.</given-names>
  <surname>Chvátal</surname>
</string-name>
.
<article-title>Edmonds polytopes and a hierarchy of combinatorial problems</article-title>
          .
          <source>Discrete Mathematics</source>
          ,
          <volume>4</volume>
          :
          <fpage>305</fpage>
          -
          <lpage>337</lpage>
          ,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Davis</surname>
          </string-name>
,
<string-name>
  <given-names>G.</given-names>
  <surname>Logemann</surname>
</string-name>
, and
<string-name>
  <given-names>D.</given-names>
  <surname>Loveland</surname>
</string-name>
.
<article-title>A machine program for theorem-proving</article-title>
          .
          <source>Communications of the Association for Computing Machinery</source>
          ,
          <volume>5</volume>
          :
          <fpage>394</fpage>
          -
          <lpage>397</lpage>
          ,
          <year>July 1962</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Dixon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ginsberg</surname>
          </string-name>
,
<string-name>
  <given-names>E.</given-names>
  <surname>Luks</surname>
</string-name>
, and
<string-name>
  <given-names>A.</given-names>
  <surname>Parkes</surname>
</string-name>
.
<article-title>Generalizing Boolean Satisfiability I: Background and survey of existing work</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          ,
          <volume>21</volume>
          :
          <fpage>193</fpage>
          -
          <lpage>243</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
<string-name>
  <given-names>N.</given-names>
  <surname>Eén</surname>
</string-name>
and
<string-name>
  <given-names>N.</given-names>
  <surname>Sörensson</surname>
</string-name>
.
<article-title>Translating Pseudo-Boolean Constraints into SAT</article-title>
          .
          <source>Journal on Satisfiability, Boolean Modeling and Computation</source>
          ,
          <volume>2</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.E.</given-names>
            <surname>Gomory</surname>
          </string-name>
          .
          <article-title>Outline of an algorithm for integer solutions to linear programs</article-title>
          .
          <source>Bulletin of the American Mathematical Society</source>
          ,
          <volume>64</volume>
          :
          <fpage>275</fpage>
          -
          <lpage>278</lpage>
          ,
          <year>1958</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Manquinho</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Marques-Silva</surname>
          </string-name>
          .
          <article-title>Effective lower bounding techniques for pseudo-boolean optimization</article-title>
          .
          <source>In Proceedings of the Design and Test in Europe Conference</source>
          , pages
          <fpage>660</fpage>
          -
          <lpage>665</lpage>
          ,
          <year>March 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Marques-Silva</surname>
          </string-name>
          and
<string-name>
  <given-names>K.</given-names>
  <surname>Sakallah</surname>
</string-name>
.
<article-title>GRASP: A new search algorithm for satisfiability</article-title>
          .
          <source>In Proceedings of the International Conference on Computer Aided Design</source>
          , pages
          <fpage>220</fpage>
          -
          <lpage>227</lpage>
          ,
          <year>November 1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moskewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Madigan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Malik</surname>
          </string-name>
.
<article-title>Chaff: Engineering an Efficient SAT Solver</article-title>
          .
          <source>In Proceedings of the Design Automation Conference</source>
          , pages
          <fpage>530</fpage>
          -
          <lpage>535</lpage>
          ,
          <year>June 2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Quinlan</surname>
          </string-name>
          .
<source>C4.5: Programs for Machine Learning</source>
. Morgan Kaufmann,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
<string-name>
  <given-names>J. A.</given-names>
  <surname>Robinson</surname>
</string-name>
.
          <article-title>A machine-oriented logic based on the resolution principle</article-title>
          .
          <source>Journal of the Association for Computing Machinery</source>
          ,
          <volume>12</volume>
          (
          <issue>1</issue>
          ):
          <fpage>23</fpage>
          -
          <lpage>41</lpage>
          ,
          <year>January 1965</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>O.</given-names>
            <surname>Roussel</surname>
          </string-name>
          and
<string-name>
  <given-names>V.</given-names>
  <surname>Manquinho</surname>
</string-name>
.
<article-title>Third Pseudo-Boolean Evaluation 2007</article-title>
.
<source>http://www.cril.univ-artois.fr/PB07</source>
,
<year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>H.</given-names>
            <surname>Sheini</surname>
          </string-name>
          and
<string-name>
  <given-names>K.</given-names>
  <surname>Sakallah</surname>
</string-name>
.
<article-title>Pueblo: A Modern Pseudo-Boolean SAT Solver</article-title>
          .
          <source>In Proceedings of the Design and Test in Europe Conference</source>
          , pages
          <fpage>684</fpage>
          -
          <lpage>685</lpage>
          ,
          <year>March 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>H.</given-names>
            <surname>Sheini</surname>
          </string-name>
          and
<string-name>
  <given-names>K.</given-names>
  <surname>Sakallah</surname>
</string-name>
.
<article-title>Pueblo: A Hybrid Pseudo-Boolean SAT Solver</article-title>
          .
          <source>Journal on Satisfiability, Boolean Modeling and Computation</source>
          ,
          <volume>2</volume>
          :
          <fpage>157</fpage>
          -
          <lpage>181</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Madigan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Moskewicz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Malik</surname>
          </string-name>
          .
          <article-title>Efficient Conflict Driven Learning in a Boolean Satisfiability Solver</article-title>
          .
          <source>In Proceedings of the International Conference on Computer Aided Design</source>
          , pages
          <fpage>279</fpage>
          -
          <lpage>285</lpage>
          ,
          <year>November 2001</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>