<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Solving hypertree-structured CSPs: Sequential and parallel approaches</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohammed Lalou</string-name>
          <email>mohammed.lalou@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zineb Habbas</string-name>
          <email>zineb@univ-metz.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kamal Amroun</string-name>
          <email>amroun25@yahoo.fr</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>LITA, University of Metz</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Bejaia</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Bejaia</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Solving a CSP is NP-complete in general. However, various subsets of CSPs can be solved in polynomial time, and some of them can be identified by analyzing their structure. Unfortunately, the methods proposed for exploiting these structural properties are not efficient in practice, so exploiting these properties to solve CSPs remains a crucial challenge. In this paper, we propose efficient algorithms that exploit these structural properties, for both sequential and parallel resolution. Experiments on academic benchmarks show the efficiency of our approach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>time assumptions. The experimental results outline the practical efficiency
of this algorithm and its performance in terms of memory space.</p>
      <p>
        This paper is organized as follows: Section 2 gives preliminaries on
constraint satisfaction problems and well-known CSP decomposition methods,
developing in particular the (generalized) hypertree decomposition
method, which is the most general one. In Section 3, we present our
sequential algorithm, called S-HBR (Sequential Hash Based Resolution), which
solves hypertree-structured CSPs, and we compare it experimentally
with the algorithm proposed in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In Section 4, we
present our parallel algorithm, and in Section 5 we give experimental
results for this algorithm. Finally, in Section 6, we give our conclusion and
some perspectives on this work.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Preliminary notions</title>
      <p>
        The notion of Constraint Satisfaction Problem CSP was introduced by [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>Definition 1. Constraint Satisfaction Problem</title>
        <p>A constraint satisfaction problem is defined as a 3-tuple P = ⟨X, D, C⟩ where:
X = {x1, x2, ..., xn} is a set of n variables.</p>
        <p>D = {d1, d2, ..., dn} is a set of finite domains; a variable xi takes its values
in its domain di.</p>
        <p>C = {C1, C2, ..., Cm} is a set of m constraints. Each constraint Ci is a pair
(S(Ci), R(Ci)) where S(Ci) ⊆ X is a subset of variables, called the scope of
Ci, and R(Ci) ⊆ ∏_{xk ∈ S(Ci)} dk is the constraint relation, which specifies the
allowed value combinations.</p>
        <p>A solution of a CSP is an assignment of values to variables which
satisfies all constraints.</p>
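Definition 1 can be sketched in a few lines of code. The following is a minimal illustration, with invented variables, domains and constraints (none of these names come from the paper); a constraint is a (scope, relation) pair and a solution is an assignment whose restriction to every scope lies in the corresponding relation.

```python
from itertools import product

# Hypothetical toy CSP: X, D, C as in Definition 1.
X = ["x1", "x2", "x3"]
D = {"x1": [1, 2], "x2": [1, 2], "x3": [1, 2, 3]}
# Each constraint Ci = (S(Ci), R(Ci)): scope plus the set of allowed tuples.
C = [
    (("x1", "x2"), {(1, 2), (2, 1)}),             # encodes x1 != x2
    (("x2", "x3"), {(1, 1), (2, 2), (2, 3)}),
]

def solve(X, D, C):
    """Enumerate all assignments; return the first one satisfying every constraint."""
    for values in product(*(D[x] for x in X)):
        a = dict(zip(X, values))
        if all(tuple(a[v] for v in scope) in rel for scope, rel in C):
            return a
    return None

print(solve(X, D, C))
```

This brute-force check is of course exponential; the structural methods discussed next are precisely about avoiding it.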
        <p>
          Definition 2. Hypergraph: The constraint hypergraph [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] of a CSP
P = ⟨X, D, C⟩ is given by H = ⟨V, E⟩, where E is the set of
hyperedges corresponding to the scopes of the constraints in C and V is the set of
variables of P.
        </p>
        <p>
          In this paper, hyperedges(H) is the set of hyperedges of the
hypergraph H. If h is a hyperedge, then var(h) is the set of variables of h.
        </p>
        <p>
          Definition 3. Join tree: A join tree for a hypergraph H is a tree T whose nodes are
the hyperedges of H, such that, whenever a vertex v of H occurs in two
hyperedges e1 and e2, then v occurs in each node of the unique path connecting e1
and e2 in T.
        </p>
        <p>
          Definition 4. Hypertree: A hypertree for a hypergraph H is a triple
⟨T, χ, λ⟩ where T = (N, E) is a rooted tree, and χ and λ are labelling
functions which associate with each vertex p ∈ N two sets χ(p) ⊆ var(H)
and λ(p) ⊆ hyperedges(H). If T′ = (N′, E′) is a subtree of T, we define
χ(T′) = ∪_{v ∈ N′} χ(v). We denote the set of vertices N of T by vertices(T)
and the root of T by root(T). Tp denotes the subtree of T rooted at the node p.
        </p>
        <p>
          Proposition 1. A CSP whose structure is acyclic can be solved in
polynomial time [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ].
        </p>
        <p>The goal of all structural decomposition methods is to transform a CSP
into an equivalent acyclic CSP, which can be solved in polynomial time.
A decomposition method D associates with each hypergraph H a parameter
called the D-width of H. The method D ensures that, for a fixed k,
each CSP instance with D-width ≤ k is tractable and can therefore be solved in
polynomial time. Among these methods, generalized hypertree
decomposition (GHD) dominates all the other structural decomposition methods.
In the next paragraph, we present the (generalized) hypertree decomposition
method.</p>
        <p>
          Definition 5. A generalized hypertree decomposition [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] of a hypergraph
H = ⟨V, E⟩ is a hypertree HD = ⟨T, χ, λ⟩ which satisfies the following
conditions:
1. For each edge h ∈ E, there exists p ∈ vertices(T) such that var(h) ⊆
χ(p). We say that p covers h.
2. For each variable v ∈ V, the set {p ∈ vertices(T) | v ∈ χ(p)} induces a
connected subtree of T.
3. For each vertex p ∈ vertices(T), χ(p) ⊆ var(λ(p)).
        </p>
        <p>A hypertree decomposition of a hypergraph H = ⟨V, E⟩ is a
generalized hypertree decomposition HD = ⟨T, χ, λ⟩ which additionally satisfies
the following special condition:
for each vertex p ∈ vertices(T), var(λ(p)) ∩ χ(Tp) ⊆ χ(p).</p>
        <p>The width of a (generalized) hypertree decomposition ⟨T, χ, λ⟩ is
max_{p ∈ vertices(T)} |λ(p)|. The (generalized) hypertree width (g)hw(H) of a
hypergraph H is the minimum width over all its (generalized) hypertree
decompositions.</p>
        <p>A hyperedge h of a hypergraph H = ⟨V, E⟩ is strongly covered in
HD = ⟨T, χ, λ⟩ if there exists p ∈ vertices(T) such that the vertices of h
are contained in χ(p) and h ∈ λ(p). A (generalized) hypertree decomposition
HD = ⟨T, χ, λ⟩ of H = ⟨V, E⟩ is called complete if every hyperedge h of H
is strongly covered in HD.</p>
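Conditions 1 and 3 of Definition 5 can be checked mechanically. Below is a small sketch on an invented two-node decomposition (all hypergraph and node names are hypothetical); condition 2 (connectedness) is only noted in a comment, since for a two-node tree it is trivially satisfied, and a general check would verify that each variable's occurrence set induces a connected subtree.

```python
# Hypothetical hypergraph: each hyperedge h maps to var(h).
hyperedges = {"h1": {"a", "b"}, "h2": {"b", "c"}, "h3": {"c", "d"}}
# Candidate decomposition: rooted tree as a parent map, plus chi and lambda labels.
tree = {"p1": None, "p2": "p1"}          # p1 is the root
chi = {"p1": {"a", "b", "c"}, "p2": {"c", "d"}}
lam = {"p1": {"h1", "h2"}, "p2": {"h3"}}

def is_ghd(hyperedges, tree, chi, lam):
    nodes = list(tree)
    # Condition 1: every hyperedge is covered by some decomposition node.
    cond1 = all(any(vs <= chi[p] for p in nodes) for vs in hyperedges.values())
    # Condition 3: chi(p) is contained in var(lambda(p)).
    cond3 = all(chi[p] <= set().union(*(hyperedges[h] for h in lam[p]))
                for p in nodes)
    # Condition 2 holds trivially for this two-node tree (single edge path);
    # a full implementation would check subtree connectedness per variable.
    return cond1 and cond3

print(is_ghd(hyperedges, tree, chi, lam))
```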
      </sec>
    </sec>
    <sec id="sec-3">
      <title>A sequential algorithm</title>
      <sec id="sec-3-1">
        <title>The basic algorithm</title>
        <p>
          The sequential resolution of a CSP represented by its hypertree
decomposition is given by the following algorithm [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] (see Algorithm 1), called in this
paper the BA algorithm (Basic Algorithm).
        </p>
        <sec id="sec-3-1-1">
          <title>Algorithm 1: BA algorithm</title>
          <p>1: Input: HD = ⟨T, χ, λ⟩
2: Step 1: complete the hypertree decomposition HD.
3: Step 2: solve each λ-term using a multi-join operation.
4: Step 3: solve the resulting CSP using semi-join operations.
5: Output: a solution of the given problem.</p>
          <p>The second step consists of solving the sub-problem represented by
the λ-term of each node of the hypertree. As a result, we obtain a new
hypertree in which each λ-term is replaced by a unique constraint relation
Rp corresponding to the projection onto the variables in χ(p) of the join of
the constraint relations in λ(p). More formally, Rp = (⋈_{t ∈ λ(p)} t)[χ(p)]. This
operation is expensive. The third step consists of semi-join operations, which
are also expensive.</p>
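The operation Rp = (⋈_{t ∈ λ(p)} t)[χ(p)] can be sketched directly, assuming relations represented as lists of dictionaries (the relation contents below are invented for illustration):

```python
from itertools import product

def join(R1, R2):
    """Natural join: merge pairs of tuples that agree on their shared variables."""
    out = []
    for t1, t2 in product(R1, R2):
        if all(t1[v] == t2[v] for v in t1.keys() & t2.keys()):
            out.append({**t1, **t2})
    return out

def project(R, vars_):
    """Project relation R onto vars_, removing duplicate tuples."""
    seen, out = set(), []
    for t in R:
        key = tuple(sorted((v, t[v]) for v in vars_))
        if key not in seen:
            seen.add(key)
            out.append({v: t[v] for v in vars_})
    return out

# Two hypothetical relations in lambda(p), with chi(p) = {a, c}.
R1 = [{"a": 1, "b": 1}, {"a": 2, "b": 2}]
R2 = [{"b": 1, "c": 5}, {"b": 3, "c": 6}]
Rp = project(join(R1, R2), {"a", "c"})
print(Rp)
```

The nested-loop join here is exactly the expensive part that the hash-based S-HBR algorithm of the next section replaces.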
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>The S-HBR algorithm</title>
        <p>Some notation: #i represents the intersection variables of the
constraint Ci with the set of all variables in the union of the Cj with j &lt; i in a
given order. HTi is the hash table associated with Ri.</p>
        <p>
          The S-HBR (Sequential Hash Based Resolution) algorithm is
formally presented as Algorithm 2. It proceeds in two steps: the SP-HBR
(Sub-Problem Hash Based Resolution) algorithm (lines 3 to 6) is an
optimization of the join operation, and the A-HBR (Acyclic Hash Based
Resolution) algorithm (line 7) is an optimization of the classical acyclic
solving algorithm [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>We now give more details about the SP-HBR and A-HBR algorithms.</p>
        <p>a) SP-HBR algorithm: The result of the SP-HBR execution is a
join tree in which every node n contains only one table Rn, composed of the
tuples satisfying all of the sub-problem's constraints. Nodes of this join tree are
represented as follows: Rn = ⟨χ(p), Hrel(p)⟩, where χ(p) is the set of
variables of the node p and Hrel(p) is the constraint relation generated by
the join operation. Additionally, for the leaf nodes, the tuples of Hrel(p) are
hashed on the intersection variables with the parent node of p. This algorithm
decomposes each node of the join tree into connected components. Then it
applies the join operation to each component using the hash-join principle.</p>
        <p>Algorithm 2: Sequential Hash Based Resolution (S-HBR)
1: Input: a hypertree ⟨T, χ, λ⟩ of a hypergraph H = ⟨V, E⟩, where
T is a tree, χ associates with each node t of T a set of variables χ(t) ⊆ V, and λ
a set of hyperedges λ(t) ⊆ E.
2: Output: a solution.
3: Step 1:
4: for each node n of T do
5:   Apply the SP-HBR algorithm to the node n.
6: end for
7: Step 2: Apply the A-HBR algorithm to T.</p>
        <p>To do this, we establish an order among the relations in the same
component. Then we hash each relation Ri of the constraint Ci, except the
first one, on its intersection variables with the relations of the constraints that
precede Ci in this order. The join operation is applied using the Join
procedure. These procedures cannot be described in this paper for
space reasons. We illustrate SP-HBR with the following example.</p>
        <p>Example 1. We consider a node p of a hypertree HD labelled as follows:
λ(p) = {C1, C2, C3, C4, C5, C6}, with S(C1) = {a, b, c}, S(C2) = {i, j},
S(C3) = {a, d}, S(C4) = {d, b, f}, S(C5) = {f, g}, S(C6) = {i, k}.</p>
        <p>There are two connected components, R1 = {C1, C3, C4, C5} and R2 =
{C2, C6}. According to the order of the constraints in each component, we
have:
#3 = S(C3) ∩ S(C1) = {a}, #4 = S(C4) ∩ {S(C1) ∪ S(C3)} = {b, d},
#5 = S(C5) ∩ {S(C1) ∪ S(C3) ∪ S(C4)} = {f}, #6 = S(C6) ∩ S(C2) = {i}
(if C is a hyperedge, then S(C) denotes the scope of C).</p>
        <p>Therefore, for the first component, the tuples of R3 will be hashed on the
set of intersection variables #3 = {a}, and those of R4 and R5 on, respectively,
#4 = {b, d} and #5 = {f}. For the second component, the tuples of R6
will be hashed on #6 = {i}. After computing the cartesian product of the
components, the resulting tuples t are hashed on the intersection
variables with the parent node of p, if p is a leaf. Thus, t will be ready for the
semi-join operation in the A-HBR algorithm. We do not add t to its related
partition P unless P = ∅. Thus, we will have one and only one tuple
in each partition, eliminating duplicated tuples. This elimination has
no impact on the resolution because, in the A-HBR algorithm, each parent
node is filtered on its children 1. Moreover, we are only interested in obtaining
one solution of the CSP.</p>
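The hash-join principle used here can be sketched as follows (illustrative data and helper names, not the paper's actual Join procedure): each relation Ri is hashed on its intersection variables #i, so probing touches only one partition instead of scanning the whole relation.

```python
from collections import defaultdict

def hash_join(acc, R, shared):
    """Join the accumulated tuples `acc` with relation R, hashed on `shared`."""
    table = defaultdict(list)
    for t in R:
        table[tuple(t[v] for v in shared)].append(t)   # build phase: hash on #i
    out = []
    for t in acc:                                       # probe phase
        for u in table[tuple(t[v] for v in shared)]:
            out.append({**t, **u})
    return out

# Hypothetical relations for C1 and C3 of Example 1; #3 = {a}.
R1 = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
R3 = [{"a": 1, "d": 7}, {"a": 9, "d": 8}]
print(hash_join(R1, R3, ["a"]))
```

Only the partition with matching hash key is visited per probe tuple, which is why S-HBR gains most on large relations.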
        <p>b) The A-HBR algorithm: The A-HBR algorithm tests the
consistency of an acyclic constraint network and generates a solution if one exists.
(1: It is sufficient to have only one tuple per partition in the child node relations.)
This algorithm takes the hypertree resulting from the SP-HBR algorithm
as its input. The A-HBR algorithm (see Algorithm 3) proceeds in two
steps: the CONTRACT step and the S SEARCH step. The first step
contracts the rooted hypertree HT. A semi-join operation is associated
with each contraction operation. The result is a hypertree HT′ that is
directionally arc consistent from the root toward the leaves (line 3). The second
step is the solution search operation, which develops the solution from the root
of HT′ to its leaf nodes (line 4).</p>
        <p>Algorithm 3: Acyclic Hash Based Resolution algorithm (A-HBR)
1: Input: a hypertree HT = ⟨T, χ, λ⟩
2: Output: determine directional arc consistency and generate a solution.
3: HT′ = CONTRACT(HT)
4: S SEARCH(HT′)</p>
        <p>The CONTRACT algorithm enforces directional arc consistency on
the input hypertree. Given a hypertree, a contraction operation is performed
between each leaf node and its parent. For each contraction operation, we
hash each tuple t of a parent node (p) relation on its intersection variables with
its leaf children ai. We denote by hi the hashed values. If the partitions of the
hash tables of the ai related to the hi are all non-empty, we hash t on the
intersection variables of p with its own parent node and save it. Otherwise, we
eliminate t. The leaf nodes ai are marked and considered pruned. We repeat this
with the resulting hypertree, and so on, until all the nodes, except the root,
are marked. The final result of the contraction of a parent
node p with its leaf children is a node p′ = ⟨χ(p′), Hrel(p′)⟩ whose
relation is arc consistent with the relations of its child nodes and hashed
on the intersection variables of p with its parent node. If the hashed relation
Hrel(p′) is empty, the problem has no solution.</p>
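One contraction step is, at its core, a hash-based semi-join of the parent relation with a child relation. A minimal sketch, with invented relations and helper names:

```python
from collections import defaultdict

def build_hash(R, shared):
    """Hash relation R on the intersection variables `shared`."""
    table = defaultdict(list)
    for t in R:
        table[tuple(t[v] for v in shared)].append(t)
    return table

def semi_join(parent, child, shared):
    """Keep a parent tuple only if its hash key hits a non-empty child partition."""
    table = build_hash(child, shared)
    return [t for t in parent if table[tuple(t[v] for v in shared)]]

# Hypothetical parent and leaf-child relations sharing the variable b.
parent = [{"a": 1, "b": 1}, {"a": 2, "b": 2}]
child = [{"b": 1, "c": 3}]
print(semi_join(parent, child, ["b"]))
```

Applying this filter bottom-up until only the root remains, and re-hashing the surviving tuples toward the grandparent, is exactly the CONTRACT loop described above.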
        <p>The S SEARCH algorithm is a classical backtrack-free algorithm applied
to the directionally arc-consistent hypertree resulting from the execution of the
CONTRACT algorithm.</p>
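Why no backtracking is needed can be shown on a toy example (illustrative structure, not the paper's S SEARCH code): once the tree is directionally arc consistent, every tuple chosen at a parent is guaranteed a compatible tuple in each child, so a single top-down pass suffices.

```python
# Hypothetical arc-consistent join tree: node -> children, node -> relation.
tree = {"root": ["c1"], "c1": []}
rel = {
    "root": [{"a": 1, "b": 2}],
    "c1":   [{"b": 2, "c": 3}, {"b": 9, "c": 9}],
}

def extract(node, chosen):
    """Pick any tuple consistent with the assignment so far; recurse on children."""
    t = next(u for u in rel[node]
             if all(u[v] == chosen[v] for v in u.keys() & chosen.keys()))
    chosen.update(t)
    for child in tree[node]:
        extract(child, chosen)
    return chosen

print(extract("root", {}))
```

The `next(...)` call never fails on an arc-consistent tree, which is the backtrack-free property.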
      </sec>
      <sec id="sec-3-3">
        <title>Complexity of S-HBR</title>
        <p>The complexity of the SP-HBR algorithm is
O(∑_{i=1}^{h−1} r · P^i),
where r is the maximum relation size, h is the hypertree width, and P is the
maximum number of tuples in a hash partition on the variables of #
(# represents the intersection variables of Ci with all variables in the Cj with
j &lt; i). For the A-HBR algorithm, the complexity is O(nP²), where n is
the number of constraints in the obtained CSP. But in our case, we have
only one tuple per partition, so the complexity of the A-HBR algorithm is only
O(n).</p>
      </sec>
      <sec id="sec-3-4">
        <title>Experiments</title>
        <p>
          We have implemented the S-HBR algorithm and the basic algorithm [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]
(noted here BA). We have tested the two solvers on a set of CSP
benchmarks. This benchmark collection is represented in the new XCSP 2.1
format proposed by the organizing committee of the 3rd international
CSP solver competition 2 [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. The solvers take as input CSP instances
and the corresponding hypertrees in GML format. For this, we
used a GML parser proposed by Raitner and Himsolt 3 and another one,
proposed by Roussel 4, for the XCSP files. The experiments were run on Linux
on an Intel Pentium IV with a 2.4 GHz CPU and 600 Mb of RAM. In
our results, |V|, |E| and |R| represent respectively the number of variables, the
number of constraints and the maximum number of tuples per relation
of a given CSP. Nd and HTW are respectively the number of nodes of the
hypertree and the hypertree width. S-HBR(sec) gives the S-HBR
computational time in seconds, and BA(sec) the time of the BA algorithm.
Table 1 summarizes our experiments. The hash function used in this
paper is
        </p>
        <p>f(t) = ∑_{i=1}^{n} (xi + 1) · 10^{ln(f(t))+1}    (2)
We observe that S-HBR clearly improves on the BA algorithm in terms of
CPU time for all the instances.</p>
        <p>The S-HBR algorithm obtains even better results on the instances that
have large relations, because it browses only a subset of each relation rather
than the whole relation, as BA does.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>The parallel algorithm</title>
      <p>In this section, we introduce a parallel version of our previous sequential
algorithm S-HBR. We call it P-HTR (Parallel Hypertree Resolution)
and it is described by Algorithm 4. This algorithm uses both
the pipeline technique and the parallel tree contraction technique. The
pipeline avoids the storage of intermediate results, so it reduces the
memory space explosion. Parallel tree contraction is the best known
approach suited to the semi-join operation. However, the known parallel
tree contraction considers, at each iteration, only one leaf child
for each parent node. To increase the degree of parallelism in this operation, we
developed a new variant of this technique, based on partitioning the
parent node relation (the critical resource) among all processors, which allows
asynchronous updating.
2 http://www.cril.univ-artois.fr/ lecoutre/research/benchmarks/benchmarks.html
3 http://www.cs.rpi.edu/ puninj/XGMML/GML-XGMML/gml-parser.html
4 http://www.cril.univ-artois.fr/ roussel/CSP-XML-parser/
[Table 1 instances: 3-insertions-3-3, domino-100-100, domino-100-200,
domino-100-300, series-7, hanoi-7, haystacks-06, haystacks-07, langford-2-4,
Renault, bf-1355-e-63, bf-1355-g-63, bf-2670-b, bf-2670-c.]</p>
      <p>The P-HTR procedure (Algorithm 4) proceeds in two steps. The first
concerns the parallel resolution of the sub-problems using the P-SBR algorithm
(line 4). The second step is the resolution of the whole problem,
performed using the P-Solving algorithm (line 6).</p>
      <p>Algorithm 4: P-HTR (Parallel HyperTree Resolution) algorithm
1: Input: a hypertree ⟨T, χ, λ⟩ of a hypergraph H = (V(H), E(H))
2: Output: the solution of the problem
3: for each node n of T in parallel do
4:   P-SBR(n) // Apply the P-SBR algorithm to the node n.
5: end for
6: P-Solving(T′) // Apply the P-Solving algorithm to T′ (the hypertree
obtained in step 3).</p>
      <sec id="sec-4-1">
        <title>The P-SBR algorithm</title>
        <p>The P-SBR (Parallel Sub-Problem Resolution) algorithm uses the hashing
technique. The P-SBR algorithm proceeds like the SP-HBR algorithm.
After establishing an order on the constraints, it builds a pipeline
composed of join operators, in which the first relation is the probe relation and the
remaining ones are build relations. The join operation is performed
according to the hashed pipelined join principle. Thus, each build relation is hashed
on its intersection variables (#i) with the union of the previous relations.
The parallelism in this algorithm is provided by the pipeline. To each level
of the pipeline corresponds a join operation with two operands. The first
one is R1 (the probe relation) for the first operator and Rtmp[i − 1] for the i-th
operator. The second one is the build relation Ri for the i-th operator. We
perform a join operation between each tuple of the first operand and all the
tuples of the second operand. The tuples obtained from the i-th operator are
saved in the temporary relation Rtmp[i + 1] of the following operator.</p>
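The pipelined hash-join idea can be sketched with Python generators standing in for the pipeline stages (an illustrative model, not the paper's parallel implementation): each stage holds one build relation hashed on its intersection variables and streams joined tuples to the next stage one at a time, so no intermediate relation is materialized.

```python
from collections import defaultdict

def stage(upstream, build, shared):
    """One pipeline stage: build relation hashed on `shared`, probed by upstream tuples."""
    table = defaultdict(list)
    for t in build:
        table[tuple(t[v] for v in shared)].append(t)   # build phase, done once
    for t in upstream:                 # consume tuples as they arrive upstream
        for u in table[tuple(t[v] for v in shared)]:
            yield {**t, **u}           # emit to the next stage immediately

# Hypothetical three-relation chain: probe R1, then builds hashed on {a} and {b}.
probe = [{"a": 1}, {"a": 2}]
pipeline = stage(stage(iter(probe), [{"a": 1, "b": 5}], ["a"]),
                 [{"b": 5, "c": 7}], ["b"])
print(list(pipeline))
```

Because each stage yields tuples lazily, only one in-flight tuple per stage exists at a time, which models the memory saving the pipeline provides.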
      </sec>
      <sec id="sec-4-2">
        <title>The P-Solving algorithm</title>
        <p>The P-Solving algorithm is the parallel version of the A-HBR algorithm. It
consists of two operations, P-CONTRACT (Parallel CONTRACT) and
P-S-SEARCH (Parallel SEARCH). The first operation is the
parallelization of the CONTRACT operation, while the second
corresponds to the parallel version of the S SEARCH operation.</p>
        <sec id="sec-4-2-1">
          <title>Algorithm 5: P-Solving algorithm</title>
          <p>1: Input: a join tree T.
2: Output: the solution of the problem.
3: T′ = P-CONTRACT(T).
4: P-S-SEARCH(T′).</p>
        </sec>
        <sec id="sec-4-2-2">
          <title>The P-CONTRACT algorithm</title>
          <p>
            The P-CONTRACT operation enforces directional arc consistency on
the input hypertree. It is based on two operations, P-RAKE (Parallel
RAKE) and P-COMPRESS (Parallel COMPRESS), where RAKE
and COMPRESS are the basic operations of parallel tree contraction [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ].
We explain these two operations.
          </p>
          <p>P-RAKE operation: this operation is based on the observation that the
sequential execution is caused by the existence of a single shared resource: the
parent node relation. For this resource to be manipulated simultaneously
by all processors, it must be shared among them. The technique therefore
consists in partitioning the parent relation among all processors, which
compute the semi-join of this relation with the relations of its child nodes.
These partitions are exchanged periodically between processors.</p>
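The partitioning idea behind P-RAKE can be sketched as follows, with thread workers standing in for the paper's processors (an illustrative model; relation contents and helper names are invented): the parent relation is split into slices, and each worker filters its own slice by semi-join against the child hash tables.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def build_hash(R, shared):
    """Hash a child relation on its intersection variables with the parent."""
    table = defaultdict(list)
    for t in R:
        table[tuple(t[v] for v in shared)].append(t)
    return table

def rake(parent, children, n_workers=2):
    """children: list of (hash_table, shared_vars) pairs, one per leaf child."""
    def filter_slice(chunk):
        # Keep a parent tuple only if every child has a matching partition.
        return [t for t in chunk
                if all(tab[tuple(t[v] for v in sh)] for tab, sh in children)]
    chunks = [parent[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(n_workers) as ex:
        return [t for part in ex.map(filter_slice, chunks) for t in part]

parent = [{"a": 1, "b": 1}, {"a": 2, "b": 2}, {"a": 3, "b": 1}]
child = build_hash([{"b": 1, "c": 4}], ["b"])
print(rake(parent, [(child, ["b"])]))
```

Each slice is filtered independently, so the semi-join of one parent with its leaf children no longer serializes on the parent relation.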
          <p>P-COMPRESS operation: this is a pipelined execution of the COMPRESS
operation. It is applied to a sequence of nodes, or chain. Let {n1, n2, ..., nj}
be a chain of a tree T, where n(i+1) is the only child of ni; the application of
P-COMPRESS to this chain results in a new contracted tree T′ in which
the node n1 is transformed as follows:
• χ_{T′}(n1) = χ_T(n1)
• Hrel_{T′}(n1) = Π_{χ(n1)}(Hrel_T(n2) ⋈ Hrel_T(n3) ⋈ ... ⋈ Hrel_T(nj)).</p>
          <p>It builds a pipeline of join operations between the chain's nodes; each
time it takes a tuple from the probe relation, it joins it with all the
tuples of the corresponding partition in the hash table of the build relation,
then passes the result to the probe relation of the next operator. After the
execution of the last operator, it projects the resulting tuples onto the variable set
χ of each node of the chain, hashes them on the intersection variables of each
node with its parent node, and puts each resulting tuple in the corresponding
node relation.</p>
          <p>The P-S-SEARCH algorithm is the parallel version of the S SEARCH
algorithm. It works in the same way, except that, each time a tuple is taken
from a parent node, the search for corresponding tuples in the child nodes is
done in parallel.</p>
          <p>For a binary hypertree, the complexity of the P-SBR algorithm is
O(r · P^{h−1})    (3)
using O(h) operations on an EREW PRAM, where r is the maximum
relation size, h is the hypertree width, and P is the maximum number of tuples
in a hash partition on the variables of # (# represents the intersection
variables of Ci with all variables in the Cj with j &lt; i). For the P-Solving
algorithm, the complexity is O(log n) using O(n) operations, where n is the
hypertree size (number of nodes).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Experiments with the parallel algorithm</title>
      <p>We have developed a simulation model for the P-SBR and P-Solving
algorithms based on logical-time assumptions.</p>
      <sec id="sec-5-1">
        <title>Simulation model</title>
        <p>Our system is composed of simple modules described as classical sequential
programs, which are collected and executed in parallel by the
simulator. A module is the set of operations performed by a processor, which we
call a process. For the internal working of a process, we distinguish three main
types of operations: reading a tuple, searching in hash tables
(including the tuple join operation) and writing a result tuple. These
operations are executed sequentially with respect to each other and in parallel with
those of the other processes. Each process exploits the maximum potential
parallelism if its environment (its input variables) does not conflict with the
execution of the other processes. For a pipelined execution, if the processes are
not adjacent, there is no conflict. Otherwise, we use a semaphore mechanism
to synchronize them.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Simulation process</title>
        <p>Each operation or event is performed in an interval of time, so the
simulator must keep a logical time counter. The question is:
when do we increment this counter, and what is the relation between this
counter and the event execution times (physical counter)? We perform a
step-by-step simulation so that, at each step (logical time unit), all operations
that can be performed at the same time start as follows:
1. The simulator executes all processes and increments the logical time
counter at each iteration.
2. For all processes that are not in conflict, the simulator increments the
physical time counter.
3. After a logical time step, the simulator updates all the shared variables.
4. It restarts a new step considering the new variable states.</p>
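The step-by-step logical-clock scheme can be sketched very simply (an illustrative model only: here each process is reduced to a count of pending operations, and we assume no conflicts, so every process advances by one operation per logical step):

```python
def simulate(processes):
    """processes: list of operation counts per process.
    Returns the logical makespan, assuming fully parallel, conflict-free steps."""
    logical_time = 0
    remaining = list(processes)
    while any(r > 0 for r in remaining):
        # One logical step: every process with pending work performs one operation.
        remaining = [r - 1 if r > 0 else 0 for r in remaining]
        logical_time += 1
    return logical_time

# Three processes with 5, 3 and 4 operations finish in max(5, 3, 4) logical steps.
print(simulate([5, 3, 4]))
```

A full simulator would additionally withhold the physical-counter increment for conflicting processes and re-synchronize shared variables between steps, as in points 2 and 3 above.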
      </sec>
      <sec id="sec-5-3">
        <title>Experimental protocol</title>
        <p>
          We developed a simulation of the P-HTR algorithm in order to check, first,
whether it gives good temporal and spatial complexity and, second, whether the
new parallel tree contraction approach proposed in this paper can be considered
an interesting alternative to the one considered in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. In order to compare
our results, we also simulate the PTAC algorithm. In this section, the
experiments are divided into three categories:
1. Simulation of the P-SBR algorithm, in order to estimate its interest in
terms of memory space, since the sub-problem resolution is in general the
source of bad spatial complexity.
2. Simulation of P-RAKE, in order to evaluate its performance.
3. Simulation of the P-Solving algorithm, in order to estimate its temporal
optimization and to measure the contribution of the new parallel tree
contraction technique P-RAKE through a comparison with the one
used in the PTAC algorithm.
        </p>
        <p>The simulation was carried out on a PC under Linux 6.0.52
with an Intel Pentium IV 2.4 GHz CPU and 512 Mb of RAM. We use a
set of CSP benchmarks from the literature, and take the average
of the data obtained over several executions. In our results, |V|, |E| and
|r| represent respectively the number of variables, the number of constraints and
the maximum relation cardinality. Nd, Nf and HTW are respectively the
number of nodes, the number of leaf nodes and the hypertree width. Np
represents the number of processors and NbS the number of pipeline
stages.</p>
        <p>In order to be hardware independent, the measurement units are the Tuple
for space and the Operation (reading, searching or writing) for time.</p>
      </sec>
      <sec id="sec-5-4">
        <title>P-SBR algorithm simulation</title>
        <p>Table 2 presents the memory space required for pipelined (R P-SBR)
and non-pipelined (R join) execution of the join operations for different CSP
instances (consistent and inconsistent).</p>
        <p>N_{NW&gt;2} denotes the number of nodes in the hypertree with width
greater than 2. We observed that the gain in memory space is greatest for
instances with a large number of pipelined executions (N_{NW&gt;2})
(haystacks-07). We also observed that the gain depends on the
number of nodes in the hypertree (mug-100-25-4), and that it is proportional to
the number of tuples per relation. Since the two factors that influence the
complexity of CSP problems are the size of the constraint relations and the
width of the structural decomposition, in order to estimate the contribution of
P-SBR we evaluate in Figure 1, parts (a) and (b), the behavior of the P-SBR
algorithm with respect to, respectively, the size of the constraint relations and
the number of pipeline stages, for two classes of CSP problems: aim-100-6 and
full-Insertion.</p>
        <p>N.B.: the P-SBR algorithm forms a pipeline in each node of the hypertree,
so the number of stages in the pipeline is the number of constraints in the
node minus one.</p>
        <p>[Figure 1: spatial gain of P-SBR as a function of (a) the relation size and
(b) the number of pipeline stages.]</p>
      </sec>
      <sec id="sec-5-5">
        <title>P-RAKE algorithm simulation</title>
        <p>P-RAKE tries to optimize the CPU time of the whole problem. We present
here a simulation of this operation by computing its efficiency
Eff = Ts / (NP · Tp), where NP is the number of processors and Ts (resp. Tp)
is the sequential (resp. parallel) resolution time of the sub-hypertree.</p>
        <p>[Figure 2: P-RAKE speedup versus the number of processors, compared
with the theoretical (linear) speedup.]</p>
        <p>The speedup of the P-RAKE algorithm is presented in Figure 2. It is
close to the theoretical (linear) speedup.</p>
      </sec>
      <sec id="sec-5-6">
        <title>P-Solving algorithm simulation</title>
        <p>Before comparing P-Solving with A-HBR, it seems natural to compare the
P-CONTRACT algorithm with an algorithm applying standard tree contraction.</p>
        <p>[Instance name column of the results tables: hayst06, dom200, hanoi-6,
mug25-4, myc-3-4, myc-4-4, Renault, series-6, hole-6.]</p>
        <p>Nc represents the number of contraction operations performed and T the
computational time; we take 1000 operations as the measurement unit.
Table 4 gives the CPU time of both the A-HBR algorithm and the PTAC
algorithm (CONTRACT operation), together with that of P-Solving
(P-CONTRACT operation). It shows that P-CONTRACT yields a gain in
temporal complexity inversely proportional to the number of contractions:
the more leaf nodes are contracted at once, the fewer contraction steps are
needed and the larger the time gain (myc-3-4, Renault and hole-6), and vice
versa, as for hayst06 and mug25-4. We can also note that, for the problem
hole-6, the CPU time gain (29 instead of 1201) is considerable. This is due to
the fact that the hypertree is contracted in a single step, instead of taking
81 steps.</p>
        <p>In this paper, we have presented sequential and parallel algorithms for
solving CSPs by exploiting their structural properties. Both algorithms exploit
the hashing technique. For the sequential algorithm, we have carried out
experiments on benchmarks from the literature, and the results obtained
are promising. Good results are observed when the number of tuples per
relation is large. The parallel algorithm is based on the notions of pipelining
and parallel tree contraction, a parallel version of the well-known
tree contraction. We performed simulations on some benchmarks and the
results are very good. Future work includes the execution of our parallel
algorithm on a parallel machine.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Gyssens</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cohen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jeavons</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>A unified theory of structural tractability for constraint satisfaction problems</article-title>
          .
          <source>J. Comp. and Sys. Sci</source>
          .
          <volume>74</volume>
          ,
          <fpage>721</fpage>
          -
          <lpage>743</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Dechter</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <source>Constraint Processing</source>
          . Morgan Kaufmann, (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Dechter</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Constraint Networks</article-title>
          .
          <source>Encyclopedia of Artificial Intelligence</source>
          , pages
          <fpage>276</fpage>
          -
          <lpage>285</lpage>
          , (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Dechter</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pearl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Tree clustering for constraint networks</article-title>
          .
          <source>Art. Int</source>
          .
          <volume>38</volume>
          ,
          <fpage>353</fpage>
          -
          <lpage>366</lpage>
          , (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>A comparison of structural csp decomposition methods</article-title>
          .
          <source>Art. Int</source>
          .
          <volume>124</volume>
          ,
          <fpage>243</fpage>
          -
          <lpage>282</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Gyssens</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jeavons</surname>
            ,
            <given-names>P.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cohen</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          :
          <article-title>Decomposing constraint satisfaction problems using database techniques</article-title>
          .
          <source>Art. Int</source>
          .
          <volume>66</volume>
          ,
          <fpage>57</fpage>
          -
          <lpage>89</lpage>
          (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Montanari</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Networks of constraints: Fundamental properties and applications to picture processing</article-title>
          .
          <source>Inf. Sci</source>
          .
          <volume>7</volume>
          ,
          <fpage>95</fpage>
          -
          <lpage>132</lpage>
          (
          <year>1974</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <article-title>XML Representation of Constraint Networks, Format XCSP 2.1</article-title>
          ,
          <uri>http://www.cril.univ-artois.fr/CPAI08</uri>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Karp</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Randomized Parallel Algorithms for Backtrack Search and Branch-and-Bound Computation</article-title>
          ,
          <source>Journal of the Association for Computing Machinery</source>
          ,
          <volume>40</volume>
          ,
          <issue>3</issue>
          ,
          <fpage>765</fpage>
          -
          <lpage>789</lpage>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Hypertree Decompositions: A survey</article-title>
          ,
          <source>Proceedings of MFCS '01</source>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>