Polynomial Constraint Solving over Finite Domains with the Modified Bernstein Form

Federico Bergenti, Stefania Monica, and Gianfranco Rossi

Dipartimento di Matematica e Informatica, Università degli Studi di Parma, Parco Area delle Scienze 53/A, 43124 Parma, Italy
{federico.bergenti,stefania.monica,gianfranco.rossi}@unipr.it

Abstract. This paper describes an algorithm that can be used to effectively solve polynomial constraints over finite domains. Such constraints are expressed in terms of inequalities of polynomials with integer coefficients whose variables are assumed to be defined over proper finite domains. The proposed algorithm first reduces each constraint to a canonical form, i.e., a specific form of inequality, and then it uses the modified Bernstein form of the resulting polynomials to incrementally restrict the domains of variables. No approximation is involved in the solving process because the coefficients of the modified Bernstein form of the considered type of polynomials are always integer numbers.

1 Introduction and Motivation

In this paper we study constraints written in terms of polynomials whose coefficients are integer numbers and whose variables are defined over proper finite subsets of the integer numbers. Such constraints, which we call polynomial constraints over finite domains, are ubiquitous in many areas of system analysis and verification (as discussed, e.g., in [1]), and they deserve specific treatment because common finite-domain techniques cannot adequately process them, even in simple cases. The language of CLP(FD) is suitable to express polynomial constraints over finite domains, but the incompleteness of FD solvers means that such constraints are sometimes not treated adequately. For example, if constraints are expressed as non-linear polynomials, the FD solver that ships with SWI-Prolog (version 6.6.6) may fail to completely reduce the domains of variables [6]. Consider the constraint

X in -10..10, X^2 #>= 9

which involves only one variable over a finite interval, namely $I = [-10..10]$. It is easy to observe that such a constraint is satisfied only for values in the interval $I_R = [-10..-3] \cup [3..10]$. However, the FD solver exhibits its well-known incompleteness and rewrites the constraint as

X in -10..-1\/1..10, Y in 9..100, X^2 #= Y

which introduces an unneeded variable and still counts $\{-2, -1, 1, 2\}$ as admissible values. Concerning non-linear constraints that involve more than one variable, consider

X in -10..10, Y in -10..10, X*Y #>= 21

where both variables range over the finite domain $I$. Again, such a constraint is satisfied only for values in $I_R$ for both variables, but the FD solver introduces an unneeded variable and it also leaves inadmissible values in the domains of the variables. As a matter of fact, the FD solver rewrites the constraint as

X in -10..-1\/1..10, Y in -10..-1\/1..10, Z in 21..100, X*Y #= Z.

In this paper we describe an algorithm to solve polynomial constraints over finite domains that overcomes the well-known incompleteness of FD solvers. The proposed algorithm ensures that the domains of variables after the solution process contain only admissible values by properly exploiting the specific form of the allowed constraints, namely polynomial inequalities. Polynomials with real coefficients whose variables take values in known intervals can always be rewritten in the so-called Bernstein Form (BF) (see, e.g., [4] or [5]) to effectively compute a lower bound and an upper bound of their range.
Such bounds can be used to effectively verify polynomial inequalities, and they have been used extensively for optimization problems. Unfortunately, even if a polynomial has only integer coefficients, there is no guarantee that its equivalent in BF has integer coefficients. Therefore, the BF has traditionally been used only with polynomials with real or rational variables. In this paper we show how to use a variant of the BF, which we call Modified Bernstein Form (MBF), to process polynomial constraints over finite domains with no involvement of rational or real numbers. Note that the MBF is also known in the literature as the scaled BF [2], but we adopt the adjective modified because the introduced multiplicative factor is not the same for all coefficients of the BF, so the resulting patch is not a geometric scaling. As a matter of fact, if a polynomial has integer coefficients, its equivalent in MBF always has integer coefficients. This property allows effective reasoning on polynomial inequalities with integer coefficients. In detail, even if the MBF does not provide accurate bounds for the range of polynomials, it provides information on the sign of suitable bounds, which is sufficient to reason on polynomial constraints over finite domains. The algorithm that we propose uses only such signs, and the actual values of the bounds are never used. The current implementation of the proposed algorithm was used to validate all examples presented in this paper and it is available for download (cmt.dmi.unipr.it/software/clppolyfd.zip).

The paper is organized as follows. Section 2 presents the proposed approach and the underlying constraint language. Section 3 introduces the MBF and proves notable properties that allow using the coefficients of the MBF to compute the signs of suitable lower and upper bounds of polynomials, as needed by the proposed algorithm. Section 4 illustrates in detail the behaviour of the proposed algorithm by means of specific examples. Finally, Section 5 concludes the paper and presents planned developments.

2 The Proposed Approach

The interesting characteristics of the constraint solving approach that we propose derive from the specific form of the constraints that we address. Polynomial constraints over finite domains are expressed in terms of inequalities of polynomials with integer coefficients whose variables are defined over finite subsets of the integer numbers. The corresponding constraint language is described as follows.

2.1 The Constraint Language

The syntax of the language of polynomial constraints that we consider is defined, as usual, by its signature $\Sigma$, a triple $\Sigma = \langle V, F, \Pi \rangle$, where $V$ is a denumerable set of variable symbols, $F$ is the set of constant and function symbols, and $\Pi$ is the finite set of constraint predicate symbols allowed in the language. The set $F$ of constant and function symbols is $F = O \cup Z$, where $O = \{+, *\}$ is the set of function symbols representing binary operations over integer numbers, and $Z = \{0, 1, -1, 2, -2, \ldots\}$ is the denumerable set of constants representing integer numbers. The set $\Pi$ of constraint predicate symbols is $\Pi = \{=, \neq, <, \leq, >, \geq\}$, and it contains only binary predicate symbols. A primitive constraint is any atomic predicate built using the symbols of signature $\Sigma$. A (non-primitive) constraint is a conjunction of primitive constraints.
The semantics of the language defined over signature $\Sigma$ is trivial and it allows expressing systems of polynomial equalities, inequalities and disequalities with integer coefficients.

Example 1 The following are primitive constraints expressed using signature $\Sigma$ with $V = \{x, y, z\}$:
\[ x * x \geq 9 \qquad 3 * x * x + 9 \leq 5 * y * y + z \qquad x \neq y + z. \]

Note that other common function symbols and predicate symbols are supported in terms of syntactic abbreviations, as follows
\[ v \ \mathtt{in}\ a..b \;\to\; (v + (-1) * a) * (v + (-1) * b) \leq 0 \tag{1} \]
\[ v \ \mathtt{nin}\ a..b \;\to\; (v + (-1) * a) * (v + (-1) * b) > 0 \tag{2} \]
\[ -t \;\to\; (-1) * t \tag{3} \]
\[ t_1 - t_2 \;\to\; t_1 + (-1) * t_2 \tag{4} \]
\[ t^0 \;\to\; 1 \tag{5} \]
\[ t^n \;\to\; \underbrace{t * t * \cdots * t}_{n} \qquad n > 0 \tag{6} \]
where $v$ is a variable symbol, $a$ and $b$ are integer constant symbols such that $a \leq b$, and $t$, $t_1$, and $t_2$ are terms. Note that the extended language introduced by the aforementioned syntactic abbreviations does not increase the set of expressible constraints, and that we allow operator precedence and parenthesized expressions in this language. The following example is meant to clarify why $v\ \mathtt{in}\ a..b$ and $v\ \mathtt{nin}\ a..b$ (read $v$ not in $a..b$) are rewritten using the mentioned syntactic abbreviations.

Example 2 Syntactic rules rewrite the constraint $x\ \mathtt{in}\ 1..3$ as $(x + (-1) * 1) * (x + (-1) * 3) \leq 0$. As a matter of fact, $p(x) = (x - 1)(x - 3)$ is an upward parabola whose vertex is at $x = 2$ (where $p(2) = -1$) and whose intersections with the $x$-axis are $x = 1$ and $x = 3$ (so that $p(1) = p(3) = 0$). For all values of $x$ not in $[1..3]$ the parabola is strictly positive, and it can therefore be concluded that the rewritten constraint $p(x) \leq 0$ is equivalent to $x \in [1..3]$. Analogously, syntactic rules rewrite the constraint $x\ \mathtt{nin}\ 1..3$ as $(x + (-1) * 1) * (x + (-1) * 3) > 0$, and, since the same parabola $p(x) = (x - 1)(x - 3)$ is strictly positive exactly for the values of $x$ not in $[1..3]$, the rewritten constraint $p(x) > 0$ is equivalent to $x \notin [1..3]$.

2.2 Constraints in Canonical Form

If a total order on variable symbols is fixed, any primitive constraint can be rewritten to a canonical form. The chosen canonical form expresses any primitive constraint as $t \geq 0$, where $t$ is a term in canonical form. We say that a term $t$ is in canonical form if it is a summation term (i.e., $t = t_1 + \ldots + t_h$) involving only product terms with the following structure
\[ t_i = k * \underbrace{x_1 * \cdots * x_1}_{n_1} * \underbrace{x_2 * \cdots * x_2}_{n_2} * \cdots * \underbrace{x_m * \cdots * x_m}_{n_m} \qquad i \in [1..h] \]
where $k$ is a constant symbol and the $\{x_i\}_{i=1}^{m}$ are distinct variable symbols arranged using the fixed total order. The fixed total order induces a lexicographic order of product terms that is used to arrange the $\{t_i\}_{i=1}^{h}$ in $t$. We can define a function $\deg(v, t)$ on product terms by returning $n_v$ if variable symbol $v$ is repeated $n_v$ times in $t$. Function $\deg$ allows writing product terms in the compact form
\[ t_i = k * x_1^{\deg(x_1, t_i)} * x_2^{\deg(x_2, t_i)} * \cdots * x_m^{\deg(x_m, t_i)} \qquad i \in [1..h]. \]
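For illustration purposes only, a canonical-form term can be represented as a mapping from exponent tuples, one component per variable in the fixed total order, to integer coefficients. The following Python sketch uses this representation to evaluate a term at an integer point; the dictionary encoding and the helper name are ours and are not taken from the implementation mentioned in Section 1. Later sketches in this paper reuse the same representation.

```python
# A canonical-form term is a sum of product terms k * x1^d1 * ... * xm^dm.
# We represent it as a dictionary mapping exponent tuples (d1, ..., dm) to
# integer coefficients k, with one tuple component per ordered variable.

def poly_eval(p, point):
    """Evaluate a canonical-form term at a tuple of integer values."""
    total = 0
    for exps, k in p.items():
        term = k
        for x, d in zip(point, exps):
            term *= x ** d
        total += term
    return total

# 3*x*x + 9 - (5*y*y + z) over the ordered variables (x, y, z):
p = {(2, 0, 0): 3, (0, 2, 0): -5, (0, 0, 1): -1, (0, 0, 0): 9}
print(poly_eval(p, (1, 1, 1)))   # 3 - 5 - 1 + 9 = 6
```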
Besides ordinary algebraic manipulations and the total order of variable symbols, the conversion to the canonical form uses the following ideas. First, observe that any constraint of the form $f(x) \odot g(x)$, where $f(x)$ and $g(x)$ are polynomials with integer coefficients, $x$ is the vector of the variables of the polynomials, and $\odot \in \Pi$, can be expressed in the form $p(x) \odot 0$, where $p(x) = f(x) - g(x)$ is still a polynomial with integer coefficients. Second, in order to convert constraints to canonical form we use the following lemma, which holds for every polynomial $p(x)$ with integer coefficients because $p(x)$ takes only integer values when its variables range over the integers.

Lemma 1 The following co-implications hold for every polynomial $p(x)$ with integer coefficients and variables $x$
\[ p(x) \leq 0 \iff -p(x) \geq 0 \tag{7} \]
\[ p(x) > 0 \iff p(x) \geq 1 \iff p(x) - 1 \geq 0 \tag{8} \]
\[ p(x) < 0 \iff p(x) \leq -1 \iff -p(x) - 1 \geq 0 \tag{9} \]
\[ p(x) = 0 \iff p^2(x) \leq 0 \iff -p^2(x) \geq 0 \tag{10} \]
\[ p(x) \neq 0 \iff p^2(x) > 0 \iff p^2(x) - 1 \geq 0 \tag{11} \]

The following illustrative examples show how generic constraints can be written in canonical form.

Example 3 Consider the following primitive constraints of the form $(x_1 + x_2) \odot 0$ where $\odot \in \Pi$.
1. The constraint $x_1 + x_2 \leq 0$ can be written as $-x_1 - x_2 \geq 0$ by co-implication (7) of Lemma 1, which is in canonical form.
2. The constraint $x_1 + x_2 > 0$ can be written as $x_1 + x_2 - 1 \geq 0$ by co-implication (8) of Lemma 1, which is in canonical form.
3. The constraint $x_1 + x_2 < 0$ can be written as $x_1 + x_2 \leq -1$ by co-implication (9) of Lemma 1, which corresponds to $-x_1 - x_2 - 1 \geq 0$ in canonical form.
4. The constraint $x_1 + x_2 = 0$ can be written as $(x_1 + x_2)^2 \leq 0$ by co-implication (10) of Lemma 1, which is $-x_1^2 - 2 x_1 x_2 - x_2^2 \geq 0$ in canonical form.
5. The constraint $x_1 + x_2 \neq 0$ can be written as $(x_1 + x_2)^2 > 0$ by co-implication (11) of Lemma 1, which is $x_1^2 + 2 x_1 x_2 + x_2^2 - 1 \geq 0$ in canonical form.
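The rewriting of Lemma 1 is mechanical, and a minimal sketch of it is given below in Python, using the dictionary representation introduced above. The helper names and the operator encoding are illustrative assumptions of ours, not part of the implementation mentioned in Section 1.

```python
def poly_neg(p):
    return {m: -k for m, k in p.items()}

def poly_add_const(p, c):
    q = dict(p)
    zero = (0,) * len(next(iter(p)))      # exponent tuple of the constant term
    q[zero] = q.get(zero, 0) + c
    return q

def poly_mul(p, q):
    r = {}
    for m1, k1 in p.items():
        for m2, k2 in q.items():
            m = tuple(d1 + d2 for d1, d2 in zip(m1, m2))
            r[m] = r.get(m, 0) + k1 * k2
    return r

def canonical(p, op):
    """Rewrite the primitive constraint `p(x) op 0` as `q(x) >= 0` (Lemma 1)."""
    if op == '>=': return p
    if op == '<=': return poly_neg(p)                          # (7)
    if op == '>':  return poly_add_const(p, -1)                # (8)
    if op == '<':  return poly_add_const(poly_neg(p), -1)      # (9)
    if op == '=':  return poly_neg(poly_mul(p, p))             # (10)
    if op == '!=': return poly_add_const(poly_mul(p, p), -1)   # (11)
    raise ValueError(op)

# x1 + x2 != 0 becomes x1^2 + 2*x1*x2 + x2^2 - 1 >= 0 (item 5 of Example 3)
print(canonical({(1, 0): 1, (0, 1): 1}, '!='))
```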
2.3 The Constraint Solving Algorithm

The canonical form of primitive constraints has the interesting property of reducing constraint solving to the study of the sign of polynomials with integer coefficients whose variables are defined over finite subsets of the integer numbers. In the rest of this paper we assume that constraints are always written in canonical form, since the reformulation of constraints to their respective canonical forms can be considered a simple preprocessing step. As discussed in Section 1, we assume that the user provides initial bounds for all variables, i.e., we can assume that before the solving process begins
\[ x_i \in [\underline{x}_i .. \overline{x}_i] \qquad \underline{x}_i \in \mathbb{Z} \quad \overline{x}_i \in \mathbb{Z} \qquad i \in [1..m] \tag{12} \]
where $\{x_i\}_{i=1}^{m}$ are all the variables involved in the constraints, $\{\underline{x}_i\}_{i=1}^{m}$ are their lower bounds, and $\{\overline{x}_i\}_{i=1}^{m}$ are their upper bounds. The lower bounds and the upper bounds identify a box in $\mathbb{Z}^m$ that can be used as an initial approximation of the domain of the variables. The constraint solving algorithm refines the initial box and possibly breaks it into disjoint boxes until the boxes are sufficiently refined and the consistency of the constraints can be certified in each of them. In detail, the algorithm considers all primitive constraints and it verifies whether each constraint is consistent in the current box. If no consistency can be certified, a variable is selected and the current box is split into two disjoint boxes by splitting the current domain of the selected variable. The obtained disjoint boxes are processed recursively. The algorithm always terminates because the consistency of constraints can always be verified in a box formed by intervals that contain only one element.

The performance of the algorithm depends significantly on the details of the subdivision, i.e., on the techniques adopted to select a variable and to split a box into two disjoint boxes, and a large number of alternatives have been studied in the literature (see, e.g., [3]). In the rest of this paper, we assume that variables are selected using a fixed lexicographic order and that, once a variable $x_i$ is selected, its domain $[\underline{x}_i .. \overline{x}_i]$ is split at
\[ s_i = \left\lfloor \frac{\underline{x}_i + \overline{x}_i}{2} \right\rfloor \tag{13} \]
into the two disjoint intervals $[\underline{x}_i .. s_i]$ and $[(s_i + 1) .. \overline{x}_i]$. For each primitive constraint $p(x) \geq 0$, the algorithm computes a suitable lower bound $l$ and upper bound $u$, with $l \leq u$, for the polynomial $p(x)$ in the current box, and it uses such values to verify whether $p(x) \geq 0$ holds in the current box. The following procedure summarizes the core of the constraint solving algorithm:
1. If $u < 0$ the constraint is not consistent in the current box;
2. If $l \geq 0$ the constraint is consistent in the current box;
3. Otherwise, the current box is split and the resulting boxes are analysed recursively.

The following example shows how the proposed algorithm works for a simple constraint, under the assumption that suitable lower bounds and upper bounds can be computed.

Example 4 Let us consider the constraint $x - 3 \geq 0$ involving only the variable $x$, which is assumed to be defined in $I = [0..5]$. Using the notation introduced above, suitable values of the lower and upper bounds are $l = -3$ and $u = 2$, respectively, so that neither of the first two conclusive cases of the algorithm applies. The interval $I$ is then split into two subintervals, namely $I_L = [0..2]$ and $I_R = [3..5]$. First, consider $I_L$, with suitable values of the lower and upper bounds $l = -3$ and $u = -1$, respectively, so that case (1) holds, i.e., the constraint is not consistent in the box $I_L$. Then, consider $I_R$, with suitable lower and upper bounds $l = 0$ and $u = 2$, respectively, so that case (2) holds, i.e., the constraint is consistent in $I_R$. The domain of the variable $x$ can then be set equal to $I_R$, and Figure 1 shows how the initial approximation of the domain is processed.

Fig. 1. Description of the refinement of the domain of variable x in Example 4.

It is worth noting that we are only interested in the signs of $l$ and $u$, and not in their actual values. Any algorithm that allows the effective computation of the signs of suitable $l$ and $u$ can be used to support the proposed algorithm. In the following section we present a technique to effectively compute such signs using the modified Bernstein form of polynomials.
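Before turning to the MBF, the core of the procedure above can be summarized in a short sketch. The following Python fragment is ours and does not reflect the Java implementation mentioned in Section 1; it is parametric in a `bounds` oracle that returns any pair $(l, u)$ with $l \leq p(x) \leq u$ on a box, of which only the signs are used, for instance the minimum and the maximum of the modified Bernstein coefficients introduced in Section 3.

```python
def solve(constraints, box, bounds):
    """Recursive box refinement for constraints of the form p(x) >= 0.

    `constraints` is a list of canonical-form polynomials, `box` is a list of
    (lo, hi) integer pairs, and `bounds(p, box)` returns some (l, u) with
    l <= p(x) <= u on the box; only the signs of l and u are inspected and the
    bounds are assumed exact on singleton boxes, which guarantees termination.
    Returns the list of sub-boxes on which every constraint is consistent.
    """
    certified = True
    for p in constraints:
        l, u = bounds(p, box)
        if u < 0:
            return []                  # case 1: inconsistent, discard the box
        certified = certified and l >= 0
    if certified:
        return [box]                   # case 2: consistent, keep the box
    # case 3: split the first non-singleton domain (fixed lexicographic order)
    i = next(j for j, (lo, hi) in enumerate(box) if lo < hi)
    lo, hi = box[i]
    s = (lo + hi) // 2                 # split point of equation (13)
    left = box[:i] + [(lo, s)] + box[i + 1:]
    right = box[:i] + [(s + 1, hi)] + box[i + 1:]
    return solve(constraints, left, bounds) + solve(constraints, right, bounds)
```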
3 The Modified Bernstein Form of Polynomials

In this section we introduce the Modified Bernstein Form (MBF) of polynomials and we explain how to use it to solve polynomial constraints over finite domains. As an illustrative special case, we start by considering univariate polynomials. Problems involving only polynomials in one variable are not relevant for typical applications; however, they allow simplifying the notation and, therefore, they allow the proposed approach to be explained in a simpler way.

3.1 Univariate Polynomial Constraints

We consider a single constraint expressed in canonical form as
\[ p(x) \geq 0 \tag{14} \]
where
\[ p(x) = \sum_{i=0}^{n} a_i x^i \tag{15} \]
is a generic polynomial of degree $n$, $x$ is the (unique) variable, and $\{a_i\}_{i=0}^{n}$ are the coefficients relative to the canonical basis $B$ given by the following set of monomials
\[ B = \{1, x, x^2, \ldots, x^n\} \tag{16} \]
and we assume that $x$ is defined in a given interval $I = [\underline{x} .. \overline{x}]$. In order to write the explicit expression of the polynomial $p(x)$ in the Bernstein basis, we first need to consider a new variable $t$ defined in $[0, 1]$, which is related to $x$ according to the following affine transformation
\[ x = \underline{x} + (\overline{x} - \underline{x})\, t. \tag{17} \]
By substituting (17) in (15), one obtains
\[ \sum_{i=0}^{n} a_i (\underline{x} + (\overline{x} - \underline{x}) t)^i = \sum_{i=0}^{n} a_i \sum_{k=0}^{i} \binom{i}{k} \underline{x}^{i-k} (\overline{x} - \underline{x})^k t^k = \sum_{i=0}^{n} c_i t^i \tag{18} \]
where the coefficients $\{c_i\}_{i=0}^{n}$ are defined as
\[ c_i = \sum_{k=i}^{n} \binom{k}{i} a_k \underline{x}^{k-i} (\overline{x} - \underline{x})^i \qquad i \in [0..n]. \tag{19} \]
Observe that the $\{c_i\}_{i=0}^{n}$ are related to the coefficients $\{a_i\}_{i=0}^{n}$ relative to the canonical basis (16) and also to the extremes $\underline{x}$ and $\overline{x}$ of the interval $I$ where $p(x)$ is defined. Moreover, since the coefficients $\{a_i\}_{i=0}^{n}$ are integer numbers, it can be concluded that the $\{c_i\}_{i=0}^{n}$ are integer numbers as well. The coefficients $\{c_i\}_{i=0}^{n}$ defined in (19) are needed to derive the MBF of $p(x)$.

The Bernstein basis is given by the set of $n + 1$ polynomials $\{B_i^n(x)\}_{i=0}^{n}$, the $i$-th of which is
\[ B_i^n(x) = \binom{n}{i} \frac{(x - \underline{x})^i (\overline{x} - x)^{n-i}}{(\overline{x} - \underline{x})^n} \qquad i \in [0..n]. \tag{20} \]
Using this Bernstein basis, the polynomial defined in (15) can be written as
\[ p(x) = \sum_{i=0}^{n} b_i B_i^n(x) \tag{21} \]
where the coefficients $\{b_i\}_{i=0}^{n}$ are related to the $\{c_i\}_{i=0}^{n}$, and, hence, to the $\{a_i\}_{i=0}^{n}$, according to
\[ b_i = \sum_{j=0}^{i} \frac{\binom{i}{j}}{\binom{n}{j}} c_j \qquad i \in [0..n]. \tag{22} \]
The coefficients $\{b_i\}_{i=0}^{n}$ are useful to find a lower bound and an upper bound of the polynomial $p(x)$. As a matter of fact, the following inequalities hold
\[ \min_{i \in [0..n]} \{b_i\} \;\leq\; p(x) \;\leq\; \max_{i \in [0..n]} \{b_i\} \qquad x \in I. \tag{23} \]
Observe that, even if the coefficients $\{c_i\}_{i=0}^{n}$ are integer numbers, the coefficients $\{b_i\}_{i=0}^{n}$ are not guaranteed to be integer because of the divisions by binomial coefficients. For this reason, the treatment of the $\{b_i\}_{i=0}^{n}$ requires representing rational numbers, which often involves approximations. However, it is possible to consider a modified Bernstein basis whose $i$-th element is
\[ \tilde{B}_i^n(x) = \frac{(x - \underline{x})^i (\overline{x} - x)^{n-i}}{(\overline{x} - \underline{x})^n} \qquad i \in [0..n]. \tag{24} \]
Each element $\tilde{B}_i^n(x)$ of the modified Bernstein basis is obtained from the corresponding element $B_i^n(x)$ of the Bernstein basis using the following equality
\[ B_i^n(x) = \binom{n}{i} \tilde{B}_i^n(x). \tag{25} \]
From (21), the polynomial $p(x)$ can be written in terms of the modified Bernstein basis as
\[ p(x) = \sum_{i=0}^{n} \tilde{b}_i \tilde{B}_i^n(x) \tag{26} \]
where the coefficients $\{\tilde{b}_i\}_{i=0}^{n}$ are
\[ \tilde{b}_i = \binom{n}{i} b_i \qquad i \in [0..n]. \tag{27} \]
The right-hand side of (26) is the MBF of the polynomial $p(x)$. The advantage of considering the MBF of $p(x)$ instead of its Bernstein form is that the coefficients $\{\tilde{b}_i\}_{i=0}^{n}$ are integer numbers and they can therefore be represented with no approximation, provided that integer numbers can be represented with arbitrary precision. As a matter of fact, since
\[ \frac{\binom{n}{i} \binom{i}{j}}{\binom{n}{j}} = \binom{n-j}{i-j} \tag{28} \]
the coefficients $\{\tilde{b}_i\}_{i=0}^{n}$ can be written as
\[ \tilde{b}_i = \sum_{j=0}^{i} \binom{n-j}{i-j} c_j \qquad i \in [0..n] \tag{29} \]
from which it can be easily deduced that they are integer numbers. It is worth noting that binomial coefficients are integer numbers and they can be computed without divisions. As a matter of fact, the following recursive formula holds for binomial coefficients when $n \geq j > 0$
\[ \binom{n}{j} = \binom{n-1}{j} + \binom{n-1}{j-1}. \tag{30} \]
The recursion terminates because the binomial coefficient equals $1$ for $j = 0$. Observe that (23) holds for the coefficients $\{b_i\}_{i=0}^{n}$ and not for the $\{\tilde{b}_i\}_{i=0}^{n}$. However, as previously discussed, we are only interested in the signs of the coefficients $\{b_i\}_{i=0}^{n}$, which are the same as those of the corresponding $\{\tilde{b}_i\}_{i=0}^{n}$ since they only differ by a positive multiplicative factor. For this reason, it is possible to focus only on the integer coefficients $\{\tilde{b}_i\}_{i=0}^{n}$.
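As a concrete instance of (19) and (29), the following Python sketch computes the integer coefficients $\{\tilde{b}_i\}_{i=0}^{n}$ of a univariate polynomial; the function name is an assumption of ours, and the sketch uses Python's exact `math.comb` although an implementation could equally use the division-free recurrence (30).

```python
from math import comb

def mbf_coefficients(a, lo, hi):
    """Modified Bernstein coefficients of p(x) = sum_i a[i] * x**i on [lo..hi].

    All arithmetic is over Python's arbitrary-precision integers, so the
    result is exact whenever a, lo and hi are integers (eqs. (19) and (29)).
    """
    n = len(a) - 1
    w = hi - lo
    # c[i] as in (19): coefficients after the affine substitution x = lo + w*t
    c = [sum(comb(k, i) * a[k] * lo ** (k - i) for k in range(i, n + 1)) * w ** i
         for i in range(n + 1)]
    # b~[i] as in (29): positive integer multiples of the Bernstein coefficients
    return [sum(comb(n - j, i - j) * c[j] for j in range(i + 1))
            for i in range(n + 1)]

# p(x) = x^2 - 9 on [-10..10]:
print(mbf_coefficients([-9, 0, 1], -10, 10))   # [91, -218, 91]
```

For $p(x) = x^2 - 9$ on $[-10..10]$ the computed coefficients are $(91, -218, 91)$, whose minimum and maximum agree with the values $\tilde{l} = -218$ and $\tilde{u} = 91$ reported at the root of Figure 2.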
3.2 Multivariate Polynomial Constraints

A more sophisticated notation needs to be introduced in the multivariate case. We denote by $\{x_j\}_{j=1}^{k}$ the considered variables, where $k \geq 1$ and $j$ is an index that we associate with each variable. We define the vector $x$ of all variables
\[ x = (x_1, \ldots, x_k) \tag{31} \]
and we define a multi-index $I$ as $I = (i_1, \ldots, i_k)$, so that $x^I = x_1^{i_1} \cdot \ldots \cdot x_k^{i_k}$. A $k$-variate polynomial $p(x)$ with real coefficients can be written as
\[ p(x) = \sum_{I \leq N} a_I x^I \tag{32} \]
where $N = (n_1, \ldots, n_k)$ is a multi-index, and operations on multi-indices are defined component-wise. Observe that the inequality $I \leq N$ among multi-indices is also component-wise, namely it corresponds to
\[ i_j \leq n_j \qquad j \in [1..k]. \tag{33} \]
As in the univariate case, we start by considering a single constraint, namely
\[ p(x) \geq 0 \tag{34} \]
and we assume that each variable is defined over a finite interval. More precisely, we denote by $\underline{x} = (\underline{x}_1, \ldots, \underline{x}_k)$ and $\overline{x} = (\overline{x}_1, \ldots, \overline{x}_k)$ the extremes of the box $I = [\underline{x}, \overline{x}]$ where the polynomial $p(x)$ is defined. In order to find the explicit expression in the Bernstein basis of the polynomial $p(x)$ defined in (32), it is necessary to consider an affine change of variable between $x$ and a new variable $t$ defined over the unit box $[0, 1]^k$. Using the affine transformation
\[ x_i = \underline{x}_i + (\overline{x}_i - \underline{x}_i)\, t_i \tag{35} \]
it is possible to show that
\[ p(x) = \sum_{I \leq N} a_I x^I = \sum_{I \leq N} c_I t^I \tag{36} \]
where the coefficients $c_I$ are related to the $a_I$ according to
\[ c_I = \sum_{L=I}^{N} \binom{L}{I} a_L \underline{x}^{L-I} (\overline{x} - \underline{x})^I \tag{37} \]
and
\[ \binom{L}{I} = \binom{l_1}{i_1} \cdot \ldots \cdot \binom{l_k}{i_k}. \tag{38} \]
Then, it is possible to write the polynomial $p(x)$ defined in (32) as
\[ p(x) = \sum_{I \leq N} b_I B_I^N(x) \tag{39} \]
where the polynomials $\{B_I^N(x)\}_{I \leq N}$ represent the Bernstein basis of the considered polynomial space, with the following explicit expression
\[ B_I^N(x) = B_{i_1}^{n_1}(x_1) \cdot B_{i_2}^{n_2}(x_2) \cdot \ldots \cdot B_{i_k}^{n_k}(x_k) \tag{40} \]
and
\[ B_{i_j}^{n_j}(x_j) = \binom{n_j}{i_j} \frac{(x_j - \underline{x}_j)^{i_j} (\overline{x}_j - x_j)^{n_j - i_j}}{(\overline{x}_j - \underline{x}_j)^{n_j}} \qquad i_j \in \{0, \ldots, n_j\} \quad j \in \{1, \ldots, k\}. \]
The coefficients $b_I$ are related to the $c_I$ (and, hence, to the $a_I$) according to
\[ b_I = \sum_{J \leq I} \frac{\binom{I}{J}}{\binom{N}{J}} c_J \qquad I \leq N. \tag{41} \]
Notably, the range of the polynomial $p(x)$ satisfies
\[ \mathrm{range}(p) \subseteq \left[ \min_{I \leq N} \{b_I\}, \; \max_{I \leq N} \{b_I\} \right]. \tag{42} \]
As in the univariate case, such a well-known property of the Bernstein form is useful to study polynomial constraints because it provides effective lower and upper bounds of a polynomial, and it can be used to study its sign in a box.

We now aim at deriving the MBF of the polynomial $p(x)$ starting from its Bernstein form in (39). In order to do so, let us consider the modified Bernstein basis given by the following set of polynomials
\[ \tilde{B}_I^N(x) = \tilde{B}_{i_1}^{n_1}(x_1) \cdot \tilde{B}_{i_2}^{n_2}(x_2) \cdot \ldots \cdot \tilde{B}_{i_k}^{n_k}(x_k) \tag{43} \]
where
\[ \tilde{B}_{i_j}^{n_j}(x_j) = \frac{(x_j - \underline{x}_j)^{i_j} (\overline{x}_j - x_j)^{n_j - i_j}}{(\overline{x}_j - \underline{x}_j)^{n_j}} \qquad i_j \in \{0, \ldots, n_j\} \quad j \in \{1, \ldots, k\}. \]
Using such a basis, the polynomial $p(x)$ can be written as
\[ p(x) = \sum_{I \leq N} \tilde{b}_I \tilde{B}_I^N(x) \tag{44} \]
where the coefficients $\{\tilde{b}_I\}_{I \leq N}$ are
\[ \tilde{b}_I = \sum_{J \leq I} \frac{\binom{N}{I} \binom{I}{J}}{\binom{N}{J}} c_J = \sum_{J \leq I} \binom{N-J}{I-J} c_J \qquad I \leq N. \tag{45} \]
Observe that, since all the factors involved in (45) are integer numbers, the coefficients $\{\tilde{b}_I\}_{I \leq N}$ of the polynomial $p(x)$ in the MBF are integer numbers. Moreover, the binomial coefficients in (45) can be computed with no divisions using (30). This allows evaluating the coefficients $\{\tilde{b}_I\}_{I \leq N}$ with no approximation. As in the univariate case, even if condition (42) holds only for the coefficients $\{b_I\}_{I \leq N}$, we can focus on the coefficients $\{\tilde{b}_I\}_{I \leq N}$ since we are only interested in their signs.
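A minimal sketch of the multivariate computation, obtained by transcribing (37) and (45) directly in Python over the dictionary representation used earlier, is given below; the function name is ours and the code is illustrative rather than a description of the implementation mentioned in Section 1.

```python
from itertools import product
from math import comb

def mbf_coefficients_mv(a, lo, hi):
    """Modified Bernstein coefficients of a k-variate integer polynomial.

    `a` maps exponent tuples I = (i_1, ..., i_k) to integer coefficients a_I,
    and `lo`, `hi` are the corners of the box.  Direct transcription of
    (37) and (45); everything stays within integer arithmetic.
    """
    k = len(lo)
    N = tuple(max(I[j] for I in a) for j in range(k))        # component-wise degrees
    idx = list(product(*(range(n + 1) for n in N)))          # all multi-indices I <= N
    c = {}                                                   # c_I as in (37)
    for I in idx:
        total = 0
        for L, aL in a.items():
            if all(L[j] >= I[j] for j in range(k)):
                term = aL
                for j in range(k):
                    term *= comb(L[j], I[j]) * lo[j] ** (L[j] - I[j]) * (hi[j] - lo[j]) ** I[j]
                total += term
        c[I] = total
    bt = {}                                                  # b~_I as in (45)
    for I in idx:
        total = 0
        for J in idx:
            if all(J[j] <= I[j] for j in range(k)):
                term = c[J]
                for j in range(k):
                    term *= comb(N[j] - J[j], I[j] - J[j])
                total += term
        bt[I] = total
    return bt

# p(x, y) = x*y - 21 on [-10..10]^2, the second example of Section 1: the
# coefficients range from -121 to 79, so neither sign test is conclusive on
# the whole box and the solver has to split (cf. Section 4).
print(mbf_coefficients_mv({(1, 1): 1, (0, 0): -21}, (-10, -10), (10, 10)))
```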
4 Examples

The first motivating example proposed in Section 1 is

X in -10..10, X^2 #>= 9

and such a constraint can be rewritten in canonical form and interpreted as
\[ x^2 - 9 \geq 0 \qquad x \in [-10..10]. \tag{46} \]
The trace of the solving process for such a constraint is shown in Figure 2, where light blue nodes are non-terminal nodes, green nodes are terminal nodes with consistent domains, and red nodes are terminal nodes with inconsistent domains. In non-terminal nodes, we denote by $\tilde{l}$ and $\tilde{u}$ a negative value and a positive value of $\{\tilde{b}_i\}_{i=0}^{2}$, respectively. The figure shows that after 7 branches the algorithm computes the correct answer
\[ x \in [-10..-3] \cup [3..10]. \tag{47} \]

Fig. 2. Description of the refinement of the domain of variable x in (46).

The second motivating example shown in Section 1 is

X in -10..10, Y in -10..10, X*Y #>= 21

and such a constraint can be rewritten in canonical form and interpreted as
\[ xy - 21 \geq 0 \qquad x \in [-10..10] \quad y \in [-10..10]. \tag{48} \]
The trace of the solving process is too complex to be shown because it comprises 41 branches. After all such branches, the algorithm computes the correct answer
\[ x \in [-10..-3] \cup [3..10] \qquad y \in [-10..-3] \cup [3..10]. \tag{49} \]
Finally, the following is a well-known example used to show that the CLP(FD) solver available in SWI-Prolog (version 6.6.6) sometimes does not capture inconsistent constraints

X in 1..2, Y in 1..2, Z in 1..2, X #\= Y, X #\= Z, Z #\= Y.

The expected result is false, but the solver does not change the constraint, even if all_different is used instead of the three disequalities. Such a constraint can be written in canonical form and interpreted as
\[ x^2 - 2xy + y^2 - 1 \geq 0 \qquad x^2 - 2xz + z^2 - 1 \geq 0 \qquad y^2 - 2yz + z^2 - 1 \geq 0. \tag{50} \]
Figure 3 shows the solving process of the proposed algorithm, where, after 5 branches, all terminal nodes are red, i.e., the constraint is inconsistent. In non-terminal nodes, we denote by $\tilde{l}$ and $\tilde{u}$ a negative value and a positive value of $\{\tilde{b}_I\}_{I \leq N}$, respectively, relative to a constraint.

Fig. 3. Description of the refinement of the domains of variables x, y and z in (50).
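As a cross-check, the illustrative sketches given in the previous sections can be combined to reproduce example (46). The helper `mbf_bounds` below is an assumption of ours: it feeds the extrema of the modified Bernstein coefficients to the `solve` sketch of Section 2.3, and the boxes it returns cover exactly the interval union in (47).

```python
# Reuses solve() from Section 2.3 and mbf_coefficients_mv() from Section 3.2
# (both are illustrative sketches, not the Java implementation of Section 1).
def mbf_bounds(p, box):
    lo, hi = zip(*box)
    coeffs = mbf_coefficients_mv(p, lo, hi).values()
    return min(coeffs), max(coeffs)    # only the signs are used by solve()

# Example (46): x^2 - 9 >= 0 with x in [-10..10]
boxes = solve([{(2,): 1, (0,): -9}], [(-10, 10)], mbf_bounds)
print(boxes)   # [[(-10, -5)], [(-4, -3)], [(3, 3)], [(4, 5)], [(6, 10)]]
               # i.e. x in [-10..-3] \/ [3..10], matching (47) and Figure 2
```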
5 Conclusion

In this paper we presented a solver for polynomial constraints over finite domains. Such constraints are expressed as polynomials with integer coefficients whose variables are defined over finite subsets of the integer numbers. The proposed algorithm uses the MBF of polynomials to effectively compute the signs of suitable lower and upper bounds of polynomials, and it uses such signs to reason on constraints and to remove inadmissible values from the domains of variables. It is worth noting that the algorithm only involves algebraic manipulations over integer numbers because the coefficients of the MBF of a polynomial with integer coefficients are provably integer. The availability of arbitrary-precision integer arithmetic is therefore sufficient to ensure that no approximation is involved in the solution process.

The proposed algorithm is open to further developments regarding possible optimizations of, at least, two important choices that impact significantly on the performance of the solution process. First, the choice of the variable used to split the current box is crucial. Second, once a variable is selected, simply halving its current domain can be too simplistic. For both choices a number of static and dynamic techniques can be found in the literature (see, e.g., [3]), and we think that both choices could benefit from the peculiar form of the constraints that we address. In particular, we plan to investigate how the MBF could help in the computation of the gradient of polynomials, which is valuable information for both such choices. The current implementation of the proposed algorithm is available for download (cmt.dmi.unipr.it/software/clppolyfd.zip) as a reusable Java library that is also usable as a drop-in replacement of the CLP(FD) solver that ships with SWI-Prolog.

References

1. C. Borralleras, S. Lucas, A. Oliveras, E. Rodríguez-Carbonell, and A. Rubio. SAT modulo linear arithmetic for solving polynomial constraints. Journal of Automated Reasoning, 48(1):107–131, 2010.
2. R. T. Farouki and V. T. Rajan. Algorithms for polynomials in Bernstein form. Computer Aided Geometric Design, 5(1):1–26, 1988.
3. B. Mourrain and J. P. Pavone. Subdivision methods for solving polynomial equations. Journal of Symbolic Computation, 44(3):292–306, 2009.
4. S. Ray and P. S. V. Nataraj. An efficient algorithm for range computation of polynomials using the Bernstein form. Journal of Global Optimization, 45:403–426, 2009.
5. J. Sanchez-Reyes. Algebraic manipulation in the Bernstein form made simple via convolutions. Computer-Aided Design, 35:959–967, 2003.
6. M. Triska. The finite domain constraint solver of SWI-Prolog. In Procs. 11th Int'l Symp. Functional and Logic Programming (FLOPS 2012), pages 307–316, Berlin, 2012. Springer.