                         Faster bivariate lexicographic Gröbner bases modulo xk
                         Xavier DAHAN1
                         1
Tohoku University, IEHE, Sendai, Japan


                                      Abstract
Given t bivariate polynomials f1 , . . . , ft ∈ K[x, y] and an integer k, we report work in progress on computing a minimal, not reduced, lexicographic Gröbner basis of the ideal ⟨f1 , . . . , ft , x^k⟩ in O∼(td^2 k), where d is an upper bound on the y-degree of the fi 's. Using the fast normal form algorithm of Schost & St-Pierre [1], this implies that the reduced Gröbner basis can be computed in O∼(td^2 k + s^2 dk), where s is the number of polynomials in the output Gröbner basis. In many instances this improves on the algorithm of Schost & St-Pierre [2], based on the Howell matrix normal form, which runs in time O∼(td^ω k).

                                      Keywords
                                      Gröbner bases, Lexicographic order, Bivariate, Euclidean division




                         1. Introduction
                         Background Lexicographic Gröbner bases (lexGb for short) play a fundamental role when
                         manipulating polynomial systems due to the elimination property that they are endowed with.
But the lexicographic order often does not behave well with standard Gröbner basis algorithms [3],
whereas the degree reverse lexicographic order has often been observed to behave best among
monomial orders when computing a Gröbner basis. This is grounded in strong theoretical
evidence [4, 5]. As a result, a standard strategy to compute a zero-dimensional lexGb consists
in first computing a Gröbner basis for the degree reverse lexicographic order with efficient modern
algorithms like F4, F5 [6, 7], and then applying a change of order algorithm [8] to compute the
reduced lexGb.
In the case of two variables only, the situation is different. For t polynomials f1 , . . . , ft ∈ K[x, y],
of maximal degree d in y and total degree dtot , Schost and St-Pierre [2] obtain a running
time of O∼(t^ω d^ω dtot) when at least one fi has leading monomial y^d. As usual, ω is the
exponent of matrix multiplication. This algorithm computes the Hermite normal form of a
generalized Sylvester matrix of the input polynomials, extending the case t = 2 treated by
Lazard [9, Section 5].
The same article [2] also considers computing the reduced lexGb modulo x^k, that is, of the
ideal ⟨f1 , . . . , ft , x^k⟩. The idea is to work in the ring R = K[x]/x^k and to compute the Howell
normal form of a generalized Sylvester matrix of the fi ∈ R[y]. The Howell normal form is the
adaptation of the Hermite normal form to matrices of polynomials with coefficients in R. The
cost can be brought to O∼(td^ω k).

Result We consider here the more general moduli P^k, for P an irreducible polynomial in K[x]
of degree dP. In this paragraph we let dP = 1 to simplify comparisons. Our algorithm, based on
Euclidean division, runs in O∼(td^2 k) and computes a minimal but not reduced Gröbner basis.
   Schost & St-Pierre's normal form algorithm. The cost of reducing this minimal lexGb comes from
another article of Schost & St-Pierre [1, Section 4] and is better stated in terms of the output lexGb
H = (hs , . . . , h0 ). Assume that lm(hs) ≺ · · · ≺ lm(h0), where ≺ stands for the lexicographic

                         SCSS 2024: 10th International Symposium on Symbolic Computation in Software Science, August 28–30, 2024,
                         Tokyo, Japan
                         email: xdahan@gmail.com (X. DAHAN)
                         orcid: 0000-0001-6042-6132 (X. DAHAN)
                                      © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International
                                      (CC BY 4.0).


CEUR Workshop Proceedings (ceur-ws.org, ISSN 1613-0073)




order with x ≺ y. Under our setting we have hs = P^k, and if we let degx(P) = 1, as when
P = x, then degx(hs) = k. Let degy(h0) = n0 be the largest y-degree of the polynomials in
H, and let s + 1 = |H| be the number of polynomials in the output. Then reducing H costs
O∼(s^2 n0 k) (see [1, Prop. 4.4 & Prop. 5.1]).
   Note that s ≤ min{k, d} and n0 ≤ d. So we obtain an algorithm that computes the reduced
lexGb in O∼(td^2 k + s^2 n0 k), always within O∼(td^2 k + s^2 dk). This is comparable to or better than
O∼(td^ω k) as soon as s^2 n0 = O(td^ω). In the case t = O(1) and s ≈ d = k, we obtain O∼(d^4),
which is worse than O∼(d^(ω+1)). But in many cases, the asymptotic complexity is better.
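As a back-of-envelope illustration, the two bounds can be compared numerically once the soft-O factors are dropped. The script and its parameter values below are ours and purely illustrative:

```python
# Back-of-envelope comparison of the two cost bounds (soft-O factors
# dropped).  The parameter values are illustrative only.

OMEGA = 2.373  # a feasible exponent of matrix multiplication

def euclidean_bound(t, d, k, s, n0):
    """Euclidean-division algorithm plus reduction: t*d^2*k + s^2*n0*k."""
    return t * d**2 * k + s**2 * n0 * k

def howell_bound(t, d, k):
    """Howell-form approach of Schost & St-Pierre: t*d^omega*k."""
    return t * d**OMEGA * k

# Worst case t = O(1), s ~ d = k: d^4 versus d^(omega+1).
print(euclidean_bound(1, 100, 100, 100, 100) > howell_bound(1, 100, 100))  # True

# Small output basis (s = 2): the Euclidean bound wins.
print(euclidean_bound(1, 100, 100, 2, 100) < howell_bound(1, 100, 100))   # True
```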

Implementation The articles [2, 1] do not mention any implementation. Indeed, implementations
of the Howell normal form are apparently rare and not publicly available — at least in major
computer algebra systems. We only found Chapter 4 of the PhD thesis of Storjohann, which gives
a complexity of O∼(d^ω k), see [10, Theorem 4.6], and from which the result of [2] originates.
   On the other hand, since the presented algorithm computing a minimal non-reduced lexGb
in O∼(td^2 k) can be compared to the quadratic-time standard Euclidean algorithm, it is efficient
up to fast univariate arithmetic (polynomial operations involving the x-variable only). Our
implementation in Magma (available at http://xdahan.sakura.ne.jp/lexgb24.html) supports this
claim. As for the normal form algorithm modulo a reduced Gröbner basis of Schost & St-Pierre [1,
Section 4], making an implementation efficient is an interesting challenge. For now we resorted
to the internal Magma command "Reduce" (shown in orange in the timings).

Some timings We tested the algorithm for two input polynomials modulo x^k:

   a_k ≡ ∏_{i=1}^{k} (y + i + x + · · · + x^i),      b_k ≡ (y + 1 + 2x) ∏_{i=2}^{k} (y + i + x + · · · + x^{i−1} + 2x^i)

The reduced lexGb of ⟨ak , bk , x^k⟩ has s + 1 = k + 1 polynomials (the maximum possible) and is
dense (it has O(k^3) coefficients). Therefore, this family of examples is suitable for benchmarking
as it involves worst-case situations: the number of recursive calls is maximal, and the cost of the
normal form is maximal. Note also that taking only t = 2 polynomials as input is not restrictive,
since recursive calls involve more polynomials in the input. We report below on timings for
k = 30, 40, . . . , 120 (left) and k = 70, 140, . . . , 250 (right) over a prime finite field of 64 bits. With
t = O(1) and k ≈ d, the theoretical cost to obtain a minimal lexGb is O∼(k^3). The internal command
Reduce of Magma quickly becomes the bottleneck (in orange; for k > 200 the time exceeds 500 s
and timings are not displayed). Unsurprisingly, its timing grows faster than the cost of the fast version
of Schost & St-Pierre, which is here O∼(k^4) (it seems to be closer to something in O∼(k^5)).
Although we could not compare with the Howell form approach of [2], we could compare with




the internal Gröbner engine of Magma by calling GroebnerBasis([a, b, x^k]) (red, until k = 100).
As already reported in [11], its timings are incomparably slower.
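The experiments themselves were run in Magma, but the input family is easy to reproduce in any system. Below is a sympy sketch for a small k (the prime p = 101 and k = 3 are illustrative choices of ours, not the benchmark parameters), which also checks two structural properties of the reduced lexGb implied by Lazard's theorem:

```python
# Reproducing the benchmark family for a small k with sympy (the
# paper's experiments use Magma; p = 101 and k = 3 are illustrative
# choices, not the benchmark parameters).
from sympy import symbols, groebner, prod

x, y = symbols('x y')
p, k = 101, 3

a = prod(y + i + sum(x**j for j in range(1, i + 1)) for i in range(1, k + 1))
b = (y + 1 + 2*x) * prod(y + i + sum(x**j for j in range(1, i)) + 2*x**i
                         for i in range(2, k + 1))

# Lexicographic order with x ≺ y: list y before x.
G = groebner([a, b, x**k], y, x, order='lex', modulus=p)

# Leading y-degrees of the basis elements; by Lazard's structure they
# are pairwise distinct, and the basis contains a univariate polynomial.
degs = [g.degree(y) for g in G.polys]
```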

Scope The motivation behind working modulo P^k is twofold. Firstly, this serves as a skeleton for
a similar algorithm that tackles the more general input ⟨f1 , . . . , ft , T⟩ for an arbitrary polynomial
T ∈ K[x], not necessarily the power of an irreducible one. See [11], which utilizes dynamic
evaluation, for a detailed account when t = 2. Secondly, we would like to target the reduced lexGb
H of the general input ⟨f1 , . . . , ft⟩, not modulo a univariate polynomial, with our Euclidean-
division based algorithm. After the work [11], a natural question is how to compute the
lexGb of two polynomials f1 , f2 from their subresultant sequence. To this end, it is enlightening
to access the lexGbs modulo Pi^{ki}, where Pi^{ki} runs over the primary factors of the elimination
polynomial of the (fj)j 's. The work [12] then shows how these lexGbs can be used to
reconstruct H via Chinese remaindering. These remarks give a reasonable hope of computing a
minimal lexGb faster than the O∼(t^ω d^ω dtot) of [2].
   Treating only two variables is clearly limited. Yet, the bivariate case should first be fully mastered,
since there is a cliff in difficulty when considering more than two variables: there is no general form
of Lazard's structural theorem [9], which is key in this work and in [2, 1]. Let us mention though the
radical case, where some generalizations of Lazard's theorem have been shown [13, 14, 15]. The
Euclidean algorithm based approach certainly helps to understand where this difficulty stems
from. One aspect of it can be related to the absence of a MonicForm routine (see Eq. (1))
that would transform a polynomial that is nilpotent modulo a primary ideal in K[x, y], say a
polynomial in K[x, y, z], into a monic one. Think of f = xz^2 + yz + x + y, nilpotent modulo the
primary ⟨x^2, y^2⟩ ⊂ K[x, y]. It appears that the reduced lexGb of the ideal ⟨f, y^2, x^2⟩ is
[f, y^2, xy, x^2]. Observe the new polynomial xy introduced with the smaller variables x and y.
This phenomenon does not appear with two variables only. Therefore, the Euclidean division
approach helps to better understand the case of three variables or more.
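The three-variable example can be verified directly in a computer algebra system; a sympy sketch over the rationals (our illustration, not part of the paper's implementation):

```python
# The reduced lexGb of <f, y^2, x^2> acquires the extra generator x*y.
# Sketch in sympy, over QQ.
from sympy import symbols, groebner

x, y, z = symbols('x y z')
f = x*z**2 + y*z + x + y   # nilpotent modulo the primary ideal <x^2, y^2>

# Lexicographic order with x ≺ y ≺ z: list the variables as z, y, x.
G = groebner([f, y**2, x**2], z, y, x, order='lex')
```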

Related works Recently, articles dealing with bivariate Gröbner bases have flourished. A
number of them address the question of quasi-optimal asymptotic complexity estimates, under
adequate genericity assumptions, and the relation with the resultant [16, 17, 18, 19, 20]. Focusing
on non-generic lexGbs, the work [11], from which the present work is inspired, generalizes dynamic
evaluation to a non-squarefree modulus. We have already cited [2, 1]. Besides the fast normal
form in Section 4, the article [1] introduces a fast Newton iteration for general bivariate lexGbs.


2. The algorithm
Overview The algorithm is based on ideas introduced in [11], which is restricted to two input polynomials
a and b. Let us summarize the content of the first part of [11], which focuses on working modulo
P^k (the second part focuses on working modulo an arbitrary monic univariate polynomial). The
divisions occurring in the Euclidean algorithm applied to a and b modulo P^k require invertible leading
coefficients. In the ring R = K[x]/⟨P^k⟩, elements are either invertible or nilpotent. The Weierstrass
preparation theorem, realized by Hensel lifting, circumvents this difficulty by computing
a "monic form": given f̃ ∈ K[x, y] reduced modulo P^k, we denote f, Cf ← MonicForm(f̃, P^k),
where:

       Cf = gcd(content(f̃), P^k) ∈ K[x],        f is monic in y,       ⟨f̃, P^k⟩ = ⟨Cf f, P^k⟩.   (1)

The Euclidean algorithm can be pursued with the monic f, and Cf f will be part of the lexGb.
   We adapt this strategy to design the main algorithm H ← Add(f, G), where G is a minimal
lexGb such that G ∩ K[x] = ⟨P^{k′}⟩ with k′ ≤ k, f ∈ K[x, y], and H is a minimal lexGb of ⟨f⟩ + ⟨G⟩.
Assuming for the moment that this algorithm is correct and runs in time O∼(d^2 k′), the general
Algorithm 1 "lexGb" has the following worst-case complexity:




 Algorithm 1: G ← lexGb(f1 , . . . , ft ; P^k)
     Input: Bivariate polynomials f1 , . . . , ft . Power of an irreducible polynomial P^k ∈ K[x].
     Output: reduced lexGb of ⟨f1 , . . . , ft , P^k⟩
1 f1 , . . . , ft ← f1 mod P^k , . . . , ft mod P^k                                    // O∼(t d dx) or free
2 G ← [P^k]
3 for i = 1, . . . , t do
4     G ← Add(fi , G)                                                                  // O∼(d^2 k dP)
5 return Reduce(G)                 // Reduce based on the normal form of [1, Section 4]. O∼(s^2 n0 k)



Theorem 1. Let d be the maximal degree in y of the polynomials f1 , . . . , ft . Let dx be their maximal
degree in x. Let dP = degx(P). Algorithm 1 computes a minimal lexGb of ⟨f1 , . . . , ft , P^k⟩ in
O∼(td^2 kdP + tddx).
  If the input polynomials are reduced modulo P^k, or if P = x, then the cost is O∼(td^2 kdP).
  The reduced lexGb requires additionally O∼(s^2 n0 k) operations in K, where s = |G| and n0 =
degy(g0) is the largest y-degree of the polynomials in the output.



 Algorithm 2: Add(g, G)
     Input: g ∈ K[x, y], G = [g0 , . . . , gs ] minimal lexGb modulo P^k = gs
     Output: minimal lexGb of ⟨g⟩ + ⟨G⟩
6  if G == [constant] then
7      return [1]
8  f, Cf ← MonicForm(g, P^k)                                 // ⟨Cf f, P^k⟩ = ⟨g, P^k⟩, f monic, Cf ∈ K[x]
9  if f == 1 then
10     return AddUnivariate(Cf , G)                   // Special "easy" case where input polynomial ∈ K[x]
11 if |G| == 1 then
12     return [Cf f, P^k]
13 return AddGeneric(f, Cf , G)                                         // Output generates ⟨Cf f⟩ + ⟨G⟩


The main algorithm 2 "Add" The purpose is, given a minimal lexGb G as above (not necessarily
zero-dimensional) and a polynomial f ∈ K[x, y], to construct a minimal lexGb of the ideal ⟨f⟩ + ⟨G⟩.
It is thus of independent interest. It builds upon Euclidean divisions: the key point consists
in obtaining a decrease of the degree in y through a Euclidean division (see Lines 24 and 34), and
then proceeding to adequate recursive calls with smaller input data (Lines 21 and 23). The
algorithm 2 "Add" actually only treats base cases, and then calls Algorithm 3 "AddGeneric",
whose inputs are amenable to recursive calls. One base case is when f ∈ K[x] (Line 10), treated
apart in the "easy" AddUnivariate. We omit this short algorithm in this work-in-progress
report. Otherwise Algorithm 3 AddGeneric, called at Line 13, treats "generic" input: f monic,
reduced modulo P^k, and df := degy(f) ≥ 1. Its role essentially boils down to managing four
cases. Write G = [g0 , . . . , gs ] (lm(gs) ≺ · · · ≺ lm(g0), so that gs = P^k and degy(g0) = n0).
     1. Case distinction: ℓ > k or ℓ ≤ k, writing Cf = P^ℓ     (equivalently Cf ∤ gs = P^k or Cf | gs)
     2. Subcase distinction: df ≤ n0 or df > n0
The first case distinction is treated by renaming variables (if-test at Line 16). The subcase
distinction (if-test at Line 20) leads to calling two subroutines, AddTwoA and AddTwoB, which
look very similar but have key differences.

Algorithms AddTwoA and AddTwoB The inputs are monic bivariate polynomials a, b, monic
univariate polynomials Ca , Cb which are powers of P, and a minimal lexGb G modulo P^k.




  Algorithm 3: AddGeneric(f, Cf , G)
     Input: f ∈ K[x, y] monic, degy(f) ≥ 1, Cf = P^t ∈ K[x], G minimal lexGb modulo P^k
     Output: minimal lexGb of ⟨Cf f⟩ + ⟨G⟩
14 Let G = [g0 , . . . , gs−1 , P^k], and write g0 = M0 g0,y
15 Ca ← gcd(M0 , Cf)                                                             // Ca = Cf or Ca = M0
16 if Ca == Cf then
17     a ← f, b ← g0,y , Cb ← M0                                                            // Cb b = g0
18 else
19     b ← f, a ← g0,y , Cb ← Cf                                                            // Ca a = g0
20 if degy(a) > degy(b) then                  // Always holds ⟨Ca a, Cb b, P^k⟩ = ⟨Cf f, g0 , P^k⟩
21     return AddTwoA(a, b, Ca , Cb , G≥1)              // lexGb of ⟨Ca a, Cb b⟩ + ⟨g1 , . . . , gs ⟩
22 else
23     return AddTwoB(a, b, Ca , Cb , G≥1)              // lexGb of ⟨Ca a, Cb b⟩ + ⟨g1 , . . . , gs ⟩



Additional degree constraints on a, b, Ca , Cb depend on which of the two algorithms is called. The
output is a minimal lexGb of ⟨Ca a, Cb b⟩ + ⟨G⟩ (whence the name "AddTwo"). The key point is the
degree decrease obtained by the Euclidean division at Lines 24 and 34. Then they undertake
recursive calls. These divisions ultimately account for the complexity stated in Theorem 1.

  Algorithm 4: AddTwoA(a, b, Ca , Cb , G)
     Input: 1. a, b ∈ K[x, y] monic, degy(a) > degy(b)
     2. Ca , Cb ∈ K[x] powers of P, Ca | Cb | P^k,
     3. G minimal lexGb modulo P^k, and degy(a) > degy(G)
     Output: minimal lexGb of ⟨Ca a, Cb b⟩ + ⟨G⟩
24 r ← a mod b       // b monic. Over R = K[x]/⟨P^k/Cb⟩. It holds ⟨Cb a, Cb b, P^k⟩ = ⟨Cb b, Cb r, P^k⟩
25 if r ≡ 0 mod g1/Cb then                                    // Here ⟨Cb a, Cb b, P^k⟩ = ⟨Cb b, P^k⟩
26     G′′ ← Add(b, (1/Cb) G)                                          // Here ⟨Cb G′′⟩ = ⟨Cb b⟩ + ⟨G⟩
27 else
28     G′ ← Add(r, (1/Cb) G)                                                  // ⟨Cb G′⟩ = ⟨Cb r⟩ + ⟨G⟩
29     G′′ ← Add(b, G′)        // ⟨Cb G′′⟩ = ⟨Cb b⟩ + ⟨Cb G′⟩ = ⟨Cb b, Cb r⟩ + ⟨G⟩ = ⟨Cb b, Cb a⟩ + ⟨G⟩
30 if Ca == Cb then
31     return Cb · G′′                                              // Here ⟨Cb G′′⟩ = ⟨Cb b, Ca a⟩ + ⟨G⟩
32 else
33     return [Ca a] cat Cb · G′′              // output generates ⟨Ca a⟩ + ⟨Cb G′′⟩ = ⟨Ca a, Cb b⟩ + ⟨G⟩




  Algorithm 5: AddTwoB(a, b, Ca , Cb , G)
     Input: 1. a, b ∈ K[x, y] monic, degy(a) ≤ degy(b),
     2. Ca , Cb ∈ K[x] powers of P, Ca | Cb | P^k,
     3. G minimal lexGb modulo P^k, degy(b) > degy(G)
     Output: minimal lexGb of ⟨Ca a, Cb b⟩ + ⟨G⟩
34 r ← (Cb/Ca) b mod a   // a monic. Over R = K[x]/⟨P^k/Ca⟩. It holds ⟨Ca r, Ca a, P^k⟩ = ⟨Cb b, Ca a, P^k⟩.
35 if r ≡ 0 mod P^k/Ca then                                    // Here ⟨Ca a, P^k⟩ = ⟨Ca a, Cb b, P^k⟩
36     return Ca · Add(a, (1/Ca) G)                                    // Output generates ⟨Ca a⟩ + ⟨G⟩
37 else
38     G′ ← Add(r, (1/Ca) G)                                                  // ⟨Ca G′⟩ = ⟨Ca r⟩ + ⟨G⟩
39     return Ca · Add(a, G′)               // Output generates ⟨Ca a⟩ + ⟨Ca G′⟩ = ⟨Ca a, Cb b⟩ + ⟨G⟩




References
 [1] É. Schost, C. St-Pierre, Newton iteration for lexicographic Gröbner bases in two variables,
     Journal of Algebra 653 (2024) 325–377.
 [2] É. Schost, C. St-Pierre, p-adic algorithms for bivariate Gröbner bases, in: Proceedings of
     the 2023 International Symposium on Symbolic and Algebraic Computation, ACM, New
     York, NY, USA, 2023.
 [3] K. Kalorkoti, Counting and Gröbner bases, J. Symbolic Computation 31 (2001) 307–313.
 [4] D. Bayer, M. Stillman, A criterion for detecting m-regularity, Inventiones mathematicae 87
     (1987) 1–11.
 [5] H. Loh, The converse of a theorem by Bayer and Stillman, Advances in Applied Mathematics
     80 (2016) 62–69.
 [6] J.-C. Faugère, A new efficient algorithm for computing Gröbner bases (F4), Journal of pure
     and applied algebra 139 (1999) 61–88.
 [7] J.-C. Faugère, A new efficient algorithm for computing Gröbner bases without reduction to
     zero (F5), in: Proceedings of the 2002 international symposium on Symbolic and algebraic
     computation, 2002, pp. 75–83.
 [8] J.-C. Faugère, P. Gianni, D. Lazard, T. Mora, Efficient computation of zero-dimensional
     Gröbner bases by change of ordering, Journal of Symbolic Computation 16 (1993) 329–344.
 [9] D. Lazard, Ideal bases and primary decomposition: case of two variables, Journal of Symbolic
     Computation 1 (1985) 261–270.
[10] A. Storjohann, Algorithms for matrix canonical forms, Ph.D. thesis, ETH Zürich, 2000.
[11] X. Dahan, Lexicographic Gröbner bases of bivariate polynomials modulo a univariate one,
     Journal of Symbolic Computation 110 (2022) 24–65.
[12] X. Dahan, Chinese remainder theorem for bivariate lexicographic Gröbner bases, in: Proceed-
     ings of the 2023 ACM International Symposium on Symbolic and Algebraic Computation,
     ISSAC ’23, ACM press, New York, NY, USA, 2023.
[13] M. Lederer, The vanishing ideal of a finite set of closed points in affine space, J. of Pure
     and Applied Algebra 212 (2008) 1116–1133.
[14] M. Marinari, T. Mora, A remark on a remark by Macaulay or enhancing Lazard structural
     theorem, Bull. Iranian Math. Soc. 29 (2003) 1–45, 85.
[15] B. Felszeghy, B. Ráth, L. Rónyai, The lex game and some applications, Journal of Symbolic
     Computation 41 (2006) 663–681.
[16] G. Villard, On computing the resultant of generic bivariate polynomials, in: Proceedings of
     the 2018 ACM International Symposium on Symbolic and Algebraic Computation, ISSAC
     ’18, ACM, 2018, pp. 391–398.
[17] G. Villard, Elimination ideal and bivariate resultant over finite fields, in: Proceedings of
     the 2023 ACM International Symposium on Symbolic and Algebraic Computation, ISSAC
     ’23, ACM press, New York, NY, USA, 2023.
[18] J. van der Hoeven, R. Larrieu, Fast reduction of bivariate polynomials with respect to
     sufficiently regular Gröbner bases, in: Proceedings of the 2018 ACM International Symposium
     on Symbolic and Algebraic Computation, ISSAC ’18, ACM, New York, NY, USA, 2018, pp.
     199–206.
[19] J. van der Hoeven, R. Larrieu, Fast Gröbner basis and polynomial reduction for generic
     bivariate ideal, Applicable Algebra in Engineering, Communication and Computing 30
     (2019) 509–539.
[20] J. van der Hoeven, G. Lecerf, Fast computation of generic bivariate resultants, Journal of
     Complexity 62 (2021) 101499.



