<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Workshop on Satisfiability Checking and Symbolic Computation, August</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>SC-Square: Future Progress with Machine Learning?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Matthew England</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Coventry University</institution>
          ,
          <addr-line>Coventry</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>1</volume>
      <fpage>9</fpage>
      <lpage>20</lpage>
      <abstract>
        <p>The algorithms employed by our communities are often underspecified, and thus have multiple implementation choices, which do not affect the correctness of the output, but do impact the efficiency or even tractability of its production. In this extended abstract, to accompany a keynote talk at the 2021 SC-Square Workshop, we survey recent work (both the author's and from the literature) on the use of Machine Learning technology to improve algorithms of interest to SC-Square.</p>
      </abstract>
      <kwd-group>
        <kwd>machine learning</kwd>
        <kwd>symbolic computation</kwd>
        <kwd>computer algebra systems</kwd>
        <kwd>satisfiability checking</kwd>
        <kwd>SMT solvers</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. ML Classification for CAD Variable Ordering</title>
      <sec id="sec-2-1">
        <title>2.1. Cylindrical algebraic decomposition</title>
        <p>A Cylindrical Algebraic Decomposition (CAD) is a decomposition of ordered R^n space into cells
arranged cylindrically, meaning the projections of cells are all arranged within cylinders. The
cells are semi-algebraic, meaning each may be described by a finite sequence of polynomial
constraints. A CAD is usually produced for a set of polynomials such that each polynomial has
constant sign on each cell. This allows us to query a finite set of sample points to understand
the behaviour of the polynomials (or a logical formula involving them) everywhere.</p>
        <p>
          The most important application of CAD is to perform Quantifier Elimination (QE) over the
reals: given a quantified formula, find an equivalent quantifier-free formula. E.g., QE would
transform ∃x. ax^2 + bx + c = 0 ∧ a ≠ 0 into the equivalent b^2 − 4ac ≥ 0. Although CAD
emerged in the Symbolic Computation community, since SAT is a sub-problem of QE, it can
be used to tackle problems in the NRA and QF_NRA logics of the SMT-LIB. Adaptations of the
original CAD algorithm have been designed for use in SMT [25], [27], [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] and we also note the
adaptation [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] which is for general QE but with features inspired by Satisfiability Checking.
        </p>
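        <p>The quadratic example above can be sanity-checked numerically. Below is a minimal sketch (illustrating the equivalence only, not the CAD algorithm): it samples random coefficients and compares the existence of a real root, found via the quadratic formula, against the quantifier-free discriminant condition.</p>

```python
import random

def has_real_root(a, b, c):
    """Quantifier-free condition: a*x^2 + b*x + c = 0 has a real solution, with a != 0."""
    if a == 0:
        return False
    return b * b - 4 * a * c >= 0  # non-negative discriminant

def root_exists_by_search(a, b, c):
    """Check the existential claim directly by constructing a witness x."""
    if a == 0:
        return False
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    x = (-b + disc ** 0.5) / (2 * a)   # quadratic formula witness
    return abs(a * x * x + b * x + c) < 1e-6

random.seed(0)
for _ in range(1000):
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    assert has_real_root(a, b, c) == root_exists_by_search(a, b, c)
print("discriminant condition agrees with witness search on 1000 samples")
```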
        <p>CAD was introduced in 1975 [11] and is still an active area of research: for a deeper
introduction see, for example, the background section of [15].</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. CAD variable ordering choice</title>
        <p>
          CAD requires a variable ordering. For QE the ordering must match the quantification, but
variables in blocks of the same quantifier and the free variables can be swapped. So in the
example above we must decompose (a, b, c, x)-space with x last, but the other variables can
be in any order. The ordering can have a great effect on the time / memory use of CAD, the
number of cells, and even the underlying complexity [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. In our example using a ≺ b ≺ c
requires 27 cells but c ≺ b ≺ a requires 115. Human-designed heuristics [14], [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], [16]
usually make the choice in implementations. In 2014, we trained a Support Vector Machine
(SVM) to choose which of these heuristics to follow [24]. The SVM significantly outperformed
any one heuristic, identifying subclasses where each excelled. This led to an EPSRC project and
the work described in the remainder of this section.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Results from CICM 2019</title>
        <p>
The 2014 work chose between existing heuristics, in order to fix the number of classes. However,
there were many problems in the dataset where none of those heuristics gave an optimal choice.
So we revisited these experiments for CICM 2019 [17] this time allowing ML to predict the
optimal ordering directly (fixing the number of variables and thus classes). We explored a
variety of ML classification methods available in Python’s sklearn library [34]: K-nearest
neighbour classifiers, multi-layer perceptrons, decision trees, and support vector machines.
We used the CAD implementation in Maple’s Regular Chains Library [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. All the ML models
outperformed the human-made heuristics for our dataset.
        </p>
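        <p>The flavour of that comparison can be sketched as follows, using sklearn's classifier families as in [17]. The features and labels here are synthetic stand-ins (the real experiments used algebraic features of the polynomials and labels derived from CAD runtimes).</p>

```python
# Sketch of the model comparison in the style of [17]: four sklearn classifier
# families trained to predict a class label (standing in for a variable ordering)
# from per-problem features. All data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 5))        # stand-in features for each problem
y = X[:, :3].argmax(axis=1)     # stand-in label: "best" of three orderings

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.2f}")
```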
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Results from SC-Square 2019</title>
        <p>
          The first step in using such an ML classification model is to represent the input (in our case a
set of polynomials) as a vector of floating point numbers: the features. In [24] and [17] we
used measures of degree and frequency of occurrence for each variable, inspired by [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Then
for SC-Square 2019 [18] we developed a new feature generation procedure which evaluates
combinations of basic functions (average, sign, maximum) on the degrees of the variables for
individual polynomials and the system. The extra features improved the performance of all the
aforementioned ML models. Note that this feature generation procedure can be used for similar
classifications where the input is a set of polynomials.
        </p>
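        <p>The flavour of such a feature generation procedure can be sketched as below: basic functions (maximum, average, presence) are applied to the degrees of each variable, first within each polynomial and then across the system. The representation and feature names here are illustrative, not the exact feature set of [18].</p>

```python
# Sketch of feature generation in the spirit of [18]. A polynomial is a list of
# monomials; a monomial is a tuple of exponents (one per variable). Features for
# a variable combine max / average / presence of its degrees across the system.
from statistics import mean

def features_for_variable(system, var):
    degs_per_poly = [[mono[var] for mono in poly] for poly in system]
    return [
        max(max(d) for d in degs_per_poly),          # max degree of var in the system
        mean(max(d) for d in degs_per_poly),         # average over polynomials of max degree
        mean(mean(d) for d in degs_per_poly),        # average of average degree
        mean(float(any(d)) for d in degs_per_poly),  # proportion of polynomials containing var
    ]

# The system {x^2*y - 1, x + y^3} in variables (x, y):
system = [[(2, 1), (0, 0)], [(1, 0), (0, 3)]]
print(features_for_variable(system, 0))  # features for x
print(features_for_variable(system, 1))  # features for y
```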
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Results from MACIS 2019</title>
        <p>Metrics for judging a CAD variable ordering choice should correspond to CAD runtime (a
hardware-independent alternative metric would be the number of cells produced). The prior
work trained ML classifiers to pick the ordering with minimal runtime for a problem, with
selections deemed accurate only if that optimal ordering was chosen. However, this meant that
ML training did not distinguish between different non-optimal orderings, even though the
differences are often huge. For MACIS 2019 [19] we used an alternative definition of an accurate
choice: one leading to a runtime within x% of the minimum. We then wrote a new version of
the sklearn cross-validation procedure to select model hyper-parameters so as to minimise the
CAD runtime of the choices, rather than maximising the number of times the ordering that gives
the minimal time for a problem is taken. This improved the performance of all ML models. Note
that the new accuracy definition and procedure are suitable for any classification where we are
seeking to have ML make a choice to minimise computation time.</p>
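        <p>The idea of selecting hyper-parameters by the runtime of the choices, rather than by classification accuracy, can be sketched as below. The data, the candidate hyper-parameter values, and the use of a decision tree are all illustrative stand-ins, not the setup of [19].</p>

```python
# Sketch of runtime-minimising cross-validation in the spirit of [19]: for each
# candidate hyper-parameter, total the runtime incurred by the classifier's
# chosen orderings under k-fold CV, and keep the value with the lowest total.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n, n_orderings = 120, 6
X = rng.random((n, 8))                        # stand-in features per problem
runtimes = rng.random((n, n_orderings)) * 10  # stand-in runtime of each ordering
y = runtimes.argmin(axis=1)                   # optimal ordering used as training label

def cv_runtime(max_depth):
    """Total runtime incurred by the classifier's choices under 5-fold CV."""
    total = 0.0
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
        clf.fit(X[train], y[train])
        choices = clf.predict(X[test])
        total += runtimes[test, choices].sum()  # cost of the orderings actually chosen
    return total

best = min([2, 4, 8, None], key=cv_runtime)
print("chosen max_depth:", best)
```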
      </sec>
      <sec id="sec-2-6">
        <title>2.6. Software release for ICMS 2020</title>
        <p>For ICMS 2020 [20] we presented a software pipeline that implements our work described in
the previous sub-sections. Given two datasets (training and testing) the pipeline automates:
generation of CAD runtimes for each set of polynomials under each admissible variable ordering;
using the runtimes from the training dataset to select the hyper-parameters with cross-validation
and tune the parameters of the ML models; and evaluating the performance of those classifiers
on the testing dataset. The pipeline could be used to pick the variable ordering for other
algorithms which take sets of polynomials as input, by replacing the calls to Maple's CAD
procedure with those of another implementation / algorithm. The code is freely available at:
https://doi.org/10.5281/zenodo.3731703.</p>
      </sec>
      <sec id="sec-2-7">
        <title>2.7. Success and limitations</title>
        <p>We experimented on the SMT-LIB benchmarks (http://smtlib.cs.uiowa.edu/), which are mostly
real-world applications, and extracted two datasets of 3- and 4-variable problems that could be
tackled by CAD. On our 3-variable dataset human-made heuristics achieved runtimes 27% above
the minimum and ML achieved runtimes 6% above. So here, the ML classifiers offer close to
optimal performance.</p>
        <p>However, on the 4-variable dataset ML achieved runtimes 67% above the minimum (compared to
98% above for human-made heuristics) and so there is room for improvement. Of course, with 4
variables this is a much harder classification problem (24 orderings rather than 6).</p>
        <p>To inspire further progress we next consider related work in the literature.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Inspiration from the Literature</title>
      <sec id="sec-3-1">
        <title>3.1. Other applications of ML for CAD</title>
        <p>The methodology of [24] was applied later to decide the order of sub-formulae solving in [26]
and whether to precondition CAD input in [23].</p>
        <p>
          Two more recent works with alternative methodologies are [10] to choose CAD variable
ordering and [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] to choose the ordering of constraints to process using the adapted CAD
algorithm of [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Both papers employed neural networks for the classification and obtained
the quantity of data these need through random polynomial generation. We note that care
needs to be taken as random polynomials are known to behave quite differently to those which
appear in the literature, e.g. [13], and so validation on non-random data should be encouraged.
Both papers also took steps to tackle the large number of classes: [10] used an iterative greedy
approach to select the ordering; while [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] derived an ordering on the constraints from multiple
binary classifications on pairs.
        </p>
        <p>Applications of ML elsewhere in Computer Algebra are fairly rare (we note the early example
in [29], which uses a Monte-Carlo tree search to find the representation of polynomials that is
most efficient to evaluate), but the following recent one may offer a blueprint for progress.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Reinforcement learning to optimise Buchberger’s algorithm</title>
        <p>
          Buchberger's Algorithm to produce a Gröbner Basis [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] must process a list of pairs of polynomials
(S-pairs), with that processing potentially adding further pairs to the list. Pairs may be processed
in any order, but some orders result in more pairs to study and thus more computational
resources. There exist well-established strategies to make this decision (see e.g. [21]).
        </p>
        <p>In [35] the authors described how an Agent could be trained to make this decision using
reinforcement learning: where instead of having a labelled dataset, an Agent makes a decision
and receives a reward that informs future decisions. In [35] the Agent chooses an S-pair and
receives a reward based on the number of polynomial additions required. (More precisely, the
reward is based on the number of polynomial additions required to complete a full run of
Buchberger's algorithm after selecting that S-pair and continuing with an existing heuristic.
The stated rationale for this is to reduce variability, but it seems equally compelling for allowing
the Agent to judge the effects of a choice not just on the next step but on the remainder of the
algorithm. It does, however, greatly increase the training cost.)</p>
        <p>The study ensures a constant size of polynomial by studying only binomials (so no term swell)
and working in a modular coefficient field (so no coefficient swell). They can then represent
polynomials to a neural network via consistently sized exponent vectors. Similar to our work
in Section 2, this allows the network to judge sparsity and degree but not the actual coefficients
(to avoid over-fitting).</p>
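        <p>That representation can be sketched in a few lines: each binomial in n variables becomes the concatenation of the exponent tuples of its two terms, giving a fixed-size network input regardless of the (deliberately omitted) coefficients. The encoding below is an illustrative reading of [35], not their exact code.</p>

```python
# Sketch of the input representation in [35]: a binomial (two terms) in n
# variables is encoded as the concatenation of its two exponent vectors,
# so all inputs to the neural network have the same size 2n.
def encode_binomial(mono1, mono2):
    """mono1, mono2: exponent tuples of the two terms, leading term first."""
    return list(mono1) + list(mono2)

# x^2*y - y^3 in variables (x, y): terms x^2*y and y^3
print(encode_binomial((2, 1), (0, 3)))  # [2, 1, 0, 3]
```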
        <p>The experiments in [35] are run on separate distributions of random polynomials based
on the number of variables, generators, and degree. The Agent significantly outperformed
the established strategies on such data, but the performance on real problems remains to be
observed.</p>
        <p>Most interestingly, some simple components of the Agent's strategy were observed, such as
a preference for pairs whose S-polynomials are monomials and a preference for pairs whose
S-polynomials are of low degree. Such strategies had never been studied before (perhaps because
S-polynomials are rarely examined before being reduced) and when used alone they outperform
the established heuristics.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. ML to predict algebraic computation directly</title>
        <p>There has been recent work on the use of ML to predict the outcome of algebraic computations
directly. Most notably, in [30] the authors predict the output of symbolic integration and
the symbolic solutions of first- and second-order ordinary differential equations using neural
networks (specifically seq2seq models, more typically used in natural language processing: for
example, they view integration as translating from integrands to integrals). Their experiments
outperform various CASs, in the sense that the model predicts correct outcomes for examples
where the CAS times out. However, we note that from the viewpoint of a CAS developer the
cases where the model predicts the wrong answer would be more critical than the timeouts
(the review [12] offers some other qualifications on the claim of superiority over CASs). We
also note the recent preprint [28] which repeats the study to make the argument that better
generalisability will be achieved with a learning model based on the relative positions of
mathematical symbols rather than their absolute positions. These are very different applications
of ML to the algorithm optimisation we are interested in, but lessons on how best to represent
symbolic data to ML tools may well be transferable.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. ML in satisfiability checking</title>
        <p>An early use of ML in Satisfiability Checking was the development of the portfolio solver
SATZilla [39]. There is rarely a single dominant SAT solver for all problems, so SATZilla uses
ML to predict which solver to use for a given instance. This inspired the recent MachSMT which
selects algorithms for SMT-solvers [36] using Principal Component Analysis for dimensionality
reduction.</p>
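        <p>The portfolio idea can be sketched simply: predict, from an instance's features, which solver in the portfolio will be fastest, based on observed runtimes of past instances. The nearest-neighbour predictor and all data below are illustrative stand-ins, not SATzilla's or MachSMT's actual models.</p>

```python
# Sketch of portfolio solver selection in the spirit of SATzilla [39]: choose
# the solver with the least average runtime over the k nearest past instances.
# Features and runtimes here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
train_feats = rng.random((50, 4))   # features of previously seen instances
train_times = rng.random((50, 3))   # runtime of each of 3 solvers on them

def select_solver(features, k=5):
    """Pick the solver with least average runtime over the k nearest neighbours."""
    dists = np.linalg.norm(train_feats - features, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(train_times[nearest].mean(axis=0).argmin())

print("solver for new instance:", select_solver(rng.random(4)))
```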
        <p>The core algorithm of Satisfiability Checking, CDCL [ 33], allows us to proceed through the
exponential search space in an intelligent manner: generalising from the conflicts uncovered
at a specific sample to rule out additional branches. However, even with this conflict learning,
there are decisions in the search that must be taken without guidance and poor luck can lead a
search to an unproductive area, motivating for example the use of restarts.</p>
        <p>Thus CDCL itself has potential to be guided by ML. For example:
• [38] makes the choice of the initial values allocated to variables at the start of the search,
using a regression model that predicted the satisfiability of formulae after fixing the values
of a certain fraction of the variables.</p>
        <p>• [32] uses machine learning to determine a policy for restarts in SAT-solvers: ML is used to
predict the quality of the next learnt clause based on previously learnt clauses; restarting
when the quality is predicted below a threshold.
• NeuroSAT [37] can predict unsatisfiable cores (subsets of the constraints that cannot be
satisfied together) to inform variable selection in the search.</p>
        <p>The most prominent use of ML in satisfiability is probably the following one.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Reinforcement learning for SAT-solver branching</title>
        <p>The MapleSAT solver introduced, and utilises, the learning rate (the propensity of variables to
generate learnt clauses) as a key metric for making decisions, and was the first to outperform
the previously dominant VSIDS heuristic.</p>
        <p>In [31] the authors view the question of branching in SAT-solving (selecting the next variable in
the search) as an optimisation problem to maximise this metric. In particular, they apply
reinforcement learning, where the learning rate informs the reward function. Variable selection
is modelled in the multi-armed bandit (MAB) framework and tackled using a well-known MAB
algorithm. This led MapleSAT to victory in the annual SAT competition
(https://baldur.iti.kit.edu/sat-competition-2016/index.php?cat=results).</p>
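        <p>The MAB framework can be illustrated with the classic UCB1 algorithm: repeatedly pick the arm (here standing in for a branching candidate) maximising its mean reward plus an exploration bonus. The Bernoulli rewards below are a toy model, not MapleSAT's actual learning-rate rewards.</p>

```python
# Toy illustration of the multi-armed bandit framework behind [31]: UCB1
# balances exploring candidates with exploiting those that earned high rewards.
import math
import random

random.seed(0)
true_rates = [0.2, 0.5, 0.8]   # arm 2 is the most rewarding "variable"
counts = [0] * 3               # plays per arm
sums = [0.0] * 3               # total reward per arm

for t in range(1, 2001):
    if 0 in counts:            # play each arm once first
        arm = counts.index(0)
    else:                      # then maximise mean reward + exploration bonus
        arm = max(range(3), key=lambda a: sums[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    counts[arm] += 1
    sums[arm] += reward

print("plays per arm:", counts)  # the best arm should dominate
```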
      </sec>
      <sec id="sec-3-6">
        <title>3.6. ML to predict mathematical structure</title>
        <p>Finally, we note also the use of supervised ML to predict mathematical properties elsewhere
in mathematics, where the primary motivation is the formation of new conjectures. We refer
to the survey [22] which includes examples in algebraic geometry, representation theory,
combinatorics and number theory, in which most applications are expressed as ML classification
problems.</p>
      </sec>
      <sec id="sec-3-7">
        <title>3.7. Summary</title>
        <p>There is huge potential to apply ML to algorithms of interest to SC-Square. The author’s own
experience in Section 2 shows the potential benefits.</p>
        <p>However, our experience using ML classification has clear limits. Such supervised learning
requires labelled datasets. Although in theory infinite symbolic data could be manufactured, in
practice it would be computationally infeasible to label all that data. A reinforcement learning
approach such as for the examples in Sections 3.2 and 3.5 looks far more promising.</p>
        <p>Still unclear is the optimal way to represent symbolic data for ML tools, and how best to
generate training data so as to maximise generalisation onto the problems of interest in the
real world. Such questions deserve more focussed study.</p>
        <p>The author's work surveyed in Sections 2.3−2.7 was joint with Dorian Florescu, and funded
by EPSRC Project EP/R019622/1: Embedding Machine Learning within Quantifier Elimination
Procedures. The author is now supported by EPSRC Project EP/T015748/1: Pushing Back the
Doubly-Exponential Wall of Cylindrical Algebraic Decomposition.</p>
        <p>We thank the reviewer of this paper, and the reviewers of the author's surveyed work, for
useful comments. We thank the DEWCAD Journal Club for interesting discussions on some of
the other papers.</p>
        <p>[9] Chen, C., Moreno Maza, M.: Quantifier elimination by cylindrical algebraic decomposition based on regular chains. Journal of Symbolic Computation 75, 74–93 (2016), https://doi.org/10.1016/j.jsc.2015.11.008
[10] Chen, C., Zhu, Z., Chi, H.: Variable ordering selection for cylindrical algebraic decomposition with artificial neural networks. In: Bigatti, A., Carette, J., Davenport, J.H., Joswig, M., de Wolf, T. (eds.) Mathematical Software – ICMS 2020. Lecture Notes in Computer Science, vol. 12097, pp. 281–291. Springer International Publishing (2020), https://doi.org/10.1007/978-3-030-52200-1_28
[11] Collins, G.E.: Quantifier elimination for real closed fields by cylindrical algebraic decomposition. In: Proceedings of the 2nd GI Conference on Automata Theory and Formal Languages. pp. 134–183. Springer-Verlag (reprinted in the collection [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]) (1975), https://doi.org/10.1007/3-540-07407-4_17
[12] Davis, E.: The use of deep learning for symbolic integration: A review of (Lample and Charton, 2019). Unpublished, available as ArXiv:2105.11479 (2019), https://arxiv.org/abs/2105.11479
[13] Dembo, A., Poonen, B., Shao, Q., Zeitouni, O.: Random polynomials having few or no real zeros. Journal of the American Mathematical Society 15(4), 857–892 (2002), https://www.ams.org/journals/jams/2002-15-04/S0894-0347-02-00386-7/S0894-0347-02-00386-7.pdf
[14] Dolzmann, A., Seidl, A., Sturm, T.: Efficient projection orders for CAD. In: Proceedings of the 2004 International Symposium on Symbolic and Algebraic Computation. pp. 111–118. ISSAC '04, ACM (2004), https://doi.org/10.1145/1005285.1005303
[15] England, M., Bradford, R., Davenport, J.H.: Cylindrical algebraic decomposition with equational constraints. Journal of Symbolic Computation 100, 38–71 (2020), https://doi.org/10.1016/j.jsc.2019.07.019
[16] England, M., Bradford, R., Davenport, J.H., Wilson, D.: Choosing a variable ordering for truth-table invariant cylindrical algebraic decomposition by incremental triangular decomposition. In: Hong, H., Yap, C. (eds.) Mathematical Software – ICMS 2014. Lecture Notes in Computer Science, vol. 8592, pp. 450–457. Springer Heidelberg (2014), http://dx.doi.org/10.1007/978-3-662-44199-2_68
[17] England, M., Florescu, D.: Comparing machine learning models to choose the variable ordering for cylindrical algebraic decomposition. In: Kaliszyk, C., Brady, E., Kohlhase, A., Sacerdoti, C.C. (eds.) Intelligent Computer Mathematics. Lecture Notes in Computer Science, vol. 11617, pp. 93–108. Springer International Publishing (2019), https://doi.org/10.1007/978-3-030-23250-4_7
[18] Florescu, D., England, M.: Algorithmically generating new algebraic features of polynomial systems for machine learning. In: Abbott, J., Griggio, A. (eds.) Proceedings of the 4th Workshop on Satisfiability Checking and Symbolic Computation (SC2 2019). No. 2460 in CEUR Workshop Proceedings (2019), http://ceur-ws.org/Vol-2460/
[19] Florescu, D., England, M.: Improved cross-validation for classifiers that make algorithmic choices to minimise runtime without compromising output correctness. In: Slamanig, D., Tsigaridas, E., Zafeirakopoulos, Z. (eds.) Mathematical Aspects of Computer and Information Sciences (Proc. MACIS '19). Lecture Notes in Computer Science, vol. 11989, pp. 341–356. Springer International Publishing (2020), https://doi.org/10.1007/978-3-030-43120-4_27
[20] Florescu, D., England, M.: A machine learning based software pipeline to pick the variable ordering for algorithms with polynomial inputs. In: Bigatti, A., Carette, J., Davenport, J.H., Joswig, M., de Wolf, T. (eds.) Mathematical Software – ICMS 2020. Lecture Notes in Computer Science, vol. 12097, pp. 302–322. Springer International Publishing (2020), https://doi.org/10.1007/978-3-030-52200-1_30
[21] Giovini, A., Mora, T., Niesi, G., Robbiano, L., Traverso, C.: One sugar cube, please; or selection strategies in the Buchberger algorithm. In: Proceedings of the 1991 International Symposium on Symbolic and Algebraic Computation. pp. 49–54. ISSAC '91, ACM (1991), https://doi.org/10.1145/120694.120701
[22] He, Y.H.: Machine-learning mathematical structures. International Journal of Data Science in the Mathematical Sciences 1(1), 1–25 (2022), https://doi.org/10.1142/S2810939222500010
[23] Huang, Z., England, M., Wilson, D., Bridge, J., Davenport, J.H., Paulson, L.: Using machine learning to improve cylindrical algebraic decomposition. Mathematics in Computer Science 13(4), 461–488 (2019), https://doi.org/10.1007/s11786-019-00394-8
[24] Huang, Z., England, M., Wilson, D., Davenport, J.H., Paulson, L., Bridge, J.: Applying machine learning to the problem of choosing a heuristic to select the variable ordering for cylindrical algebraic decomposition. In: Watt, S.M., Davenport, J.H., Sexton, A.P., Sojka, P., Urban, J. (eds.) Intelligent Computer Mathematics, Lecture Notes in Artificial Intelligence, vol. 8543, pp. 92–107. Springer International (2014), http://dx.doi.org/10.1007/978-3-319-08434-3_8
[25] Jovanovic, D., de Moura, L.: Solving non-linear arithmetic. In: Gramlich, B., Miller, D., Sattler, U. (eds.) Automated Reasoning: 6th International Joint Conference (IJCAR), Lecture Notes in Computer Science, vol. 7364, pp. 339–354. Springer (2012), https://doi.org/10.1007/978-3-642-31365-3_27
[26] Kobayashi, M., Iwane, H., Matsuzaki, T., Anai, H.: Efficient subformula orders for real quantifier elimination of non-prenex formulas. In: Kotsireas, S.I., Rump, M.S., Yap, K.C. (eds.) Mathematical Aspects of Computer and Information Sciences (MACIS '15). Lecture Notes in Computer Science, vol. 9582, pp. 236–251. Springer International Publishing (2016), https://doi.org/10.1007/978-3-319-32859-1_21
[27] Kremer, G., Ábrahám, E.: Fully incremental CAD. Journal of Symbolic Computation 100, 11–37 (2020), https://doi.org/10.1016/j.jsc.2019.07.018
[28] Kubota, H., Tokuoka, Y., Yamada, T.G., Funahashi, A.: Symbolic integration by integrating learning models with different strengths and weaknesses. IEEE Access 10, 47000–47010 (2022), https://doi.org/10.1109/ACCESS.2022.3171329
[29] Kuipers, J., Ueda, T., Vermaseren, J.A.M.: Code optimization in FORM. Computer Physics Communications 189, 1–19 (2015), https://doi.org/10.1016/j.cpc.2014.08.008
[30] Lample, G., Charton, F.: Deep learning for symbolic mathematics. In: Mohamed, S., White, M., Cho, K., Song, D. (eds.) Eighth International Conference on Learning Representations (ICLR 2020) (2020), https://iclr.cc/virtual_2020/poster_S1eZYeHFDS.html
[31] Liang, J.H., Ganesh, V., Poupart, P., Czarnecki, K.: Learning rate based branching heuristic for SAT solvers. In: Creignou, N., Le Berre, D. (eds.) Theory and Applications of Satisfiability Testing – SAT 2016, Lecture Notes in Computer Science, vol. 9710, pp. 123–140. Springer International Publishing (2016), https://doi.org/10.1007/978-3-319-40970-2_9
[32] Liang, J.H., Oh, C., Mathew, M., Thomas, C., Li, C., Ganesh, V.: Machine learning-based restart policy for CDCL SAT solvers. In: Beyersdorff, O., Wintersteiger, C.M. (eds.) Theory and Applications of Satisfiability Testing – SAT 2018, Lecture Notes in Computer Science, vol. 10929, pp. 94–110. Springer International Publishing (2018), https://doi.org/10.1007/978-3-319-94144-8_6
[33] Marques-Silva, J.P., Sakallah, K.A.: GRASP: A search algorithm for propositional satisfiability. IEEE Transactions on Computers 48(5), 506–521 (1999), https://doi.org/10.1109/12.769433
[34] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E.: Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825–2830 (2011), http://www.jmlr.org/papers/v12/pedregosa11a.html
[35] Peifer, D., Stillman, M., Halpern-Leistner, D.: Learning selection strategies in Buchberger's algorithm. In: Daumé III, H., Singh, A. (eds.) Proceedings of the 37th International Conference on Machine Learning (ICML 2020). Proceedings of Machine Learning Research, vol. 119, pp. 7575–7585. PMLR (2020), https://proceedings.mlr.press/v119/peifer20a.html
[36] Scott, J., Niemetz, A., Preiner, M., Nejati, S., Ganesh, V.: MachSMT: A machine learning-based algorithm selector for SMT solvers. In: Groote, J.F., Larsen, K.G. (eds.) Tools and Algorithms for the Construction and Analysis of Systems. Lecture Notes in Computer Science, vol. 12652, pp. 303–325. Springer International Publishing (2021), https://doi.org/10.1007/978-3-030-72013-1_16
[37] Selsam, D., Bjørner, N.: Guiding high-performance SAT solvers with unsat-core predictions. In: Janota, M., Lynce, I. (eds.) Theory and Applications of Satisfiability Testing – SAT 2019. Lecture Notes in Computer Science, vol. 11628, pp. 336–353. Springer International Publishing (2019), https://doi.org/10.1007/978-3-030-24258-9_24
[38] Wu, H.: Improving SAT-solving with machine learning. In: Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education. pp. 787–788. SIGCSE '17, ACM (2017), https://doi.org/10.1145/3017680.3022464
[39] Xu, L., Hutter, F., Hoos, H.H., Leyton-Brown, K.: SATzilla: Portfolio-based algorithm selection for SAT. Journal of Artificial Intelligence Research 32, 565–606 (2008), https://doi.org/10.1613/jair.2490</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Ábrahám</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Davenport</surname>
            ,
            <given-names>J.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>England</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kremer</surname>
          </string-name>
          , G.:
          <article-title>Deciding the consistency of nonlinear real arithmetic constraints with a conflict driven search using cylindrical algebraic coverings</article-title>
          .
          <source>Journal of Logical and Algebraic Methods in Programming</source>
          <volume>119</volume>
          ,
          <issue>100633</issue>
          (
          <year>2021</year>
          ), https://doi.org/10.1016/j.jlamp.2020.100633
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Bradford</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Davenport</surname>
            ,
            <given-names>J.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>England</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Wilson, D.:
          <article-title>Optimising problem formulations for cylindrical algebraic decomposition</article-title>
          . In: Carette,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Aspinall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Lange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Sojka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Windsteiger</surname>
          </string-name>
          , W. (eds.) Intelligent
          <source>Computer Mathematics, Lecture Notes in Computer Science</source>
          , vol.
          <volume>7961</volume>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>34</lpage>
          . Springer Berlin Heidelberg (
          <year>2013</year>
          ), http://dx.doi.org/10.1007/978-3-642-39320-4_2
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><surname>Brown</surname>, <given-names>C.W.</given-names></string-name>:
          <article-title>Companion to the tutorial: Cylindrical algebraic decomposition, presented at ISSAC '04</article-title>.
          URL http://www.usna.edu/Users/cs/wcbrown/research/ISSAC04/handout.pdf
          (<year>2004</year>)
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name><surname>Brown</surname>, <given-names>C.W.</given-names></string-name>:
          <article-title>Open non-uniform cylindrical algebraic decompositions</article-title>.
          In:
          <source>Proceedings of the 2015 International Symposium on Symbolic and Algebraic Computation</source>,
          pp. <fpage>85</fpage>-<lpage>92</lpage>.
          ISSAC '15, ACM
          (<year>2015</year>),
          https://doi.org/10.1145/2755996.2756654
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><surname>Brown</surname>, <given-names>C.W.</given-names></string-name>,
          <string-name><surname>Davenport</surname>, <given-names>J.H.</given-names></string-name>:
          <article-title>The complexity of quantifier elimination and cylindrical algebraic decomposition</article-title>.
          In:
          <source>Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation</source>,
          pp. <fpage>54</fpage>-<lpage>60</lpage>.
          ISSAC '07, ACM
          (<year>2007</year>),
          https://doi.org/10.1145/1277548.1277557
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><surname>Brown</surname>, <given-names>C.W.</given-names></string-name>,
          <string-name><surname>Daves</surname>, <given-names>G.C.</given-names></string-name>:
          <article-title>Applying machine learning to heuristics for real polynomial constraint solving</article-title>.
          In:
          <string-name><surname>Bigatti</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Carette</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Davenport</surname>, <given-names>J.H.</given-names></string-name>,
          <string-name><surname>Joswig</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>de Wolf</surname>, <given-names>T.</given-names></string-name>
          (eds.)
          <source>Mathematical Software - ICMS 2020. Lecture Notes in Computer Science</source>,
          vol. <volume>12097</volume>,
          pp. <fpage>292</fpage>-<lpage>301</lpage>.
          Springer International Publishing
          (<year>2020</year>),
          https://doi.org/10.1007/978-3-030-52200-1_29
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name><surname>Buchberger</surname>, <given-names>B.</given-names></string-name>:
          <article-title>Bruno Buchberger's PhD thesis 1965: An algorithm for finding the basis elements of the residue class ring of a zero dimensional polynomial ideal</article-title>.
          <source>Journal of Symbolic Computation</source>
          <volume>41</volume>(<issue>3-4</issue>),
          <fpage>475</fpage>-<lpage>511</lpage>
          (<year>2006</year>),
          https://doi.org/10.1016/j.jsc.2005.09.007
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name><surname>Caviness</surname>, <given-names>B.</given-names></string-name>,
          <string-name><surname>Johnson</surname>, <given-names>J.</given-names></string-name>:
          <source>Quantifier Elimination and Cylindrical Algebraic Decomposition. Texts &amp; Monographs in Symbolic Computation</source>,
          Springer-Verlag
          (<year>1998</year>),
          https://doi.org/10.1007/978-3-7091-9459-1
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name><surname>Chen</surname>, <given-names>C.</given-names></string-name>,
          <string-name><surname>Moreno Maza</surname>, <given-names>M.</given-names></string-name>:
          <article-title>Quantifier elimination by cylindrical algebraic decomposition</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>