<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enhancing State-of-the-Art Parallel SAT Solvers Through Optimized Sharing Policies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vincent Vallade</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Julien Sopena</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Souheib Baarir</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>EPITA, LRE</institution>
          ,
          <addr-line>F-94270, Le Kremlin-Bicêtre</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Sorbonne Université</institution>
          ,
          <addr-line>CNRS, LIP6, F-75005 Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In the context of parallel SAT solving, efficient information sharing plays a pivotal role in improving performance. Traditionally, this information comprises the clauses learned by the underlying sequential solvers of the parallel strategy. This paper investigates the integration of existing sharing strategies and proven heuristics into parkissat-rs, a parallel solver using the kissat sequential engine. The investigation focuses on evaluating the trade-offs between dedicating a core to solving the formula and optimizing the sharing mechanism. We evaluate different configurations of parkissat-rs, using additional threads for sharing, a dynamic threshold for clause filtering, a dedicated thread for strengthening shared clauses, and a mechanism to prevent the sharing of duplicate learned clauses. We also performed a scaling study on a subset of the parameters. These extensive experiments, conducted on the SAT 2022 main track benchmark, demonstrate the effectiveness of the derived solvers, especially on UNSAT instances.</p>
      </abstract>
      <kwd-group>
        <kwd>Parallel SAT Solving</kwd>
        <kwd>Sharing Strategy</kwd>
        <kwd>Clause Strengthening</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], utilizing kissat as its sequential engine [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], achieved outstanding results on both SAT and
UNSAT instances. However, the sharing mechanism configuration used in parkissat-rs
differs significantly from that of p-mcomsps. Therefore, we aim to investigate whether the sharing
policies of p-mcomsps can be incorporated into parkissat-rs [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] to further enhance its
performance.
      </p>
      <p>Over the years, p-mcomsps’s sharing mechanism has undergone incremental improvements
with multiple options. Some options involve simple decision heuristics, while others require
dedicated execution units (threads). Since we intend to use a solver with different sequential
algorithms and diversification mechanisms, we need to evaluate the trade-offs between dedicating
a core to solving the formula versus optimizing sharing.</p>
      <p>We evaluate various configurations of parkissat-rs with the following options: (i)
employing additional threads to handle sharing, thereby increasing the bandwidth of learned clauses;
(ii) utilizing a dynamic threshold for the Literal Block Distance measure; (iii) introducing a
dedicated thread for strengthening the exchanged clauses; (iv) implementing a mechanism to
prevent the sharing of duplicate learned clauses.</p>
      <p>We show that option (i) alone is sufficient to significantly improve performance on SAT
instances. However, for better results on UNSAT instances, option (ii) must be activated, and it
combines well with options (iii) and (iv), albeit without spectacular improvements.</p>
      <p>Furthermore, we present some insights into the results through a scaling experiment
performed on machines with 32, 48, and 64 cores. We demonstrate that the solving times of SAT
instances decrease as the number of cores increases, while the solving times of the UNSAT
instances do not decrease beyond 48 cores.</p>
      <p>The structure of the article is as follows: Section 2 describes the architecture and specificities
of the parallel solver parkissat-rs. Section 3 outlines the framework and methodology
employed in the study. The options integrated into the sharing policy are discussed in Section 4.
Section 5 discusses some scaling results and Section 6 presents the concluding remarks of the
paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Parkissat-rs</title>
      <p>In this section, we present parkissat-rs (see Figure 1), a state-of-the-art parallel (portfolio)
solver that utilizes shared memory-based communication among the underlying sequential
solvers. As the winner of the parallel SAT competition in 2022 by a significant margin, this
customized portfolio of kissat solvers has proven to be highly effective. The diversification
mechanism of parkissat-rs randomizes the branching order of variables for each solver
in the portfolio, supplemented with the options provided by kissat, the underlying solver.</p>
      <p>Additionally, several preprocessing and in-processing methods have been developed to guide
the solver at the beginning or during the resolution process.</p>
      <p>Furthermore, parkissat-rs implements a policy for sharing learned clauses. Initially, each
solver places all the clauses it has learned with a Literal Block Distance (LBD) value less than or
equal to 2 into an export buffer. Then, a dedicated sharing thread called Sharer retrieves the
clauses from all export buffers and distributes them to the import buffers of all solvers. The
sharing process is performed asynchronously, thereby avoiding contention on locks, since each
buffer is only accessed by two threads: the thread responsible for sharing the clauses and the
Sharer thread. The sharing thread performs this operation every 0.5 seconds, and to control
the bandwidth of shared data, it only shares 1500 literals per sharing round.</p>
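      <p>As an illustration, the budgeted sharing round described above can be sketched as follows. This is a simplified model, with Python deques and lists standing in for the solvers' buffers; all names are illustrative and do not come from the parkissat-rs code.</p>

```python
from collections import deque

LITERAL_BUDGET = 1500  # literals shared per round, as described above

def sharer_round(export_buffers, import_buffers):
    # One Sharer round: drain the export buffers and broadcast each
    # clause to every solver's import buffer, stopping once the
    # per-round literal budget is exhausted. The real implementation
    # runs this every 0.5 seconds on a dedicated thread.
    shared, shared_literals = [], 0
    for buf in export_buffers:              # one export buffer per producer
        while buf:
            clause = buf[0]
            if shared_literals + len(clause) > LITERAL_BUDGET:
                return shared               # budget exhausted for this round
            buf.popleft()
            shared_literals += len(clause)
            shared.append(clause)
            for imp in import_buffers:      # all solvers consume the clause
                imp.append(clause)
    return shared
```

      <p>Because each export buffer is written by one producer and read only by the Sharer, and each import buffer is written only by the Sharer, no global lock is needed in the actual architecture.</p>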
      <p>In Section 4, we will incrementally introduce various mechanisms to enhance the sharing
policy of the aforementioned architecture. We will analyze the impact of these mechanisms on
the performance of the platform.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Experimental Setup and Evaluation Methodology</title>
      <p>Here, we present the methodology employed to evaluate the potential combinations of options,
as well as provide a detailed overview of the experimental platform.</p>
      <p>Hardware The experiments were conducted on a cluster consisting of Intel Xeon Silver 4216
processors, equipped with 32 cores and 384 GB of RAM. The solvers were executed with a
timeout of 5000 seconds and a memory limit of 256 GB.</p>
      <p>The benchmark We executed our solver on the main track of the SAT 2022 competition, which
comprises a minimum of 171 SAT instances and 187 UNSAT instances, as per the competition
results. Consequently, we divided the results into two separate tables: one for SAT instances
and another for UNSAT instances. However, we excluded the 42 instances with UNKNOWN
results, both as reported by the competition and observed in our experiments, from the tables.
</p>
      <p>[Figure 1: the architecture of parkissat-rs, a portfolio of kissat sequential workers (SW).]</p>
      <p>Performance evaluation Our objective is to compare the performance of each solver with
the Virtual Best Solver (VBS), which is composed of the best results achieved by any solver for
each instance. We assess the solvers based on their execution time and the number of instances
solved. In each table, we present the performance of a solver alongside the number of cores
dedicated to the CDCL algorithm.</p>
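      <p>The VBS described above is a simple per-instance minimum over the solvers' runtimes. A minimal sketch (the data layout is illustrative; unsolved instances are counted at the 5000-second timeout):</p>

```python
TIMEOUT = 5000  # seconds; an unsolved instance is counted as TIMEOUT

def virtual_best_solver(runtimes):
    # runtimes: {solver_name: {instance: seconds}}. The VBS takes,
    # for each instance, the best time achieved by any solver.
    instances = set().union(*(r.keys() for r in runtimes.values()))
    return {inst: min(r.get(inst, TIMEOUT) for r in runtimes.values())
            for inst in instances}
```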
      <p>Since we aim to utilize all available physical cores on our test machines and certain options
require a dedicated thread, some configurations involve sacrificing a physical core used for
CDCL (as in the original version, parkissat-rs) in favor of the specific option.</p>
      <p>The tables include the following data:
• The "CDCL" column indicates the number of CDCL cores utilized in the portfolio.
• The "Total Runtime" represents the cumulative runtime of the solver.
• The "Instances Solved" denotes the number of instances successfully solved.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Improvements of the sharing policy</title>
      <p>In this section, we introduce various enhancements to the sharing mechanism of parkissat-rs
and examine their performances. Each improvement is individually studied, as well as in
combination with others, as they can have both quantitative and qualitative effects on clause
sharing. Additionally, these mechanisms come with associated resource costs, making it crucial
to investigate their potential side effects on each other.</p>
      <p>To streamline the presentation of the results, we have opted to incrementally incorporate
each studied mechanism. Towards the end of the section, we will present all the possible
combinations. The resulting solver is named P-[options] for clarity and reference.</p>
      <sec id="sec-4-1">
        <p>Note: a non-solved instance is given a runtime of 5000 seconds.</p>
        <sec id="sec-4-1-1">
          <title>4.1. XG: Increasing sharing throughput using multiple production groups</title>
          <p>The first option we introduce to the sharing policy is the concept of multiple production groups.
A production group comprises a set of solvers as producers, all solvers in the portfolio as
consumers, and a dedicated Sharer thread running on a specific physical core.</p>
          <p>The original implementation of parkissat-rs utilizes one production group (referred to
as P-1G in the tables). To enhance the throughput of shared clauses, we incorporate a second
production group: half of the solvers in the portfolio act as producers for one group, while the
other half act as producers for the other group. This updated configuration is referred to as
P-2G.</p>
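          <p>The producer split described above can be sketched as a round-robin partition of the portfolio (the actual assignment policy in parkissat-rs may differ; this only illustrates the shape of P-2G):</p>

```python
def production_groups(solver_ids, n_groups=2):
    # Round-robin partition of the portfolio: each solver exports its
    # clauses to exactly one group's Sharer, while every solver
    # consumes the clauses broadcast by all Sharers.
    return [solver_ids[i::n_groups] for i in range(n_groups)]
```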
          <p>While the proposed mechanism is fully customizable and allows for additional groups,
preliminary tests on our test machine have shown performance degradation when using more
than 2 groups. The increase in the number of physical cores dedicated to different threads
becomes too significant. Figure 2 shows the final architecture after incorporating all the options
(P-2G-horde-str-dup).</p>
          <p>Table 1 compares P-1G and P-2G. We observe that using two sharing groups yields improved
performance on SAT instances, with three additional instances solved. However, there is no
performance improvement for UNSAT instances. Although P-2G loses one instance, the runtime
is nearly equivalent, indicating that P-1G is not significantly faster.</p>
          <p>The second Sharer enables faster sharing of clauses, leading to a more rapid utilization of
the shared clauses. However, since the strategy only shares clauses with an LBD value less than
or equal to two, the impact of these shared clauses is limited.</p>
        </sec>
        <sec id="sec-4-1-2">
          <title>4.2. Horde: A Heuristic to Select Clauses to Share</title>
          <p>
            The -horde option enables the Sharer to dynamically adjust the LBD threshold for individual
producers based on an estimation of whether they are sending too few or too many clauses.
This mechanism allows for better control over the sharing of clauses. It is worth noting that
this sharing strategy is an improvement of the one developed in [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ].
          </p>
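          <p>The idea behind such a controller can be sketched as follows. This is only the spirit of -horde (and of the HordeSat-style filtering it improves on), not the exact parkissat-rs rule; the bounds and slack factor are illustrative assumptions.</p>

```python
LBD_MIN, LBD_MAX = 2, 8        # illustrative bounds, not the solver's actual ones

def adjust_lbd_threshold(threshold, produced_literals, budget, slack=0.8):
    # Per-producer controller: a producer that fills too little of its
    # share of the literal budget gets a looser LBD threshold, one that
    # overflows it gets a stricter threshold.
    if produced_literals < slack * budget and threshold < LBD_MAX:
        return threshold + 1   # sharing too little: accept higher-LBD clauses
    if produced_literals > budget and threshold > LBD_MIN:
        return threshold - 1   # sharing too much: be more selective
    return threshold
```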
          <p>Table 2 presents the performance results of using the -horde option with P-1G and P-2G.
Although this option appears to slightly degrade the performance for SAT instances, specifically
with P-1G-horde losing two instances compared to P-1G, a closer examination of the runtime
indicates that P-1G-horde overall achieved faster execution.</p>
          <p>On the other hand, the -horde option delivers significant performance gains for UNSAT
instances, with both solvers benefiting from up to five additional solved instances.</p>
          <p>The key insight from this experiment is that clauses with an LBD greater than two prove to
be highly valuable for solving UNSAT instances. Notably, the -horde option performs on par
with the VBS in solving UNSAT instances.</p>
        </sec>
        <sec id="sec-4-1-3">
          <title>4.3. STR: Asynchronous clause strengthening</title>
          <p>The -str option activates the reducer component, which implements an asynchronous clause
strengthening algorithm. The purpose of this option is to enhance the unit propagation (UP)
procedure by reducing the size of received learned clauses. The reducer operates within a
production group and attempts to minimize the size of the clauses received from the solvers
within the group. Successfully reduced clauses are then sent back to the solvers. Since the
reducer is computationally intensive, a dedicated core is allocated for its execution.</p>
          <p>[Tables 1 and 2 (flattened in extraction): CDCL core counts (28 to 31) and total runtimes (42H39 to 54H46) for the VBS, P-2G, P-2G-horde, P-1G, and P-1G-horde configurations.]</p>
          <p>
            The theoretical basis of this strengthening technique is presented in [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], and here we provide
the technical details. The reducer relies on a standard CDCL algorithm. It takes a clause as
input and outputs the same clause, potentially reduced by some or all of its literals. If the
reducer derives the empty clause during the process, it indicates that the formula is unsatisfiable
(UNSAT). The algorithm operates as follows:
1. Iteratively assign the false value to each variable in the clause until a conflict is reached
or all literals have been successfully assigned.
2. At each iteration, select a literal whose complement is not involved in the current
assignment. This ensures that the input clause is stripped of literals that are redundant based
on the rest of the clause.
3. Perform UP after selecting each literal. If no conflicts are found, add the literal to the
output clause and add its negation to the current assignment set.
4. When a conflict is reached, execute a sequence of backjumps, UP, and conflict analyses
until the conflict is resolved or the current assignment set is empty.
5. During this phase, the algorithm can learn new clauses. When it reaches an area without
conflicts while the assignment set is not empty, it returns a new clause.
          </p>
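          <p>Steps 1-3 above amount to a vivification-style pass: falsify the clause's literals one by one under unit propagation, drop any literal already falsified by propagation, and stop at the first conflict, at which point the remaining literals are redundant. A minimal sketch (a naive UP loop over DIMACS-style integer literals, omitting the conflict-analysis learning of steps 4-5; not the reducer's actual code):</p>

```python
def unit_propagate(clauses, assignment):
    # Naive unit propagation over an iterable of clauses (lists of
    # non-zero ints). Returns (conflict, extended assignment).
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                      # clause already satisfied
            pending = [l for l in clause if -l not in assignment]
            if not pending:
                return True, assignment       # every literal false: conflict
            if len(pending) == 1:
                assignment.add(pending[0])    # unit clause: propagate
                changed = True
    return False, assignment

def strengthen(clauses, clause):
    # Sketch of steps 1-3: assign the false value to each literal in
    # turn; a literal whose complement is already implied is redundant
    # (step 2), and a conflict makes every remaining literal redundant.
    kept, assignment = [], set()
    for lit in clause:
        if -lit in assignment:
            continue                          # redundant literal, dropped
        kept.append(lit)
        conflict, assignment = unit_propagate(clauses, assignment | {-lit})
        if conflict:
            break                             # stop at the first conflict
    return kept
```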
          <p>
            A previous evaluation demonstrated that a similar architecture reduced the size of one third of
received clauses by a quarter [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ].
          </p>
          <p>Table 3 presents the performance results of the -str option. We can observe that the
integration of this option yields consistent improvements for UNSAT instances across all the
previously defined configurations, either in terms of running time or the number of solved
instances. Notably, the P-2G-str configuration outperforms the P-2G configuration in terms
of the number of solved instances.</p>
          <p>However, for SAT instances, the impact of the -str option is generally negative, as three out of
four configurations experience performance degradation. The P-1G-horde-str configuration
appears to be an anomaly, which could be attributed to the inherent randomness affecting the
solver’s behavior.</p>
          <p>
            In summary, assigning a core that was originally dedicated to executing CDCL to the execution
of the reducer component proves to be highly beneficial for solving UNSAT instances but has an
adverse effect on SAT instances. These findings align with the results of experiments conducted
in [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], the original context in which the strengthening algorithm was developed.
          </p>
        </sec>
        <sec id="sec-4-1-4">
          <title>4.4. DUP: Preventing the Sharing of Duplicates</title>
          <p>In a parallel context, it is common to encounter many identical clauses that are learned by
different CDCL solvers. Sharing these duplicates among solvers can be detrimental to the
parallel solver’s performance.</p>
          <p>To address this issue, it is beneficial to develop a mechanism that detects and rejects the
sharing of duplicated clauses. However, such a mechanism comes with a cost that can potentially
impact the sharer’s performance. Therefore, it is crucial to ensure that the number of duplicate
clauses is significant enough to justify the trade-off.</p>
          <p>The study depicted in Figure 3 examines the percentage of duplicates originating from CDCL
solvers (Global duplicates) and from the reducer (after the strengthening phase). Each dot in
the plot represents the percentage for a specific instance.</p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <p>The benchmark used in the duplicates study is from the 2021 SAT competition.</p>
        <p>[Table 4 (flattened in extraction): results for the VBS and all combinations of P-1G and P-2G with the -horde, -str, and -dup options.]</p>
        <p>The study reveals that CDCL solvers generate between 0% and 10% duplicates. On the
other hand, the reducer produces a substantial number of duplicates across a larger number
of benchmark instances. This analysis motivates the introduction of a duplicate removal
mechanism.</p>
        <p>
          Implementing such a mechanism requires tracking the already shared clauses. However,
storing all clauses perfectly would be memory-intensive. Therefore, we chose to implement a
space-efficient probabilistic approach using Bloom filters [8]. The Sharer is enhanced with a
Bloom filter to prevent the sharing of duplicates within its production group. It is worth noting
that similar filters have been utilized in [
          <xref ref-type="bibr" rid="ref5">5, 9</xref>
          ]. In our configuration, this mechanism is activated
by using the -dup option.
        </p>
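        <p>A Bloom filter answers "was this clause already shared?" with no false negatives and a tunable false-positive rate. A minimal sketch of the idea; the sizing, hash function, and clause normalization below are illustrative assumptions, not the parameters used by parkissat-rs:</p>

```python
import hashlib

class ClauseBloomFilter:
    # Minimal Bloom filter for duplicate-clause detection. A false
    # positive only drops one clause from sharing; it never affects
    # the correctness of the solvers.
    def __init__(self, n_bits=2 ** 20, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits)     # one byte per bit, for simplicity

    def _positions(self, clause):
        # Hash a canonical (sorted) form so literal order does not matter.
        key = ",".join(map(str, sorted(clause))).encode()
        for i in range(self.n_hashes):
            digest = hashlib.blake2b(key, digest_size=8,
                                     salt=bytes([i])).digest()
            yield int.from_bytes(digest, "little") % self.n_bits

    def seen(self, clause):
        # True if the clause was (probably) already shared;
        # otherwise record it and report False.
        positions = list(self._positions(clause))
        duplicate = all(self.bits[p] for p in positions)
        for p in positions:
            self.bits[p] = 1
        return duplicate
```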
        <p>The results obtained using the -dup option are shown in Table 4. It can be observed that
the outcomes are somewhat mitigated. Our intuition suggests that although a Bloom filter
offers efficiency, the process of verifying duplicate clauses in each round of sharing can still
impede sharing. This trade-off does not appear to be worthwhile, especially in the context of
this specific benchmark with the chosen solver.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Scaling study</title>
      <p>Thus far, our research has focused on analyzing the impact of different mechanisms on a
configuration consisting of 32 physical cores, which reflects the characteristics of machines
used in SAT competitions over the past few years. However, the issue of scalability of these
algorithms has emerged as a significant concern.</p>
      <p>To address this concern, we conducted a new study using two additional architectures:
one with 48 cores and another with 64 cores. Within this study, we concentrated on
the best-performing combinations identified in the 32-core study: namely, 2G, 2G-horde,
2G-horde-str, and 1G-horde-dup. To assess their performance, we employed a benchmark
comprising 40 instances that were solved by at least one configuration utilizing the 32-core
architecture, with each instance having a maximum allowed solving time. The results of this
study are presented in Figure 4.</p>
      <p>[Figure 4: solving times of the 2G, 2G-horde, 2G-horde-str, and 1G-horde-dup strategies on SAT and UNSAT instances at 32, 48, and 64 cores.]</p>
      <p>Overall, our findings indicate that the relative performance of the different combinations,
with the exception of one case, remains consistent as the number of cores increases, regardless
of whether the problem is SAT or UNSAT. However, we observed an intriguing phenomenon:
beyond 48 cores, scalability notably diminishes for UNSAT problems, irrespective of the
algorithm employed. This can be attributed to the limitations of the sharing mechanisms, as the
additional threads do not effectively reduce the search space explored by other threads.
Conversely, SAT problems exhibit different behavior, as increasing the number of threads enhances
the probability of swiftly finding a solution.</p>
      <p>Furthermore, alongside these general results that support prior research, it is worth
highlighting the remarkable performance of 2G-horde-str. While it maintains its superiority for solving
UNSAT problems, it emerges as the most efficient algorithm for SAT problems in the 64-core
configuration. This achievement can be attributed to the substantial impact of strengthened
clauses, which enable the early pruning of irrelevant branches in the decision tree, thereby
accelerating convergence towards a solution.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper, we have combined established techniques with new algorithms to enhance the
sharing mechanism of the parallel solver parkissat-rs.</p>
      <p>Specifically, compared to the original implementation, we introduced production groups, the
horde sharing strategy, strengthening, and duplicate removal mechanisms. The complete solver
with all the options is depicted in Figure 2.</p>
      <p>We conducted experiments and compared the performance of each implemented add-on on
the benchmarks of the 2022 SAT competition. This allowed us to draw conclusions regarding
the individual and combined effects of each technique. For SAT instances, the most effective
implementation is P-2G. This implies that the original implementation is already efficient
and requires minimal changes. The division of solvers within production groups significantly
accelerates sharing.</p>
      <p>For UNSAT instances, all implementations utilizing the horde sharing strategy are highly
efficient. This mechanism complements strengthening and duplicate removal when used separately,
but combining all three does not yield improved results.</p>
      <p>These conclusions hold for multi-core machines with up to 100 cores. Further exploration of
these strategies in many-core machines with more than 100 cores would be fascinating. In such
scenarios, the cost of CPU-intensive mechanisms could be offset by utilizing a larger number of
solvers, achieving better resource utilization.</p>
      <p>Furthermore, a possible heuristic avenue, not explored in this paper, is the diversification
mechanism used by parallel SAT solvers to avoid redundant work between worker threads. We
are interested in directly modifying the internal structure of the sequential engine by shuffling
the literal order of its clause database. This modification will impact the unit propagation of the
solver and may yield different results.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vallade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Le</given-names>
            <surname>Frioux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Oanea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Baarir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sopena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kordon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nejati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ganesh</surname>
          </string-name>
          ,
          <article-title>New concurrent and distributed painless solvers: P-mcomsps, p-mcomsps-com,p-mcomsps-mpi, and p-mcomsps-com-mpi</article-title>
          ,
          <source>in: Proceedings of SAT Competition 2021: Solver and Benchmark Descriptions</source>
          , Department of Computer Science, University of Helsinki, Finland,
          <year>2021</year>
          , p.
          <fpage>40</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Le Frioux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Baarir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sopena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kordon</surname>
          </string-name>
          ,
          <article-title>Painless: a framework for parallel sat solving</article-title>
          ,
          <source>in: Proceedings of the 20th International Conference on Theory and Applications of Satisfiability Testing (SAT)</source>
          , Springer,
          <year>2017</year>
          , pp.
          <fpage>233</fpage>
          -
          <lpage>250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Cherif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Habet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Terrioux</surname>
          </string-name>
          ,
          <article-title>Combining VSIDS and CHB Using Restarts in SAT</article-title>
          ,
          <source>in: 27th International Conference on Principles and Practice of Constraint Programming (CP 2021)</source>
          ), LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Dagstuhl, Germany,
          <year>2021</year>
          , pp.
          <volume>20</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          :
          <fpage>19</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <article-title>Parkissat: Random shuffle based and pre-processing extended parallel solvers with clause sharing</article-title>
          ,
          <source>in: Proceedings of SAT Competition 2022: Solver and Benchmark Descriptions</source>
          , Department of Computer Science, University of Helsinki, Finland,
          <year>2022</year>
          , p.
          <fpage>51</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Balyo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sanders</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sinz</surname>
          </string-name>
          ,
          <article-title>Hordesat: A massively parallel portfolio sat solver</article-title>
          ,
          <source>in: Proceedings of the 18th International Conference on Theory and Applications of Satisfiability Testing (SAT)</source>
          , Springer,
          <year>2015</year>
          , pp.
          <fpage>156</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wieringa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Heljanko</surname>
          </string-name>
          ,
          <article-title>Concurrent clause strengthening</article-title>
          ,
          <source>in: Proceedings of the 16th International Conference on Theory and Applications of Satisfiability Testing (SAT)</source>
          , Springer,
          <year>2013</year>
          , pp.
          <fpage>116</fpage>
          -
          <lpage>132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vallade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Le</given-names>
            <surname>Frioux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Baarir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sopena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kordon</surname>
          </string-name>
          ,
          <article-title>On the usefulness of clause strengthening in parallel SAT solving</article-title>
          ,
          <source>in: R. Lee, S. Jha, A. Mavridou, D. Giannakopoulou (Eds.), NASA Formal Methods</source>
          , Springer International Publishing, Cham,
          <year>2020</year>
          , pp.
          <fpage>222</fpage>
          -
          <lpage>229</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>B. H.</given-names>
            <surname>Bloom</surname>
          </string-name>
          ,
          <article-title>Space/time trade-offs in hash coding with allowable errors</article-title>
          ,
          <source>Communications of the ACM 13</source>
          (
          <year>1970</year>
          ), pp.
          <fpage>422</fpage>
          -
          <lpage>426</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schreiber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sanders</surname>
          </string-name>
          ,
          <article-title>Scalable SAT solving in the cloud</article-title>
          ,
          <source>in: C.-M. Li, F. Manyà (Eds.), Theory and Applications of Satisfiability Testing - SAT 2021</source>
          , Springer International Publishing, Cham,
          <year>2021</year>
          , pp.
          <fpage>518</fpage>
          -
          <lpage>534</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>