<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Computing Uniform Interpolants for EUF via (conditional) DAG-based Compact Representations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Silvio Ghilardi</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Gianola</string-name>
          <email>gianola@inf.unibz.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Deepak Kapur</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CSE Department, University of California San Diego</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, University of New Mexico</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Dipartimento di Matematica, Università degli Studi di Milano</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Faculty of Computer Science, Free University of Bozen-Bolzano</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The concept of a uniform interpolant for a quantifier-free formula from a given formula with a list of symbols, while well-known in the logic literature, has been largely unknown to the formal methods and automated reasoning community. This concept is precisely defined here. Two algorithms for computing the uniform interpolant of a quantifier-free formula in EUF endowed with a list of symbols to be eliminated are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewriting techniques with model theory.</p>
      </abstract>
      <kwd-group>
        <kwd>Uniform Interpolation</kwd>
        <kwd>SMT</kwd>
        <kwd>Term rewriting</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The theory of equality over uninterpreted symbols, henceforth denoted by EUF, is one
of the simplest theories that have found numerous applications in computer science,
formal methods and logic. Starting with the works of Shostak [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] and Nelson and
Oppen [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] in the early eighties, some of the first algorithms were proposed in the context
of developing approaches for combining decision procedures for quantifier-free
theories including freely constructed data structures and linear arithmetic over the rationals.
EUF was first exploited for hardware verification of pipelined processors by Dill [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
and more widely subsequently in formal methods and verification using model
checking frameworks. With the popularity of SMT solvers, where EUF serves as a glue for
combining solvers for different theories, numerous new graph-based algorithms have
been proposed in the literature over the last two decades for checking unsatisfiability of
a conjunction of (dis)equalities of terms built using function symbols and constants.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], the use of interpolants for automatic invariant generation was proposed,
leading to a plethora of research activities to develop algorithms for generating
interpolants for specific theories as well as their combination. This new application is
different from the role of interpolants for analyzing proof theories of various logics, starting
with the pioneering work of [
        <xref ref-type="bibr" rid="ref11 ref16 ref25">11,16,25</xref>
        ] (for a recent survey in the SMT area, see [
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ]).
Approaches like [
        <xref ref-type="bibr" rid="ref16 ref22 ref25">22,16,25</xref>
        ], however, assume access to a proof of α → β for which
an interpolant is being generated. Given that there can in general be many interpolants,
including infinitely many for some theories, little is known about which kinds of
interpolants are effective for different applications, even though some research has been
reported on the strength and quality of interpolants.
(Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).)
      </p>
      <p>
        In this paper, a different approach is taken, motivated by the insight connecting
interpolating theories with those admitting quantifier elimination, as advocated in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
In particular, in the preliminaries the concept of a uniform interpolant (UI) defined by
a formula α, in the context of formal methods and verification, is proposed for EUF,
which is well-known not to admit quantifier elimination. A uniform interpolant for a
formula α is in particular, for any formula β, an ordinary interpolant [
        <xref ref-type="bibr" rid="ref11 ref21">11,21</xref>
        ] for the
pair (α, β) such that α → β (as well as a reverse interpolant [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] for an unsatisfiable
pair (α, γ)). A uniform interpolant can be defined for theories irrespective of whether
they admit quantifier elimination; for theories admitting quantifier elimination, a
uniform interpolant can be obtained via quantifier elimination: indeed, this shows that a
theory enjoying quantifier elimination admits uniform interpolants as well. Then, a UI
is shown to exist for EUF and to be unique. A related concept of a cover is proposed in
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] (see also [
        <xref ref-type="bibr" rid="ref7 ref8">7,8</xref>
        ]).
      </p>
      <p>
        In the current paper, two different algorithms for generating UIs from a formula in
EUF (with a list of symbols to be eliminated) are proposed, with different
characteristics. They share a common subpart based on concepts used in the ground congruence
closure procedure proposed in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], which flattens the input and generates a canonical rewrite
system on constants along with rules of the form f(c1, …, cn) → c, where f is an uninterpreted
symbol and the arguments c1, …, cn are canonical forms of constants. Further, eliminated
symbols are represented as a DAG to avoid any exponential blow-up.
      </p>
      <p>The first algorithm is non-deterministic where undecided equalities on constants
are hypothesized to be true or false, generating a branch in each case, and recursively
applying the algorithm. It could also be formulated as an algorithm similar in spirit
to the use of equality interpolants in the Nelson-Oppen combination framework,
where different partitions on constants are tried, with each leading to a branch in the
algorithm. New symbols are introduced along each branch to avoid exponential
blow-up. The second algorithm generalizes the concept of a DAG to a conditional DAG in
which subterms are replaced by new symbols under a conjunction of equality atoms,
resulting in its compact and efficient representation. A fully or partially expanded form
of a UI can be derived based on their use in applications. Because of their compact
representation, UIs can be kept of polynomial size for a large class of formulas.</p>
      <p>
        The former algorithm is tableaux-based and produces the output in disjunctive
normal form, whereas the second algorithm is based on manipulation of Horn clauses and
gives the output in (compressed) conjunctive normal form. We believe that the two
algorithms are complementary to each other, especially from the point of view of
applications. Model checkers typically synthesize safety invariants using conjunctions
of clauses, and in this sense they might benefit more from the second algorithm;
however, model checkers dually representing sets of backward reachable states as
disjunctions of cubes (i.e., conjunctions of literals) would better adopt the first algorithm.
Non-deterministic manipulations of cubes are also required to match certain PSPACE
lower bounds, as in the case of the SAS systems mentioned in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. On the other hand,
regarding the overall complexity, it seems to be easier to avoid exponential blow-ups in
concrete examples by adopting the second algorithm.
(The third author recently learned from the first author that the concept of a uniform interpolant has been used
extensively in logic for decades [
        <xref ref-type="bibr" rid="ref14 ref24">14,24</xref>
        ], to his surprise, since he had the erroneous impression that
he came up with the concept in 2012, which he presented in a series of talks [
        <xref ref-type="bibr" rid="ref18 ref19">18,19</xref>
        ].)
      </p>
      <p>
        The termination, correctness and completeness of both algorithms are proved
using results in model theory about model completions; this relies on a basic result
(Lemma 2 below) taken from [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Both our algorithms are simple, intuitive and easy to understand in contrast to other
algorithms in the literature. In fact, the algorithm from [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] requires the full saturation
of all the formulae deductively implied in a version of superposition calculus equipped
with ad hoc settings, whereas the main merit of our second algorithm is to show that
a very light form of completion is sufficient, thus simplifying the whole procedure and
yielding seemingly better complexity results. The algorithm from [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] requires some
bug fixes (as pointed out in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]) and the related completeness proof is still missing.
      </p>
      <p>
        The paper is structured as follows: in the next paragraph we discuss related
work on the use of UIs. In Section 2 we state the main problem, fix some notation, and
discuss DAG representations and congruence closure. In Sections 3 and 4, we respectively
give the two algorithms for computing uniform interpolants in EUF (correctness and
completeness of such algorithms are proved in Section 5). We conclude in Section 6.
Related work on the use of UIs. The use of uniform interpolants in model-checking
safety problems for infinite state systems was already mentioned in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and further
exploited in a recent research line on the verification of data-aware processes [
        <xref ref-type="bibr" rid="ref5 ref6 ref9">6,5,9</xref>
        ].
Model checkers need to explore the space of all reachable states of a system; a
precise exploration (either forward starting from a description of the initial states or
backward starting from a description of unsafe states) requires quantifier elimination. The
latter is not always available or might have prohibitive complexity; in addition, it is
usually preferable to make over-approximations of reachable states both to avoid
divergence and to speed up convergence. One well-established technique for
computing over-approximations consists in extracting interpolants from spurious traces, see
e.g. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. One possible advantage of uniform interpolants over ordinary interpolants is
that they do not introduce over-approximations and so abstraction/refinements cycles
are not needed in case they are employed (the precise reason for that goes through the
connection between uniform interpolants, model completeness and existentially closed
structures, see [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] for a full account). In this sense, computing uniform interpolants
has the same advantages and disadvantages as computing quantifier eliminations, with
two remarkable differences. The first difference is that uniform interpolants may be
available also in theories not admitting quantifier elimination (EUF being the typical
example); the second difference is that computing uniform interpolants may be tractable
when the language is suitably restricted, e.g. to unary function symbols (this was already
mentioned in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], see also Remark 3 below). (Although we feel that some improvement is possible, the termination argument in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] gives a
double exponential bound, whereas we have a simple exponential bound for both algorithms,
with good chances of keeping the output polynomial in many concrete cases for the second
algorithm.) Restriction to unary function symbols
is sufficient in database driven verification to encode primary and foreign keys [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. It
is also worth noticing that, precisely by using uniform interpolants for this restricted
language, in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] new decidability results have been achieved for interesting classes of
infinite state systems. Notably, such results are also operationally mirrored in the MCMT
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] implementation since version 2.8.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Preliminaries</title>
      <p>We adopt the usual first-order syntactic notions, including signature, term, atom,
(ground) formula; our signatures are always finite or countable and include equality.
Without loss of generality, only functional signatures, i.e. signatures whose only
predicate symbol is equality, are considered. A tuple ⟨x1, …, xn⟩ of variables is compactly
represented as x. The notation t(x), φ(x) means that the term t, resp. the formula φ, has free
variables included in the tuple x. This tuple is assumed to be formed by distinct
variables; thus we underline that, when we write e.g. φ(x, y), we mean that the tuples x, y
are made of distinct variables that are also disjoint from each other. A formula is said to
be universal (resp., existential) if it has the form ∀x φ(x) (resp., ∃x φ(x)), where φ is
quantifier-free. Formulae with no free variables are called sentences.</p>
      <p>From the semantic side, the standard notion of a Σ-structure M is used: this is a pair
formed of a set (the ‘support set’, indicated as |M|) and of an interpretation function.
The interpretation function maps n-ary function symbols to n-ary operations on |M|
(in particular, constant symbols are mapped to elements of |M|). A free-variables
assignment I on M extends the interpretation function by mapping also variables to
elements of |M|; the notion of truth of a formula in a Σ-structure under a free-variables
assignment I is the standard one.</p>
      <p>It may be necessary to expand a signature Σ with a fresh name for every a ∈ |M|:
such an expanded signature is called Σ|M| and M is, by abuse of notation, seen as a Σ|M|-structure
itself by interpreting the name of a ∈ |M| as a (the name of a is directly indicated as a
for simplicity).</p>
      <p>A Σ-theory T is a set of Σ-sentences; a model of T is a Σ-structure M where all
sentences in T are true. We use the standard notation T ⊨ φ to say that φ is true in all
models of T for every assignment to the variables occurring free in φ. We say that φ is
T-satisfiable iff there is a model M of T and an assignment to the variables occurring
free in φ making φ true in M.</p>
      <sec id="sec-2-1">
        <title>Uniform Interpolants</title>
        <p>Fix a theory T and an existential formula ∃e φ(e, z); call a residue of ∃e φ(e, z)
any quantifier-free formula θ(z, y) such that T ⊨ ∃e φ(e, z) → θ(z, y) (equivalently,
such that T ⊨ φ(e, z) → θ(z, y)). The set of residues of ∃e φ(e, z) is denoted by
Res(∃e φ(e, z)). A quantifier-free formula ψ(z) is said to be a T-uniform
interpolant (or, simply, a uniform interpolant, abbreviated UI) of ∃e φ(e, z) iff ψ(z) ∈
Res(∃e φ(e, z)) and ψ(z) implies (modulo T) all the formulae in Res(∃e φ(e, z)). It is
immediately seen that UIs are unique (modulo T-equivalence). A theory T has uniform
quantifier-free interpolation iff every existential formula ∃e φ(e, z) has a UI.
Example 1. Consider the existential formula ∃e (f(e, z1) = z2 ∧ f(e, z3) = z4): it can be
shown that its EUF-uniform interpolant is z1 = z3 → z2 = z4.</p>
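<p>The entailment direction of Example 1 can be checked mechanically. The following is a minimal Python sketch (illustrative only, and not either of the paper's algorithms): a naive congruence closure over the flat literals confirms that, once z1 = z3 is assumed, z2 = z4 follows in EUF.</p>
```python
# Naive congruence closure over flat EUF literals (illustrative sketch):
# checks that f(e, z1) = z2 and f(e, z3) = z4 together with z1 = z3
# force z2 = z4, i.e. the entailment direction of Example 1.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def congruence_closure(equations, extra=()):
    """equations: flat literals (fname, args, result); extra: constant equalities."""
    uf = UnionFind()
    for a, b in extra:
        uf.union(a, b)
    changed = True
    while changed:  # propagate congruence: equal arguments force equal results
        changed = False
        for f1, args1, r1 in equations:
            for f2, args2, r2 in equations:
                if (f1 == f2
                        and all(uf.find(x) == uf.find(y) for x, y in zip(args1, args2))
                        and uf.find(r1) != uf.find(r2)):
                    uf.union(r1, r2)
                    changed = True
    return uf

eqs = [("f", ("e", "z1"), "z2"), ("f", ("e", "z3"), "z4")]
uf = congruence_closure(eqs, extra=[("z1", "z3")])
print(uf.find("z2") == uf.find("z4"))  # True
```
<p>Without the extra assumption z1 = z3, the same closure leaves z2 and z4 in distinct classes, matching the conditional shape of the uniform interpolant z1 = z3 → z2 = z4.</p>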
        <p>
          Notably, if T has uniform quantifier-free interpolation, then it has ordinary
quantifier-free interpolation, in the sense that if we have T ⊨ φ(e, z) → φ′(z, y) (for quantifier-free
formulae φ, φ′), then there is a quantifier-free formula θ(z) such that T ⊨ φ(e, z) → θ(z)
and T ⊨ θ(z) → φ′(z, y). In fact, if T has uniform quantifier-free interpolation, then the
interpolant θ is independent of φ′ (the same θ(z) can be used as interpolant for all
entailments T ⊨ φ(e, z) → φ′(z, y), varying φ′). Uniform quantifier-free interpolation
has a direct connection to an important notion from classical model theory, namely
model completeness (see [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] for more information).
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Problem Statement</title>
        <p>In this paper the problem of computing UIs for the case in which T is the pure identity
theory in a functional signature Σ is considered; this theory is called EUF(Σ), or just
EUF in the SMT-LIB2 terminology. Two different algorithms are proposed for this task
(while proving correctness and completeness of such algorithms, it is simultaneously
shown that UIs exist in EUF). The first algorithm computes a UI in disjunctive normal
form, whereas the second algorithm supplies a UI in conjunctive normal form.
Both algorithms use a suitable DAG-compressed representation of formulae.</p>
        <p>The following notation is used throughout the paper. Since it is easily seen that
existential quantifiers commute with disjunctions, it is sufficient to compute UIs for
primitive formulae, i.e. for formulae of the kind ∃e φ(e, z), where φ is a constraint, i.e.
a conjunction of literals. We partition all the 0-ary symbols from the input, as well as the
symbols newly introduced, into disjoint sets. We use the following conventions:
- e = e0, …, eN (with N an integer) are the symbols to be eliminated, called variables;
- z = z0, …, zM (with M an integer) are the symbols not to be eliminated, called parameters;
- symbols a, b, … stand for both variables and parameters.</p>
        <p>
          In the following we will also use symbols y for indicating variables that changed their
status and do not need to be eliminated anymore: we use symbols a, b, … for them as
well. Variables e are usually skolemized during the manipulations of our algorithms and
proofs below, in the sense that they have to be considered as fresh individual constants.
Remark 1. UI computations eliminate symbols which are existentially quantified
variables (or skolemized constants). Elimination of function symbols can be reduced to
elimination of variables in the following way. Consider a formula ∃f φ(f, z), where φ
is quantifier-free. Successively abstracting out functional terms, we get that ∃f φ(f, z)
is equivalent to a formula of the kind ∃e ∃f (⋀i (f(ti) = ei) ∧ ψ), where the e are fresh
variables (with ei ∈ e), the ti are terms, f does not occur in ti, ei, ψ, and ψ is quantifier-free.
The latter is semantically equivalent to ∃e (⋀i≠j (ti = tj → ei = ej) ∧ ψ), where ti = tj
is the conjunction of the component-wise equalities of the tuples ti and tj.
(In some of the literature [
          <xref ref-type="bibr" rid="ref15 ref7">15,7</xref>
          ], uniform interpolants are called covers.)
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>Flat Literals, DAGs and Congruence Closure</title>
        <p>A flat literal is a literal of one of the following kinds:
f(a1, …, an) = b,    a1 = a2,    a1 ≠ a2    (1)
where a1, …, an and b are (not necessarily distinct) variables or constants. A formula
is flat iff all literals occurring in it are flat; flat terms are terms that may occur in a flat
literal (i.e. terms like those appearing in (1)).</p>
        <p>We call a DAG-definition (or simply a DAG) any formula δ(y, z) of the following
form (where y := y1, …, yn): ⋀i=1,…,n (yi = fi(y1, …, yi-1, z)). Thus, δ(y, z) provides an
explicit definition of the y in terms of the parameters z.</p>
        <p>Given a DAG δ, we can in fact associate to it the substitution σδ recursively defined
by the mapping (yi)σδ := fi((y1)σδ, …, (yi-1)σδ, z). DAGs are commonly used to
represent formulae and substitutions in compressed form: in fact, a formula like
∃y (δ(y, z) ∧ Φ(y, z))    (2)
is equivalent to Φ((y)σδ, z), and is called a DAG-representation. The formula
Φ((y)σδ, z) is said to be the unravelling of (2): notice that computing such an
unravelling in uncompressed form by explicitly performing substitutions causes an exponential
blow-up. This is why we shall systematically prefer DAG-representations (2) to their
uncompressed forms.</p>
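<p>The exponential gap between a DAG-representation and its unravelling can be seen in a small Python sketch (an illustration of the size argument only, not part of the algorithms): each yi is defined from the previous one, so the DAG grows linearly while the unravelled term doubles at every step.</p>
```python
# Size comparison between a DAG-definition and its unravelling (illustrative
# sketch of the blow-up argument; terms are ("f", args) tuples or constant strings).

def size(t):
    return 1 if isinstance(t, str) else 1 + sum(size(a) for a in t[1])

def unravel(delta, t):
    """Apply the substitution sigma_delta exhaustively, expanding each yi."""
    if isinstance(t, str):
        return unravel(delta, delta[t]) if t in delta else t
    f, args = t
    return (f, tuple(unravel(delta, a) for a in args))

# y1 = g(z, z), y2 = g(y1, y1), y3 = g(y2, y2): three definitions of constant
# size, but the unravelled term for y3 already has 15 symbols and doubles
# with every further definition.
delta = {"y1": ("g", ("z", "z")),
         "y2": ("g", ("y1", "y1")),
         "y3": ("g", ("y2", "y2"))}
print(size(unravel(delta, "y3")))  # 15
```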
        <p>
          As stated above, our main aim is to compute the UI of a primitive formula ∃e φ(e, z);
using trivial logical manipulations (that have just linear complexity costs), it can be
shown that, without loss of generality, the constraint φ(e, z) can be assumed to be flat.
To do so, it is sufficient to perform a preprocessing procedure applying well-known
Congruence Closure Transformations: the reader is referred to [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] for a full account.
        </p>
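<p>The flattening step can be sketched as follows in Python (a simplified illustration in the spirit of the congruence closure preprocessing of [17], not its exact implementation): nested subterms are abstracted by fresh variables until only flat literals of the kinds in (1) remain.</p>
```python
# Flattening sketch: nested subterms are named by fresh variables e1, e2, ...
# so that only flat literals of the form f(a1, ..., an) = b remain.

counter = 0

def fresh():
    global counter
    counter += 1
    return f"e{counter}"

def flatten(term, out):
    """term: constant string or (f, args) tuple; appends flat literals
    (f, names, b) to out and returns the constant naming the term."""
    if isinstance(term, str):
        return term
    f, args = term
    names = tuple(flatten(a, out) for a in args)
    b = fresh()
    out.append((f, names, b))
    return b

# flatten h(f(z1, e0)): the inner term f(z1, e0) gets a fresh name e1,
# the outer application gets e2
lits = []
top = flatten(("h", (("f", ("z1", "e0")),)), lits)
print(lits)  # [('f', ('z1', 'e0'), 'e1'), ('h', ('e1',), 'e2')]
```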
      </sec>
    </sec>
    <sec id="sec-3">
      <title>The Tableaux Algorithm</title>
      <p>The algorithm proposed in this section is tableaux-like. It manipulates formulae in the
following DAG-primitive format</p>
      <p>∃y (δ(y, z) ∧ Φ(y, z) ∧ ∃e Ψ(e, y, z))    (3)
where δ(y, z) is a DAG and Φ, Ψ are flat constraints (notice that the e do not occur in
Φ). We call a formula of this format a DAG-primitive formula. To make reading easier,
we shall omit in (3) the existential quantifiers, so that (3) will be written simply as
δ(y, z) ∧ Φ(y, z) ∧ Ψ(e, y, z).    (4)</p>
      <p>Initially the DAG δ and the constraint Φ are the empty conjunction. In the
DAG-primitive formula (4), variables z are called parameter variables, variables y are called
(explicitly) defined variables and variables e are called (truly) quantified variables.
Variables z are never modified; in contrast, during the execution of the algorithm it could
happen that some quantified variables disappear or become defined variables (in
the latter case they are renamed: a quantified variable ei becoming defined is renamed
as yj, for a fresh yj). Below, letters a, b, … range over e ∪ y ∪ z.</p>
      <p>Definition 1. A term t (resp. a literal L) is e-free when there is no occurrence of any of
the variables e in t (resp. in L). Two flat terms t, u of the kinds
t := f(a1, …, an)    u := f(b1, …, bn)    (5)
are said to be compatible iff for every i = 1, …, n, either ai is identical to bi or both
ai and bi are e-free. The difference set of two compatible terms as above is the set of
disequalities ai ≠ bi, where ai is not identical to bi.</p>
      <p>Our algorithm applies the transformations below (except the last one) in a “don’t care”
non-deterministic way. The last transformation has lower priority and splits the
execution of the algorithm into several branches: each branch will produce a different disjunct
in the output formula. Each state of the algorithm is a DAG-primitive formula like (4).
We now provide the rules that constitute our ‘tableaux-like’ algorithm.</p>
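<p>Definition 1 can be encoded directly in Python (an illustrative sketch, assuming flat terms are (f, args) tuples and that the variables to be eliminated are exactly the names starting with 'e'):</p>
```python
# Sketch of Definition 1: compatibility and difference set of two flat terms.
# Assumption (for illustration): e-variables are exactly names starting with "e".

def e_free(a):
    return not a.startswith("e")

def compatible(t, u):
    (f, args), (g, brgs) = t, u
    return f == g and all(a == b or (e_free(a) and e_free(b))
                          for a, b in zip(args, brgs))

def difference_set(t, u):
    """Disequalities ai != bi for the argument positions where t, u differ."""
    return {(a, b) for a, b in zip(t[1], u[1]) if a != b}

t = ("f", ("e0", "z1"))
u = ("f", ("e0", "z2"))
print(compatible(t, u), difference_set(t, u))  # True {('z1', 'z2')}
```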
      <p>(1) Simplification Rules:
(1.0) if an atom like t = t belongs to Ψ, just remove it; if a literal like t ≠ t occurs
somewhere, delete Ψ, replace Φ with ⊥ and stop;
(1.i) if t is not a variable and Ψ contains both t = a and t = b, remove the latter
and replace it with b = a;
(1.ii) if Ψ contains ei = ej with i &gt; j, remove it and replace ei by ej everywhere.</p>
      <p>(2) DAG Update Rule: if Ψ contains ei = t(y, z), remove it, rename ei everywhere as
yj (for a fresh yj) and add yj = t(y, z) to δ(y, z). More formally:
δ(y, z) ∧ Φ(y, z) ∧ Ψ(e, ei, y, z) ∧ ei = t(y, z)
⇓
δ(y, z) ∧ yj = t(y, z) ∧ Φ(y, z) ∧ Ψ(e, yj, y, z)</p>
      <p>(3) e-Free Literal Rule: if Ψ contains an e-free literal L(y, z), move it to Φ(y, z). More
formally:
δ(y, z) ∧ Φ(y, z) ∧ Ψ(e, y, z) ∧ L(y, z)
⇓
δ(y, z) ∧ Φ(y, z) ∧ L(y, z) ∧ Ψ(e, y, z)</p>
      <p>(4) Splitting Rule: if Ψ contains a pair of atoms t = a and u = b, where t and u are
compatible flat terms like in (5), and no disequality from the difference set of t, u
belongs to Φ, then non-deterministically apply one of the following alternatives:
(4.0) remove from Ψ the atom f(b1, …, bn) = b, add to Ψ the atom b = a and add
to Φ all equalities ai = bi such that ai ≠ bi is in the difference set of t, u;
(4.1) add to Φ one of the disequalities from the difference set of t, u (notice that
the difference set cannot be empty, otherwise Rule (1.i) applies).</p>
      <p>When no more rules are applicable, delete Ψ(e, y, z) from the resulting formula
δ(y, z) ∧ Φ(y, z) ∧ Ψ(e, y, z)
so as to obtain, for every branch, an output formula in DAG-representation of the kind
∃y (δ(y, z) ∧ Φ(y, z)).</p>
      <p>The following proposition states that, by applying the previous rules, termination is
always guaranteed.</p>
      <p>Proposition 1. The non-deterministic procedure presented above always terminates.
Proof. It is sufficient to show that every branch of the algorithm must terminate. In
order to prove that, first observe that the total number of variables involved never
increases, and it decreases if (1.ii) is applied (it might decrease also by the effect of
(1.0)). Whenever such a number does not decrease, there is a bound on the number
of disequalities that can occur in Ψ, Φ. Now, transformation (4.1) decreases the number
of disequalities that are still missing; the other transformations do not increase this
number. Finally, all transformations except (4.1) reduce the length of Ψ.</p>
      <p>Remark 2. Notice that if no transformation applies to (3), the set Ψ can only contain
disequalities of the kind ei ≠ a, together with equalities of the kind f(a1, …, an) = a.
However, when it contains f(a1, …, an) = a, one of the ai must belong to e (otherwise
(2) or (3) applies). Moreover, if f(a1, …, an) = a and f(b1, …, bn) = b are both in Ψ,
then either they are not compatible or ai ≠ bi belongs to Φ for some i and for some
variables ai, bi not in e (otherwise (4) or (1.i) applies).</p>
      <p>
        Remark 3. The complexity of the above algorithm is exponential; however, the
complexity of producing a single branch is quadratic. Notice that if function symbols are
all unary, there is no need to apply Rule (4), hence for this restricted case computing UIs
is a tractable problem. The case of unary functions has relevant applications in database-driven
verification [
        <xref ref-type="bibr" rid="ref5 ref6 ref9">9,6,5</xref>
        ] (where unary function symbols are used to encode primary
and foreign keys).
      </p>
      <p>Example 2. Let us compute the UI of the formula ∃e0 (g(z4, e0) = z0 ∧ f(z2, e0) =
g(z3, e0) ∧ h(f(z1, e0)) = z0). Flattening gives the set of literals
g(z4, e0) = z0 ∧ e1 = f(z2, e0) ∧ e1 = g(z3, e0) ∧ e2 = f(z1, e0) ∧ h(e2) = z0    (6)
where the newly introduced variables e1, e2 need to be eliminated too. Applying (4.0)
removes g(z3, e0) = e1 and introduces the new equalities z3 = z4, e1 = z0. This causes
e1 to be renamed as y1 by (2). Applying (4.0) again removes f(z1, e0) = e2 and adds the
equalities z1 = z2, e2 = y1; moreover, e2 is renamed as y2. To the literal h(y2) = z0 we
can apply (3). The branch terminates with y1 = z0 ∧ y2 = y1 ∧ z1 = z2 ∧ z3 = z4 ∧ h(y2) =
z0 ∧ f(z2, e0) = y1 ∧ g(z4, e0) = z0. This produces z1 = z2 ∧ z3 = z4 ∧ h(z0) = z0 as a
first disjunct of the uniform interpolant. The other branches produce z1 = z2 ∧ z3 ≠ z4,
z1 ≠ z2 ∧ z3 = z4 and z1 ≠ z2 ∧ z3 ≠ z4 as further disjuncts, so that the UI turns out to be
equivalent (by trivial logical manipulations) to z1 = z2 ∧ z3 = z4 → h(z0) = z0.</p>
    </sec>
    <sec id="sec-4">
      <title>The Conditional Algorithm</title>
      <p>This section discusses a new algorithm with the objective of generating a compact
representation of the UI in EUF: this representation avoids splitting and is based on
conditions in Horn clauses generated from literals whose left-hand sides have the same function
symbol. A by-product of this approach is that the size of the output UI can often be kept
polynomial. Further, this algorithm generates the UI of ∃e φ(e, z) (where
φ(e, z) is a conjunction of literals and e = e0, …, eN, z = z0, …, zM, as usual) in
conjunctive normal form, as a conjunction of Horn clauses (we recall that a Horn clause is
a disjunction of literals containing at most one positive literal). Toward this goal, a new
data structure, the conditional DAG, a generalization of a DAG, is introduced so as to
maximize sharing of sub-formulae.</p>
      <p>Using the core preprocessing procedure explained in Subsection 2.3, it is assumed
that φ is the conjunction ⋀S1, where S1 is a set of flat literals containing only literals of
the following two kinds:
f(a1, …, ah) = a    (7)
a ≠ b    (8)
(recall that we use letters a, b, … for elements of e ∪ z). In addition, we can assume that
the variables in e must occur in (8) and in the left-hand sides of (7). We do not include equalities
like a = b because they can be eliminated by replacement.</p>
      <sec id="sec-4-1">
        <title>The Algorithm</title>
        <p>The algorithm requires two steps in order to get a set of clauses representing the output
in a suitably compressed format.</p>
        <p>Step 1. Out of every pair of literals f(a1, …, ah) = a and f(a1′, …, ah′) = a′ of the
kind (7) (where a is different from a′), we produce the Horn clause
a1 = a1′ ∧ … ∧ ah = ah′ → a = a′    (9)
which can be further simplified by deleting identities in the antecedent. Let us call S2
the set of clauses obtained from S1 by adding these new Horn clauses.</p>
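<p>Step 1 can be sketched in a few lines of Python (an illustrative encoding, not the paper's implementation), with flat literals of kind (7) represented as (f, args, result) triples:</p>
```python
# Sketch of Step 1: every pair of flat literals with the same function symbol
# and distinct right-hand sides yields the Horn clause "equal arguments imply
# equal results" of kind (9), with trivial identities dropped from the antecedent.

from itertools import combinations

def step1(literals):
    """literals: flat equalities (f, args, result) of kind (7)."""
    horn = []
    for (f, args, a), (g, brgs, b) in combinations(literals, 2):
        if f == g and a != b:
            antecedent = tuple((x, y) for x, y in zip(args, brgs) if x != y)
            horn.append((antecedent, (a, b)))  # read: antecedent -> a = b
    return horn

# the f1-pair of Example 3 yields the clause z1 = z2 -> e1 = z3
s1 = [("f1", ("e0", "z1"), "e1"), ("f1", ("e0", "z2"), "z3")]
print(step1(s1))  # [((('z1', 'z2'),), ('e1', 'z3'))]
```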
        <p>Step 2. We saturate S2 with respect to the following rewriting rule: from a clause
Γ → ej = ei (with j &gt; i) and a clause C containing an occurrence of ej at position p,
derive the clause Γ → C[ei]p,
where C[ei]p denotes the result of replacing ej by ei at position p of the clause C,
and Γ → C[ei]p is the clause obtained by merging Γ with the antecedent
of the clause C[ei]p.</p>
        <p>Notice that we apply the rewriting rule only to conditional equalities of the kind
Γ → ej = ei: this is because clauses like Γ → ej = zi are considered ‘conditional
definitions’ (and clauses like Γ → zj = zi ‘conditional facts’).</p>
        <p>We let S3 be the set of clauses obtained from S2 by saturating it with respect to the
above rewriting rule, removing from antecedents trivial literals of the kind a = a,
and removing subsumed clauses.</p>
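        <p>A minimal Python sketch of the Step 2 rewriting (the clause encoding is ours; for simplicity every occurrence of ej is replaced at once, rather than one position p at a time):</p>

```python
# Sketch of the Step 2 rewriting rule (illustrative encoding): a clause is a
# pair (antecedent, head), where the antecedent is a tuple of equality pairs
# and the head is an equality pair; terms are variables (strings) or
# applications (nested tuples).
def apply_rule(cond_eq, clause):
    """cond_eq encodes  G -> e_j = e_i  (with j > i); returns the clause
    obtained by rewriting e_j to e_i in `clause` and merging G into its
    antecedent, or None if e_j does not occur in `clause`."""
    G, (ej, ei) = cond_eq

    def subst(t):  # replace ej by ei everywhere in a term/atom/tuple
        if isinstance(t, tuple):
            return tuple(subst(x) for x in t)
        return ei if t == ej else t

    ante, head = clause
    rewritten = (subst(ante), subst(head))
    if rewritten == clause:
        return None
    return (tuple(sorted(set(G + rewritten[0]))), rewritten[1])
```

        <p>For instance, applying the conditional equality z1 = z2 → e2 = e1 to a clause with head f(z1, e0) = e2 (assuming such a literal among the input) yields z1 = z2 → f(z1, e0) = e1, as in Example 6 below.</p>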
        <sec id="sec-4-1-1">
          <title>Example 3. Let S1 be the set of the following literals</title>
          <p>f1(e0, z1) = e1, f1(e0, z2) = z3, f2(e0, z4) = e2,
f2(e0, z5) = z6, g1(e0, e1) = e2, g1(e0, z′1) = z′2,
g2(e0, e2) = e1, g2(e0, z″1) = z″2, h(e1, e2) = z0</p>
        </sec>
        <sec id="sec-4-1-2">
          <title>Step 1 produces the following set S2 of Horn clauses</title>
          <p>z1 = z2 → e1 = z3, z4 = z5 → e2 = z6,
e1 = z′1 → e2 = z′2, e2 = z″1 → e1 = z″2.
Since there are no Horn clauses whose consequent is an equality of the kind ei = ej,
Step 2 does not produce further clauses and we have S3 = S2.
In order to be able to extract the output UI in an uncompressed format out of the above
set of clauses S3, we must identify all the ‘implicit conditional definitions’ it contains.</p>
          <p>Let w be an ordered subset of e = {e1, …, eN}: that is, in order to specify w we
must take a subset of e and an ordering of this subset. Intuitively, these w will play
the role of placeholders inside a conditional definition.</p>
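          <p>The candidate w's can be enumerated directly (a small illustrative Python sketch; for an e of size N there are ∑ over r = 0, …, N of N!/(N−r)! ordered subsets):</p>

```python
# Enumerate all ordered subsets w of the tuple of variables e,
# i.e. every subset of e together with an ordering of it.
from itertools import permutations

def ordered_subsets(e_vars):
    out = [()]  # the empty w is allowed
    for r in range(1, len(e_vars) + 1):
        out.extend(permutations(e_vars, r))
    return out

# for e = (e1, e2): (), (e1,), (e2,), (e1, e2), (e2, e1)
ws = ordered_subsets(("e1", "e2"))
```

          <p>Both orderings of the same subset count, since they give different conditional DAGs (cf. d1 and d2 in Example 4 below).</p>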
          <p>If we let w be w1, …, ws (where, say, wi is some eki with ki ∈ {1, …, N}), we let
Li be the language restricted to z and w1, …, wi (for i ≤ s): in other words, an Li-term
or an Li-clause may contain only terms built up from z, w1, …, wi by applying
function symbols to them. In particular, Ls (also called Lw) is the language restricted to z ∪ w.
We let L0 be the language restricted to z.</p>
          <p>Given a set S of clauses and w as above, a w-conditional DAG d (or simply a
conditional DAG d) built out of S is a set of Horn clauses from S of the form

Γ1 → w1 = t1, …, Γs → ws = ts  (10)

where Γi is a finite tuple of L_{i−1}-atoms and ti is an L_{i−1}-term. Given a w-conditional DAG
d, we can define the formulae φ_d^i (for i = 1, …, s + 1) as follows:
- φ_d^{s+1} is the conjunction of all Lw-clauses belonging to S;
- for i ≤ s, the formula φ_d^i is Γi → ∀wi (wi = ti → φ_d^{i+1}).</p>
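          <p>The quantifier-free reduction claimed below is just an iterated application of the standard one-point rule (a sketch, in the notation above; it applies because wi occurs neither in Γi nor in ti):</p>

```latex
% One-point rule (w_i does not occur in \Gamma_i or t_i):
\Gamma_i \rightarrow \forall w_i\,\bigl(w_i = t_i \rightarrow \phi_d^{\,i+1}\bigr)
\;\equiv\;
\Gamma_i \rightarrow \phi_d^{\,i+1}[t_i/w_i]
```

          <p>Eliminating ws, …, w1 in this way yields a quantifier-free L0 formula; each substitution may duplicate ti, which is the source of the exponential blow-up mentioned in the text.</p>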
          <p>It can be seen that fdi is equivalent to a quantifier-free Li 1 formula,8 in particular f 1
d
(abbreviated as fd ) is equivalent to an L0-quantifier-free formula. The explicit
computation of such quantifier-free formulae may however produce an exponential blow-up.
Example 4. Let us analyze the conditional DAG d that can be extracted out of the set
S3 of the Horn clauses mentioned in Example 3 (we disregard those d such that fd is
the empty conjunction &gt;). We can get not logically equivalent formulae for fd1 and fd2
8 It can be shown that such a formula can be turned, again up to equivalence, into a conjunction
of Horn clauses.
considering d1 with w1 = e1; e2 and conditional definitions z1 = z2 ! e1 = z3; e1 =
z01 ! e2 = z02 or d2 with w2 = e2; e1 and conditional definitions z4 = z5 ! e2 = z6; e2 =
z010 ! e1 = z020 In fact, fd1 is logically equivalent to
(11)
(12)
(13)
whereas fd2 is logically equivalent to
where we used the notation V S3 0[z3=e1; z02=e2] to mean the result of the substitution of
e1 with z3 and of e2 with z02 in the conjunction of S3-clauses not involving e0 (a similar
notation is used for S3 0[z6=e2; z020=e1]) . A third possibility is to use the conditional
definitions z1 = z2 ! e1 = z3 and z4 = z5 ! e2 = z6 with (equivalently) either w1 or w2
resulting in a conditional dag d3 with fd3 logically equivalent to</p>
          <p>
            The next lemma (proved in [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ]) shows the relevant property of φ_d:
Lemma 1. For every set of clauses S and for every w-conditional DAG d built out of S,
the formula ⋀S → φ_d is logically valid.
          </p>
      <p>Notice that it is not true that the conjunction of all possible φ_d (varying d and w) implies
⋀S: in fact, such a conjunction can be empty, for instance when S is just {e1 = e2}.</p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>Extraction of UIs</title>
        <p>We shall prove below that in order to get a UI of ∃e φ(e, z), one can take the conjunction
of all possible φ_d, varying d among the conditional DAGs that can be built out of the
set of clauses S3 from Step 2 of the above algorithm.</p>
        <p>Example 5. If φ is the conjunction of the literals of Example 3, then the conjunction of
(11), (12) and (13) is a UI of ∃e φ; in fact, no further non-trivial conditional DAG d can
be extracted (if we take w = e1 or w = e2 or w = ∅ to extract d, then it turns out that φ_d
is the empty conjunction ⊤).</p>
        <p>Example 6. Let us turn to the literals (6) of Example 2. Step 1 produces out of them the
conditional clauses

z3 = z4 → e1 = z0,  z1 = z2 → e2 = e1.  (14)

Step 2 produces by rewriting the further clauses z1 = z2 → f(z1, e0) = e1 and z1 =
z2 → h(e1) = z0. We can extract two conditional DAGs d (using both the conditional
definitions (14), or just the first one); in both cases φ_d is z1 = z2 ∧ z3 = z4 → h(z0) = z0,
which is the UI.</p>
        <p>
          As should be evident from the two examples above, the conditional-DAG
representation of the output considerably reduces computational complexity in many cases;
this is a clear advantage of the present algorithm over the algorithm from Section 3 and
over other approaches like, e.g., [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Still, the next example shows that in some cases the
overall complexity remains exponential.
        </p>
        <p>Example 7. Let e be e0, …, eN and let z be {z0, z′0} ∪ {z_{ij}, z′_{ij} | 1 ≤ i &lt; j ≤ N}. Let
φ(e, z) be the conjunction of the identities f(e0, e1) = z0, f(e0, eN) = z′0 and the set of
identities h_{ij}(e0, z_{ij}) = e_i, h_{ij}(e0, z′_{ij}) = e_j, varying i, j such that 1 ≤ i &lt; j ≤ N. After
applying Step 1 of the algorithm presented in Subsection 4.1, we get the Horn clauses
z_{ij} = z′_{ij} → e_i = e_j, as well as the clause e1 = eN → z0 = z′0. If we now apply Step 2, we
can never produce a conditional clause of the kind Γ → e_i = t with t being e-free
(because we can only rewrite some e_i into some e_j). Thus no sequence of clauses like (10)
can be extracted from S3: notice in fact that the term t1 from such a sequence must not
contain the e. In other words, the only w-conditional DAG d that can be extracted is
based on the empty w ⊆ e and is empty itself. However, such a d produces a formula φ_d
that is quite big: it is the conjunction of the clauses from S3 in which the e do not occur
(S3 in fact contains Γ → z0 = z′0 for exponentially many e-free Γ's).</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Correctness and Completeness Proofs</title>
      <p>In this section we prove correctness and completeness of our two algorithms. To this
aim, we need some preliminaries, both from model theory and from term rewriting.</p>
      <p>
        For model theory, we refer to [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. We just recall a few definitions. A Σ-embedding
(or, simply, an embedding) between two Σ-structures M and N is a map μ : |M| →
|N| between the support sets |M| of M and |N| of N satisfying the condition (M ⊨
φ ⇒ N ⊨ φ) for all Σ^{|M|}-literals φ (here M is regarded as a Σ^{|M|}-structure by
interpreting each additional constant a ∈ |M| as itself, and N is regarded as a Σ^{|M|}-structure
by interpreting each additional constant a ∈ |M| as μ(a)). If μ : M → N
is an embedding which is just the identity inclusion |M| ⊆ |N|, we say that M is a
substructure of N or that N is an extension of M.
      </p>
      <p>
        Extensions and UIs are related to each other by the following result, which we take from [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]:
Lemma 2 (Cover-by-Extensions). A formula ψ(y) is a UI in T of ∃e φ(e, y) iff it
satisfies the following two conditions:
(i) T ⊨ ∀y (∃e φ(e, y) → ψ(y));
(ii) for every model M of T and for every tuple of elements a from the support of M such
that M ⊨ ψ(a), it is possible to find another model N of T such that M embeds
into N and N ⊨ ∃e φ(e, a).
      </p>
      <p>
        To conveniently handle extensions, we need diagrams. Let M be a Σ-structure.
The diagram of M [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], written Δ_Σ(M) (or just Δ(M)), is the set of ground Σ^{|M|}-literals
that are true in M. An easy but important result, called the Robinson Diagram
Lemma [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], says that, given any Σ-structure N, the embeddings μ : M → N are in
bijective correspondence with the expansions of N to Σ^{|M|}-structures which are models
of Δ_Σ(M). The expansions and the embeddings are related in the obvious way: the
name of a is interpreted as μ(a). It is convenient to see Δ_Σ(M) as a set of flat literals,
as follows: the positive part of Δ_Σ(M) contains the Σ^{|M|}-equalities f(a1, …, an) = b
which are true in M, and the negative part of Δ_Σ(M) contains the Σ^{|M|}-inequalities
a ≠ b, varying a, b among the pairs of distinct elements of |M|.
      </p>
      <p>
        For term rewriting we refer to a textbook like [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]; we only recall the following
classical result:
Lemma 3. Let R be a canonical ground rewrite system over a signature Σ. Then there is
a Σ-structure M such that, for every pair of ground terms t, u, we have that M ⊨ t = u
iff the R-normal form of t is the same as the R-normal form of u. Consequently, R is
consistent with a set of negative literals S iff for every t ≠ u ∈ S the R-normal forms of
t and u are different.
      </p>
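      <p>Lemma 3 can be illustrated with a tiny Python sketch (term and rule encodings are ours; the innermost strategy below suffices because diagram rules f(a1, …, an) → b have constant right-hand sides):</p>

```python
def normalize(t, R):
    """R-normal form of a ground term; terms are nested tuples, constants
    are strings, and R maps left-hand sides to constant right-hand sides."""
    if isinstance(t, tuple):
        t = tuple(normalize(x, R) for x in t)  # normalize arguments first
    while t in R:                              # then rewrite at the root
        t = R[t]
    return t

# a canonical ground system: f(a) -> b, f(b) -> b
R = {("f", "a"): "b", ("f", "b"): "b"}

# M |= f(f(a)) = f(b), since both sides have normal form b ...
same = normalize(("f", ("f", "a")), R) == normalize(("f", "b"), R)
# ... and R is consistent with {a != b}: a and b are distinct normal forms
consistent = normalize("a", R) != normalize("b", R)
```

      <p>This is exactly the use made of Lemma 3 in the proofs below: the positive part of a diagram is such a canonical system, and consistency with the negative part is checked on normal forms.</p>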
      <p>We are now ready to prove correctness and completeness of our algorithms. We first
give the relevant intuitions for the proof technique, which is the same in both cases.
By Lemma 2 above, what we need to show is that if a model M satisfies the output
formula of the algorithm, then it can be extended to a superstructure N satisfying the
input formula of the algorithm. By the Robinson Diagram Lemma, this is achieved if we
show that Δ(M) is consistent with the output formula of the algorithm. The output
formula is equivalent to a disjunction of constraints, and the diagram Δ(M) is also a
constraint (albeit an infinitary one). The positive part of Δ(M) is a canonical rewriting system
(equalities like f(a1, …, an) = a are obviously oriented from left to right) and every
term occurring in Δ(M) is in normal form. If an algorithm works properly, it will
be easy to see that the completion of the union of Δ(M) with the relevant disjunct
constraint is trivial and does not produce inconsistencies.</p>
      <sec id="sec-5-1">
        <title>Correctness and Completeness of the Tableaux Algorithm</title>
        <p>Theorem 1. Suppose that we apply the algorithm of Subsection 3.1 to the primitive
formula ∃e (φ(e, z)) and that the algorithm terminates with its branches in the states

δ1(y1, z) ∧ Φ1(y1, z) ∧ Ψ1(e1, y1, z), …, δk(yk, z) ∧ Φk(yk, z) ∧ Ψk(ek, yk, z);

then the UI of ∃e (φ(e, z)) in EUF is the unravelling (see Subsection 2.3) of the formula

⋁_{i=1}^{k} ∃y_i (δ_i(y_i, z) ∧ Φ_i(y_i, z)).  (15)</p>
        <p>
          Proof. Since ∃e (φ(e, z)) is logically equivalent to ⋁_{i=1}^{k} ∃y_i (δ_i(y_i, z) ∧ Φ_i(y_i, z) ∧
∃e_i Ψ_i(e_i, y_i, z)), it is sufficient to check that if a formula like (3) is terminal (i.e. no
rule applies to it) then its UI is ∃y (δ(y, z) ∧ Φ(y, z)). To this aim, we apply Lemma 2:
we pick a model M satisfying δ(y, z) ∧ Φ(y, z) via an assignment I to the variables
y, z⁹ and we show that M can be embedded into a model M′ such that, for a suitable
extension I′ of I to the variables e, (M′, I′) satisfies also Ψ(e, y, z). This
is proved in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], by using the Robinson Diagram Lemma.
□
⁹ Actually the values of the assignment I to the z uniquely determine the values of I to the y.
        </p>
      </sec>
      <sec id="sec-5-2">
        <title>Correctness and Completeness of the Conditional Algorithm</title>
        <p>Theorem 2. Let S3 be obtained as in Steps 1-2 from ∃e φ(e, z). Then the conjunction of
all possible φ_d (varying d among the conditional DAGs that can be built out of S3) is a
UI of ∃e φ(e, z) in EUF.</p>
        <p>
          Proof. We use Lemma 2. Condition (i) of that lemma is ensured by Lemma 1 above,
because ⋀S3 is logically equivalent to φ. So let us take a model M and elements ã
from its support such that M ⊨ ⋀_d φ_d under the assignment of the ã to the
parameters z. We need to expand M to a superstructure N in such a way that
N ⊨ ⋀S1 under some assignment to z, e extending the assignment z ↦ ã (recall that
⋀S1 is logically equivalent to φ too). The proof is involved and requires the Robinson
Diagram Lemma and additional lemmas: all the details are reported in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
□
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>Two different algorithms for computing uniform interpolants (UIs) from a formula in
EUF with a list of symbols to be eliminated have been presented. They share a common
subpart but differ in their overall objectives. The first algorithm is
nondeterministic and generates a UI expressed as a disjunction of conjunctions of literals,
whereas the second algorithm gives a compact representation of a UI as a conjunction of
Horn clauses. The output of both algorithms needs to be expanded if a fully (or partially)
unravelled uniform interpolant is needed for an application. This restriction/feature is
similar in spirit to syntactic unification, where efficient unification algorithms likewise
never produce output in fully expanded form, so as to avoid an exponential blow-up.</p>
      <p>For generating a compact representation of the UI, both algorithms make use of
DAG representations of terms by introducing new symbols to stand for subterms arising
in the full expansion of the UI. Moreover, the second algorithm uses a conditional DAG,
a new data structure introduced in the paper, to represent subterms under conditions.</p>
      <p>The complexity of the algorithms is also analyzed. It is shown that the first algorithm
generates exponentially many branches, each branch of at most quadratic length;
the UIs produced by the second algorithm often have polynomial size in concrete
examples (but the worst-case size is still exponential). A fully expanded UI can easily be of
exponential size. An implementation of both algorithms, along with a comparative
study, is planned as future work. In parallel with the implementation, a
characterization of classes of formulae for which the computation of UIs requires polynomial time in
our algorithms (especially the second one) needs further investigation.</p>
      <p>Acknowledgments. The third author has been partially supported by the National
Science Foundation award CCF-1908804.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Nipkow</surname>
          </string-name>
          .
          <source>Term Rewriting and All That</source>
          . Cambridge University Press, United Kingdom,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Bonacina</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Johansson</surname>
          </string-name>
          .
          <article-title>Interpolation systems for ground proofs in automated deduction: a survey</article-title>
          .
          <source>J. Autom. Reasoning</source>
          ,
          <volume>54</volume>
          (
          <issue>4</issue>
          ):
          <fpage>353</fpage>
          -
          <lpage>390</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Bonacina</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Johansson</surname>
          </string-name>
          .
          <article-title>On interpolation in automated theorem proving</article-title>
          .
          <source>J. Autom. Reasoning</source>
          ,
          <volume>54</volume>
          (
          <issue>1</issue>
          ):
          <fpage>69</fpage>
          -
          <lpage>97</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Burch</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Dill</surname>
          </string-name>
          .
          <article-title>Automatic verification of pipelined microprocessor control</article-title>
          . In D. L. Dill, editor,
          <source>Proc. of CAV</source>
          , volume
          <volume>818</volume>
          <source>of LNCS</source>
          , pages
          <fpage>68</fpage>
          -
          <lpage>80</lpage>
          . Springer,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghilardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gianola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rivkin</surname>
          </string-name>
          .
          <article-title>Formal modeling and SMT-based parameterized verification of data-aware BPMN</article-title>
          .
          <source>In Proc. of BPM</source>
          , volume
          <volume>11675</volume>
          <source>of LNCS</source>
          , pages
          <fpage>157</fpage>
          -
          <lpage>175</lpage>
          . Springer,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghilardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gianola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rivkin</surname>
          </string-name>
          .
          <article-title>From model completeness to verification of data aware processes</article-title>
          .
          <source>In Description Logic, Theory Combination, and All That</source>
          , volume
          <volume>11560</volume>
          <source>of LNCS</source>
          , pages
          <fpage>212</fpage>
          -
          <lpage>239</lpage>
          . Springer,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghilardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gianola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rivkin</surname>
          </string-name>
          .
          <article-title>Model completeness, covers and superposition</article-title>
          .
          <source>In Proc. of CADE</source>
          , volume
          <volume>11716</volume>
          <source>of LNCS (LNAI)</source>
          , pages
          <fpage>142</fpage>
          -
          <lpage>160</lpage>
          . Springer,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghilardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gianola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rivkin</surname>
          </string-name>
          .
          <article-title>Combined Covers and Beth Definability</article-title>
          . In N. Peltier and V. Sofronie-Stokkermans, editors,
          <source>Proc. of IJCAR</source>
          , volume
          <volume>12166</volume>
          <source>of LNCS (LNAI)</source>
          , pages
          <fpage>181</fpage>
          -
          <lpage>200</lpage>
          . Springer,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghilardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gianola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rivkin</surname>
          </string-name>
          .
          <article-title>SMT-based verification of data-aware processes: a model-theoretic approach</article-title>
          .
          <source>Math. Struct. Comput. Sci.</source>
          ,
          <volume>30</volume>
          (
          <issue>3</issue>
          ):
          <fpage>271</fpage>
          -
          <lpage>313</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>C.-C.</given-names>
            <surname>Chang</surname>
          </string-name>
          and
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Keisler</surname>
          </string-name>
          .
          <source>Model Theory</source>
          . North-Holland Publishing Co., Amsterdam-London, third edition
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>W.</given-names>
            <surname>Craig</surname>
          </string-name>
          .
          <article-title>Three uses of the Herbrand-Gentzen theorem in relating model theory and proof theory</article-title>
          .
          <source>J. Symbolic Logic</source>
          ,
          <volume>22</volume>
          :
          <fpage>269</fpage>
          -
          <lpage>285</lpage>
          ,
          <year>1957</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghilardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gianola</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Kapur</surname>
          </string-name>
          .
          <article-title>Compactly representing uniform interpolants for EUF using (conditional) DAGS</article-title>
          . CoRR, abs/2002.09784,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghilardi</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ranise</surname>
          </string-name>
          . MCMT:
          <article-title>A model checker modulo theories</article-title>
          .
          <source>In Proc. of IJCAR</source>
          , pages
          <fpage>22</fpage>
          -
          <lpage>29</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghilardi</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Zawadowski</surname>
          </string-name>
          . Sheaves, games, and model completions, volume
          <volume>14</volume>
          of Trends in Logic-Studia Logica Library. Kluwer Academic Publishers, Dordrecht,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>S.</given-names>
            <surname>Gulwani</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Musuvathi</surname>
          </string-name>
          .
          <article-title>Cover algorithms and their combination</article-title>
          .
          <source>In Proc. of ESOP, Held as Part of ETAPS</source>
          , pages
          <fpage>193</fpage>
          -
          <lpage>207</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16. G. Huang.
          <article-title>Constructing Craig interpolation formulas</article-title>
          .
          <source>In Computing and Combinatorics COCOON</source>
          , pages
          <fpage>181</fpage>
          -
          <lpage>190</lpage>
          . LNCS,
          <volume>959</volume>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>D.</given-names>
            <surname>Kapur</surname>
          </string-name>
          .
          <article-title>Shostak's congruence closure as completion</article-title>
          .
          <source>In Proc. of RTA</source>
          , pages
          <fpage>23</fpage>
          -
          <lpage>37</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>D.</given-names>
            <surname>Kapur</surname>
          </string-name>
          .
          <article-title>Nonlinear polynomials, interpolants and invariant generation for system analysis</article-title>
          .
          <source>In Proc. of the 2nd International Workshop on Satisfiability Checking and Symbolic Computation co-located with ISSAC</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>D.</given-names>
            <surname>Kapur</surname>
          </string-name>
          .
          <article-title>Conditional congruence closure over uninterpreted and interpreted symbols</article-title>
          .
          <source>J. Systems Science &amp; Complexity</source>
          ,
          <volume>32</volume>
          (
          <issue>1</issue>
          ):
          <fpage>317</fpage>
          -
          <lpage>355</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>D.</given-names>
            <surname>Kapur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Majumdar</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C. G.</given-names>
            <surname>Zarba</surname>
          </string-name>
          .
          <article-title>Interpolation for data structures</article-title>
          .
          <source>In Proc. of SIGSOFT FSE</source>
          , pages
          <fpage>105</fpage>
          -
          <lpage>116</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Lyndon</surname>
          </string-name>
          .
          <article-title>An interpolation theorem in the predicate calculus</article-title>
          .
          <source>Pacific J. Math.</source>
          ,
          <volume>9</volume>
          (
          <issue>1</issue>
          ):
          <fpage>129</fpage>
          -
          <lpage>142</lpage>
          ,
          <year>1959</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <given-names>K. L.</given-names>
            <surname>McMillan</surname>
          </string-name>
          .
          <article-title>Lazy abstraction with interpolants</article-title>
          .
          <source>In Proc. of CAV</source>
          , pages
          <fpage>123</fpage>
          -
          <lpage>136</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>G.</given-names>
            <surname>Nelson</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Oppen</surname>
          </string-name>
          .
          <article-title>Simplification by cooperating decision procedures</article-title>
          .
          <source>ACM Trans. Program. Lang. Syst.</source>
          ,
          <volume>1</volume>
          (
          <issue>2</issue>
          ):
          <fpage>245</fpage>
          -
          <lpage>257</lpage>
          ,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Pitts</surname>
          </string-name>
          .
          <article-title>On an interpretation of second order quantification in first order intuitionistic propositional logic</article-title>
          .
          <source>J. Symb. Log.</source>
          ,
          <volume>57</volume>
          (
          <issue>1</issue>
          ):
          <fpage>33</fpage>
          -
          <lpage>52</lpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>P.</given-names>
            <surname>Pudlák</surname>
          </string-name>
          .
          <article-title>Lower bounds for resolution and cutting plane proofs and monotone computations</article-title>
          .
          <source>J. Symb. Log.</source>
          ,
          <volume>62</volume>
          (
          <issue>3</issue>
          ):
          <fpage>981</fpage>
          -
          <lpage>998</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Shostak</surname>
          </string-name>
          .
          <article-title>Deciding combinations of theories</article-title>
          .
          <source>J. ACM</source>
          ,
          <volume>31</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          ,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>