                              Ordered Resolution
                     with Straight Dismatching Constraints

                Andreas Teucke                                             Christoph Weidenbach
      Max-Planck Institute for Informatics,                          Max-Planck Institute for Informatics,
    Campus E1.4 66123 Saarbrücken, Germany                        Campus E1.4 66123 Saarbrücken, Germany
      Graduate School of Computer Science,
            Saarbrücken, Germany




                                                         Abstract

                        We present a sound and complete ordered resolution calculus for first-
                        order clauses with straight dismatching constraints. The extended
                        clause language is motivated by our first-order theorem proving ap-
                        proach through approximation and refinement. Using a clause language
                        with straight dismatching constraints, single refinement steps do not re-
                        sult in a worst-case quadratic blowup in the number of clauses anymore.
                        The refinement steps can now be represented by replacing one input
                        clause with two equivalent clauses. We show soundness and complete-
                        ness of ordered resolution with straight dismatching constraints. All
                        needed operations on straight dismatching constraints take linear or
                        linear logarithmic time in the size of the constraint.




1     Introduction
Recently, we introduced a first-order calculus based on approximation and refinement [TW15]. There, a first-
order clause set is approximated to a decidable fragment using an unsatisfiability preserving transformation. In
case of a false positive, i.e., a resolution refutation of the approximation that does not imply unsatisfiability of
the original clause set, a refinement via instantiation of the original clause set removes the false positive. The
refinement involves instantiating an original clause C such that no ground instances of C are lost while two
specific ground instances Cσ1 and Cσ2 get separated in the process. This requires (|Σ| − 1) · d + 1 instantiations
where Σ is the signature and d is the minimal term depth at which Cσ1 and Cσ2 differ from each other.
As an example, consider the first-order Horn clauses
                                                                Q(x) → P (g(x, f (x)))
                                                                      → Q(f (g(a, a)))
                                                                      → Q(f (g(b, b)))
                                  P (g(f (g(a, a)), f (f (g(b, b))))) →
under signature {a/0, b/0, f /1, g/2} that are approximated into


     Copyright © by the paper’s authors. Copying permitted for private and academic purposes.
   In: P. Fontaine, S. Schulz, J. Urban (eds.): Proceedings of the 5th Workshop on Practical Aspects of Automated Reasoning
(PAAR 2016), Coimbra, Portugal, 02-07-2016, published at http://ceur-ws.org




                                                      S(y), Q(x)      →   P (g(x, y))
                                                            Q(z)      →   S(f (z))
                                                                      →   Q(f (g(a, a)))
                                                                      →   Q(f (g(b, b)))
                                  P (g(f (g(a, a)), f (f (g(b, b))))) →
via linearization of g(x, f (x)) to g(x, f (z)) and then deep variable term extraction of f (z) through the introduction
of a fresh predicate S [TW15]. The approximated clause set has a refutation and the corresponding conflicting
core, a minimal unsatisfiable set of instances from the above clauses generating this refutation, is
                         S(f (f (g(b, b)))), Q(f (g(a, a))) → P (g(f (g(a, a)), f (f (g(b, b)))))
                                             Q(f (g(b, b))) → S(f (f (g(b, b))))
                                                              → Q(f (g(a, a)))
                                                              → Q(f (g(b, b)))
                            P (g(f (g(a, a)), f (f (g(b, b))))) →
   Lifting the conflicting core to the original clause set fails, because the resolvent of the first two conflict clauses,
eliminating the introduced S predicate
                          Q(f (g(b, b))), Q(f (g(a, a))) → P (g(f (g(a, a)), f (f (g(b, b)))))
is not an instance of the original clause Q(x) → P (g(x, f (x))), modulo duplicate literal elimination. A refinement
step replaces Q(x) → P (g(x, f (x))) by the instance Q(f (g(a, y))) → P (g(f (g(a, y)), f (f (g(a, y))))), and instances
representing Q(x) → P (g(x, f (x))), where x is not instantiated with f (g(a, y)). The former clause contains the
ground instance Q(f (g(a, a))) → P (g(f (g(a, a)), f (f (g(a, a))))) and the latter clauses include the ground instance
Q(f (g(b, b))) → P (g(f (g(b, b)), f (f (g(b, b))))):
                                          Q(a) → P (g(a, f (a)))
                                          Q(b) → P (g(b, f (b)))
                                   Q(g(x, y)) → P (g(g(x, y), f (g(x, y))))
                                      Q(f (a)) → P (g(f (a), f (f (a))))
                                      Q(f (b)) → P (g(f (b), f (f (b))))
                                 Q(f (f (x))) → P (g(f (f (x)), f (f (f (x)))))
                               Q(f (g(b, y))) → P (g(f (g(b, y)), f (f (g(b, y)))))
                           Q(f (g(f (x), y))) → P (g(f (g(f (x), y)), f (f (g(f (x), y)))))
                        Q(f (g(g(x, y), z))) → P (g(f (g(g(x, y), z)), f (f (g(g(x, y), z)))))
Then, the approximation of the above nine clauses, via linearization and deep variable term extraction, excludes
the previously found refutation. Actually, the refined approximated clause set yields a finite saturation and
therefore shows satisfiability of the original clause set.
   In this paper we introduce the new notion of straight dismatching constraints x ≠ t attached to clauses, and
a corresponding ordered resolution calculus. A term t is straight if it is linear and, at every nesting level, all
arguments are variables except for at most one straight argument (Definition 1), i.e., the term contains at most
one path along non-variable symbols. Straight dismatching constraints enable a refinement where a clause is
replaced by exactly two new clauses. For the above example, they are
                               Q(f (g(a, y))) → P (g(f (g(a, y)), f (f (g(a, y)))))
                                         Q(x) → P (g(x, f (x))) ; x ≠ f (g(a, v))
where the second clause represents all ground instances where x is not instantiated with an instance of f (g(a, v)),
a straight term. This results in the refined approximation
                  (1)                 S1 (y), S2 (x), Q(x) → P (g(x, y))∗
                  (2)                         S2 (x), Q(x) → S1 (f (x))∗
                  (3)                                S3 (x) → S2 (f (x))∗
                  (4)                                       → S3 (g(a, y))∗
                  (5)                         S4 (y), Q(x) → P (g(x, y))∗        ; x ≠ f (g(a, v))
                  (6)                                 Q(x) → S4 (f (x))∗         ; x ≠ f (g(a, v))
                  (7)                                       → Q(f (g(a, a)))∗
                  (8)                                       → Q(f (g(b, b)))∗
                   (9) P (g(f (g(a, a)), f (f (g(b, b)))))+ →
  where under the lpo ordering P ≺ Q ≺ S4 ≺ S3 ≺ S2 ≺ S1 ≺ a ≺ b ≺ f ≺ g and selection of the first complex
negative literal, the only inferences are




              [10 : Res : 1, 9]                S1 (f (f (g(b, b))))+ , S2 (f (g(a, a))), Q(f (g(a, a)))             →
              [11 : Res : 2, 10] S2 (f (g(b, b)))+ , Q(f (g(b, b))), S2 (f (g(a, a))), Q(f (g(a, a)))               →
              [12 : Res : 3, 11]      S3 (g(b, b))+ , Q(f (g(b, b))), S2 (f (g(a, a))), Q(f (g(a, a)))              →
A resolution between the fifth and ninth clause has the tautologous result
                          S4 (f (f (g(b, b)))), Q(f (g(a, a))) → ; f (g(a, a)) ≠ f (g(a, v)),
since the constraint is unsolvable: f (g(a, a)) is an instance of f (g(a, v)), so the clause has no ground instances.
   Clauses with straight dismatching constraints are closed under resolution and factoring. Repeated refine-
ment steps and resolution inferences can add further atomic constraints to a clause, which are interpreted as
conjunctions, e.g., (C; x ≠ t ∧ y ≠ s) represents instances of C where neither x is an instance of t nor y is an
instance of s. Straight dismatching constraints can be efficiently encoded in amortized constant space once the
input constraint terms are stored. Any straight term generated by the calculus is a subterm of an input straight
term. Relevant operations (substitution, intersection, solvability and subset tests) take linear time in the size of
the constraints, once they are ordered. Ordering can be done as usual in linear logarithmic time. The ordered
resolution calculus with straight dismatching constraints is sound and complete, enables an abstract redundancy
criterion and follows the saturation principle: if all non-redundant inferences from a clause set are performed
and the empty clause is not generated then the resulting saturated clause set has a model computed by a model
operator.
   Ordered resolution on constraint clauses has already been investigated. Clauses with dismatching constraints
are an instance of the constraint clauses introduced in [NR01]. The authors prove soundness and completeness
for ordered superposition extended with general syntactic equality and ordering inequality constraints powerful
enough to encode unification and maximality restrictions. Algorithms for solving these constraints are NP-hard.
In our more specific setting, the operations on the constraints are linear or linear logarithmic. Furthermore, our
completeness proof is an extension of the original superposition completeness proof without constraints [BG94],
enabling the standard superposition redundancy criterion.
   Our constraints are also a restricted version of the dismatching constraints from [AW15]. These constraints are
more powerful than straight constraints as they also allow dismatching between variables in a clause. However,
solving these constraints is again NP-hard and completeness for the NRCL calculus is only shown for a finite
domain first-order fragment.
   The same dismatching constraints are used in DInst-Gen [Kor13] to exclude redundant instances of clauses.
Solving constraints can even be done in linear time because of an extended signature with an additional fresh
constant ⊥. All satisfiable atomic constraints share the solution where all variables are substituted with ⊥.
Solving constraints therefore reduces to the matching problem on the atomic constraints. Our constraint clause
language, however, does not rely on an extended signature.
   In [CZ92], clauses are constrained by equational systems [CL89]. An equational system is a quantifier free
formula over (dis)equations that are syntactically interpreted over a Herbrand domain. The authors show sound-
ness and completeness of an ordered resolution calculus for the extended clauses. They devise a separate calculus
for model building. Testing solvability of equational systems is NP-hard. Straight dismatching constraints are a
special case of equational systems. They enable more efficient algorithms (Section 4) that are not instances of the
general algorithms for systems. Furthermore, our correctness result for ordered resolution (Section 3) includes a
notion of redundancy and the concept of saturation, i.e., model building is the result of a saturated clause set.
  Our approach to solving constraints is a special case of the solved form construction described in [CL89].
Normalizing straight constraints only requires the Clash (C2 ), Decomposition (D2 ) and Universality (U1 ) rules.
For straight terms the Decomposition rule simplifies to f (t1 , . . . , tn ) ≠ f (y1 , . . . , yi−1 , si , yi+1 , . . . , yn ) ⇒ ti ≠ si .
The Explosion (E) rule is refined to instantiations with shallow instances (Definition 12).
   An alternative to using constraints is the explicit representation described in [LM87]. Their algorithm allows
computing a finite set of clauses covering the same ground clauses as a clause with constraint. For example,
for (Q(x) → P (g(x, f (x))); x 6= f (g(a, v))) the explicit representation is exactly the nine clauses listed in the
example. This always succeeds because by definition our constraint represents unrestricted counter-examples.
While using the explicit representations is equivalent to using constraints, we require quadratically fewer clauses.
   The paper is organized as follows. After the preliminaries, we formally define straight dismatching constraints
in Section 2. Then, Section 3 contains the soundness and completeness proofs for our ordered resolution calculus
with constraints. We conclude with discussing the complexity of operations on straight dismatching constraints.




2     Preliminaries
We consider a standard first-order language without equality. The letters v, w, x, y, z denote variables, f, g, h
functions, a, b, c constants, s, t terms, p, q, r positions and Greek letters σ, τ, ρ, δ are used for substitutions. Capital
letters S, P, Q, R denote predicates, A, B atoms, E, L literals, C, D clauses.
     A first-order language is constructed over a signature Σ = (F, P), where F and P are non-empty, disjoint, in
general infinite sets of function and predicate symbols, respectively. Every function or predicate symbol has a
fixed arity and nullary functions are called constants. Further, we assume an infinite set X of variable symbols
disjoint from Σ. Then, the set of all terms T(F, X) is defined recursively by: (i) every constant symbol c ∈ F
is a term, (ii) every variable x ∈ X is a term and (iii) whenever t1 , . . . , tn are terms and f ∈ F is a function
symbol with arity n, then f (t1 , . . . , tn ) is a term. If t1 , . . . , tn are terms and P ∈ P is a predicate symbol with
arity n, then P (t1 , . . . , tn ) is an atom and the ti are its arguments. A term is called linear if there are no
duplicate variable occurrences. For brevity, we also write ~t instead of t1 , . . . , tn for the argument of functions
and predicates. A literal is either an atom or an atom preceded by ¬ and it is then called positive or negative,
respectively.
   The set of free variables of an atom (term), denoted by vars(·), is defined as follows: vars(P (t1 , . . . , tn )) =
∪i vars(ti ), vars(f (t1 , . . . , tn )) = ∪i vars(ti ) and vars(x) = {x}. The function naturally extends to literals,
clauses and sets thereof.
     A multiset over a set A is a function M from A to the natural numbers. For an element e ∈ A, M (e) is
the number of occurrences of e in M . We say e is an element of M if M (e) > 0. The union, intersection, and
difference of multisets are defined by M1 (x) ∪ M2 (x) = M1 (x) + M2 (x), M1 (x) ∩ M2 (x) = min(M1 (x), M2 (x)),
and M1 (x) \ M2 (x) = max(0, M1 (x) − M2 (x)). We use set-like notation for multisets.
     A clause is a multiset of literals which we write as an implication Γ → ∆ where the atoms in the multiset ∆
(the succedent) denote the positive literals and the atoms in the multiset Γ (the antecedent) the negative literals.
We write □ for the empty clause. If Γ is empty we omit →, e.g., we write P (x) as an alternative to → P (x),
whereas if ∆ is empty, → is always shown. We abbreviate disjoint set union with sequencing, for example, we
write Γ, Γ′ → ∆, A instead of Γ ∪ Γ′ → ∆ ∪ {A}. Terms, literals and clauses are called ground if they contain no
variables.
     A substitution σ is a mapping from variables V into terms, denoted by pairs {x 7→ t}, where xσ ≠ x holds only
for finitely many variables. The update σ[x 7→ t] denotes the substitution where x maps to t and y maps to
yσ for all y 6= x. The composition στ denotes the substitution (xσ)τ for all variables x. A substitution σ is a
grounding substitution for V if xσ is ground for every variable x ∈ V.
     Given two terms (atoms) s, t, a substitution σ is called a unifier for s and t if sσ = tσ. It is called a most
general unifier (mgu) if for any other unifier τ of s and t there exists a substitution δ with σδ = τ . s is called an
instance of t if there exists a substitution σ such that tσ = s.
     An atom ordering ≺ is a well-founded, total ordering on ground atoms. For the atom ordering ≺, A ⪯ B is
defined by A ≺ B or A = B. An atom A is maximal with respect to a multiset of atoms ∆ if for all B ∈ ∆ we
have A ⊀ B. Any atom ordering ≺ is extended to literals by A ≺ ¬A ≺ B if A ≺ B. The multiset extension
≺mul of an ordering ≺ is defined by M1 ≺mul M2 iff M1 ≠ M2 and for all m ∈ M1 with M1 (m) > M2 (m)
there is an m′ ∈ M2 with m ≺ m′ . The multiset extension of the literal ordering induces an ordering on ground
clauses. The clause ordering is compatible with the atom ordering: if the maximal atom in C is greater than the
maximal atom in D, then D ≺ C. We use ≺ simultaneously to denote an atom ordering and its literal and clause
extensions. For a ground clause set N and clause C, the set N ≺C = {D ∈ N | D ≺ C} denotes the clauses of
N smaller than C. A literal E (A or ¬A) is (strictly) maximal in a clause Γ → ∆, A (Γ, A → ∆) if there is no
literal in Γ → ∆ that is greater than (or equal to) E with respect to ≺.
     A Herbrand interpretation I is a (possibly infinite) set of ground atoms. A ground atom A is called true in I
if A ∈ I and false otherwise. I is said to satisfy a ground clause C = Γ → ∆, denoted by I ⊨ C, if ∆ ∩ I ≠ ∅ or
Γ ⊄ I. A non-ground clause C is satisfied by I if I ⊨ Cσ for every grounding substitution σ. An interpretation
I is called a model of N , I ⊨ N , if I ⊨ C for every C ∈ N . A model I of N is considered minimal with respect
to set inclusion if there is no model I′ with I′ ⊊ I and I′ ⊨ N . A set of clauses N is satisfiable if there exists a
model of N . Otherwise, the set is unsatisfiable.
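As an illustration (not part of the original formalism), the satisfaction condition for ground clauses can be sketched in a few lines of Python; the string encoding of atoms and the names `satisfies` and `I` are assumptions of this sketch:

```python
# Minimal sketch: ground atoms are plain strings, a Herbrand interpretation I
# is a set of such strings, and a ground clause Γ → ∆ is given by the two
# atom lists gamma (antecedent) and delta (succedent).
def satisfies(I, gamma, delta):
    """I satisfies Γ → ∆ iff ∆ ∩ I ≠ ∅ or Γ ⊄ I."""
    return bool(set(delta) & I) or not set(gamma) <= I

I = {"Q(a)", "P(a)"}
print(satisfies(I, ["Q(a)"], ["P(a)"]))  # True: P(a) ∈ I
print(satisfies(I, ["Q(a)"], ["P(b)"]))  # False: Q(a) ∈ I but P(b) ∉ I
print(satisfies(I, ["Q(b)"], []))        # True: the antecedent atom Q(b) ∉ I
```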

2.1   Straight Dismatching Constraints
In the following, we extend clauses with dismatching constraints x ≠ t which restrict the allowed ground instances.
An important restriction is that the term t is straight, i.e., it only has a single branch.




Definition 1 (Straight Terms). A term f (s1 , . . . , sn ) is called straight, if f (s1 , . . . , sn ) is linear and all arguments
are variables except for at most one straight argument si .
   For example, the terms f (x, f (a, y)) and f (x, f (y, z)) are straight, while f (x, f (a, b)) is not. Straight terms
can be identified visually in their tree form: the function symbols lie along exactly one branch. (The tree diagrams
of f (x, f (a, y)), f (x, f (y, z)) and f (x, f (a, b)) are omitted here; in the first two, the non-variable symbols form a
single path, whereas in f (x, f (a, b)) the inner f has two non-variable arguments.)
Note that for two different ground terms t1 and t2 , there always exists a position in both where they have different
function symbols. The path from the root to this position in t1 defines a straight term t that has t1 but not t2
as an instance. Thus, straight terms are sufficient to isolate two ground instances.
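Definition 1 can be checked mechanically. The following sketch uses a term encoding that is an assumption of ours, not the paper's: function applications are tuples ("f", t1, ..., tn), constants are one-element tuples like ("a",), and variables are plain strings.

```python
# Sketch of the straightness test of Definition 1 (assumed tuple/string
# term encoding).
def is_var(t):
    return isinstance(t, str)

def is_linear(t):
    # linear: no variable occurs twice
    occurrences = []
    def walk(u):
        if is_var(u):
            occurrences.append(u)
        else:
            for a in u[1:]:
                walk(a)
    walk(t)
    return len(occurrences) == len(set(occurrences))

def is_straight(t):
    # linear, and at every level all arguments are variables except for
    # at most one straight argument
    if is_var(t) or not is_linear(t):
        return False
    def one_path(u):
        complex_args = [a for a in u[1:] if not is_var(a)]
        return len(complex_args) <= 1 and all(one_path(a) for a in complex_args)
    return one_path(t)

# the paper's examples:
print(is_straight(("f", "x", ("f", ("a",), "y"))))     # True
print(is_straight(("f", "x", ("f", "y", "z"))))        # True
print(is_straight(("f", "x", ("f", ("a",), ("b",)))))  # False: two non-variable arguments
```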
Definition 2 (Straight Dismatching Constraint). A straight dismatching constraint π is of the form
                                                ∧i∈I si ≠ ti
where I is a finite set of indices, the si are arbitrary terms and the ti are straight terms.
   Given a substitution σ, πσ = ∧i∈I si σ ≠ ti . lvars(π) = ∪i∈I vars(si ) are the left-hand variables in constraint
π. We further extend the set of constraints with the constants ⊤ and ⊥ representing the empty and the unsolvable
constraint, respectively. An atomic constraint s ≠ t occurring in π is called a subconstraint of π. The length
|π| is defined as the number of subconstraints, i.e., |I|. The size, size(π), and depth, d(π), of a constraint
π = ∧i∈I si ≠ ti are the sum and maximum of the term depths of the ti , respectively. For brevity, we will call
straight dismatching constraints just constraints in the following.
Definition 3 (Solutions). Let π be a constraint, Σ a signature and V a set of variables with lvars(π) ⊆ V.
A solution of π over V is a grounding substitution δ over V such that for all i ∈ I, si δ is not an instance of
ti . DV (π) := {δ | δ is a solution of π over V}. A constraint is solvable if it has a solution and unsolvable,
otherwise. Two constraints are equivalent if they have the same solutions over all V. In particular, all grounding
substitutions are solutions of ⊤, and ⊥ has no solution.
   For example, consider the constraint π = x ≠ b ∧ x ≠ f (f (u)) ∧ x ≠ g(v, w) with the signature F =
{a/0, b/0, f /1, g/2}. π is solvable and DV (π) = {{x 7→ a}, {x 7→ f (a)}, {x 7→ f (b)}, {x 7→ f (g(s, t))} | s, t ∈
T(F, ∅)} is the set of solutions of π over {x}.
   Note that atomic constraints are not symmetric. For example, x ≠ a can have a solution while a ≠ x is
unsolvable. Furthermore, the right-hand variables in a constraint can be arbitrarily renamed while the left-hand
variables are fixed. For example, the constraints x ≠ f (v) and x ≠ f (w) are equivalent, but x ≠ f (v) and
y ≠ f (v) are not. For an atom P (g(x, y)) and a constraint x ≠ f (y), the two y variables are not connected
since for σ = {y 7→ t}, P (g(x, y))σ = P (g(x, t)) but (x ≠ f (y))σ = (x ≠ f (y)). To avoid confusion, we generally
rename right-hand variables to be unique in a given context.
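Checking whether a given grounding substitution is a solution reduces to matching against the straight right-hand terms. A sketch, again in our assumed tuple/string term encoding (the names `instance_of` and `is_solution` are ours):

```python
# A grounding substitution δ solves π iff for every atomic constraint x ≠ t,
# xδ is not an instance of the straight term t.
def is_var(t):
    return isinstance(t, str)

def instance_of(s, t):
    """Is the ground term s an instance of the linear term t?"""
    if is_var(t):
        return True  # a variable matches any term (t is linear, no binding needed)
    return (not is_var(s) and s[0] == t[0] and len(s) == len(t)
            and all(instance_of(sp, tp) for sp, tp in zip(s[1:], t[1:])))

def is_solution(delta, pi):
    """delta: dict variable -> ground term; pi: list of atomic constraints (x, t)."""
    return all(not instance_of(delta[x], t) for x, t in pi)

# the paper's example: π = x ≠ b ∧ x ≠ f(f(u)) ∧ x ≠ g(v, w)
pi = [("x", ("b",)), ("x", ("f", ("f", "u"))), ("x", ("g", "v", "w"))]
print(is_solution({"x": ("a",)}, pi))                # True: {x ↦ a} is a solution
print(is_solution({"x": ("f", ("b",))}, pi))         # True: {x ↦ f(b)} is a solution
print(is_solution({"x": ("g", ("a",), ("b",))}, pi)) # False: g(a, b) is an instance of g(v, w)
```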
   If δ is a solution of π ∧ π′ , then δ is a solution of π, and if δ is a solution of πσ, then σδ is a solution of π.
These two properties follow directly from the definition. We will ignore V if it is clear in a given context. For
example, if π restricts the clause C, V is always vars(C) ∪ lvars(π).
Definition 4 (Constraint Normal Form). A constraint π = ∧i∈I si ≠ ti is called normal, if all si are variables
and for all si = sj with i ≠ j, ti is not an instance of tj .
Definition 5 (Constraint Normalization). We define constraint normalization π ↓ as the normal form of the
following transition system over constraints.

                        1    π ∧ s ≠ y                                      ⇒ ⊥
                        2    π ∧ f (s1 , . . . , sn ) ≠ f (y1 , . . . , yn ) ⇒ ⊥
                        3    π ∧ f (s1 , . . . , sn ) ≠ f (t1 , . . . , tn ) ⇒ π ∧ si ≠ ti   if ti is complex
                        4    π ∧ f (s1 , . . . , sn ) ≠ g(t1 , . . . , tm ) ⇒ π              if f ≠ g
                        5    π ∧ x ≠ t ∧ x ≠ tσ                             ⇒ π ∧ x ≠ t
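The five rules above can be turned into a small worklist procedure. The following sketch (ours, not the paper's code) encodes a constraint as a list of (s, t) pairs with t straight, the string "BOT" standing for ⊥ and the empty list for ⊤:

```python
def is_var(t):
    return isinstance(t, str)

def instance_of(s, t):
    # is s an instance of the linear term t?
    if is_var(t):
        return True
    return (not is_var(s) and s[0] == t[0] and len(s) == len(t)
            and all(instance_of(sp, tp) for sp, tp in zip(s[1:], t[1:])))

def normalize(pi):
    out, work = [], list(pi)
    while work:
        s, t = work.pop()
        if is_var(s):
            # rule 5: keep only the most general constraint per variable
            if any(x == s and instance_of(t, t2) for x, t2 in out):
                continue
            out = [(x, t2) for x, t2 in out
                   if not (x == s and instance_of(t2, t))] + [(s, t)]
        elif is_var(t):
            return "BOT"                      # rule 1: s ≠ y
        elif s[0] != t[0]:
            continue                          # rule 4: head clash, drop
        elif all(is_var(a) for a in t[1:]):
            return "BOT"                      # rule 2: f(s⃗) ≠ f(y⃗)
        else:
            # rule 3: descend into the unique complex argument of t
            i = next(j for j, a in enumerate(t[1:]) if not is_var(a))
            work.append((s[1 + i], t[1 + i]))
    return out

print(normalize([(("f", ("a",)), "y")]))      # BOT (rule 1)
print(normalize([(("f", "x"), ("g", "y"))]))  # [] i.e. ⊤ (rule 4)
print(normalize([("x", ("f", "y")), ("x", ("f", ("g", "u", "v")))]))
# [('x', ('f', 'y'))] (rule 5: f(g(u, v)) is an instance of f(y))
```

Rule 5 is applied in both directions here so that only the most general constraint on each left-hand variable survives.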




Lemma 6. Constraint normalization terminates.

Proof. The first and second rule trivially terminate the normalization. In the third rule, the size of the constraint
decreases. The last two rules reduce the length of the constraint.

Lemma 7. π↓ is a normal constraint and equivalent to π.

Proof. In the first and second rule, f (s1 , . . . , sn ) is an instance of y and f (y1 , . . . , yn ), respectively. Hence,
there are no solutions, which is equivalent to the unsatisfiable constraint ⊥. In the third rule, a grounding
substitution δ is a solution of f (s1 , . . . , sn ) ≠ f (t1 , . . . , tn ) if and only if at least one sj δ is not an instance
of tj , i.e., f (s1 , . . . , sn ) ≠ f (t1 , . . . , tn ) is equivalent to s1 ≠ t1 ∨ . . . ∨ sn ≠ tn . However, since f (t1 , . . . , tn ) is
straight and the second rule does not apply, there is exactly one complex ti . Thus, each sj δ is an instance of
the respective variable tj for j ≠ i and therefore s1 ≠ t1 ∨ . . . ∨ sn ≠ tn simplifies to just si ≠ ti . In the fourth
rule, f (s1 , . . . , sn ) can never be an instance of g(t1 , . . . , tm ) for any grounding substitution. Hence, all solutions
of π are solutions of π ∧ f (s1 , . . . , sn ) ≠ g(t1 , . . . , tm ). In the last rule, let δ be a solution of π ∧ x ≠ t. Then, xδ
is not an instance of t and hence not an instance of tσ either. Therefore, δ is a solution of π ∧ x ≠ t ∧ x ≠ tσ.
All rules together cover every case where the left-hand side of a subconstraint is not a variable or there are two
subconstraints x ≠ t and x ≠ s where s is an instance of t. Hence, π↓ is a normal constraint.

Definition 8 (Constrained Clause). A pair of a clause and a constraint (C; π) is called a constrained clause. A
constrained clause is normal, if π is normal and lvars(π) ⊆ vars(C). For a given signature Σ, the set G((C; π)) =
{Cδ | δ ∈ DV (π), V = vars(C) ∪ lvars(π)} is called the set of ground instances of (C; π). The notion extends to
sets of constrained clauses. Two constrained clauses are equivalent if they have the same ground instances. A
Herbrand interpretation I satisfies (C; π), if I  G((C; π)).

   Note that in the context of a constrained clause (C; π), a solution δ of π is implicitly over V = vars(C)∪lvars(π)
such that Cδ ∈ G((C; π)). We can now also formally define the refinement described in the introduction.

Lemma 9 (Refinement). Let N ∪ {(C; π)} be a constrained clause set, x ∈ vars(C) and t a straight term with
vars(t) ∩ vars(C) = ∅. Then N ∪ {(C; π)} and N ∪ {(C; π ∧ x ≠ t), (C; π){x 7→ t}} are equisatisfiable.

Proof. Let Cδ ∈ G((C; π)). If xδ is not an instance of t, then δ is a solution of π ∧ x ≠ t and Cδ ∈ G((C; π ∧ x ≠ t)).
Otherwise, δ = {x 7→ t}δ′ for some substitution δ′ . Then, δ′ is a solution of π{x 7→ t} and thus, Cδ = C{x 7→
t}δ′ ∈ G((C{x 7→ t}; π{x 7→ t})). Hence, G((C; π)) ⊆ G((C; π ∧ x ≠ t)) ∪ G((C; π){x 7→ t}). Therefore, if I is a
model of N ∪ {(C; π ∧ x ≠ t), (C; π){x 7→ t}}, then I is also a model of N ∪ {(C; π)}.
   Let D ∈ G((C; π ∧ x ≠ t)) ∪ G((C; π){x 7→ t}). If D = Cδ ∈ G((C; π ∧ x ≠ t)), then δ is also a solution of
π and therefore D ∈ G((C; π)). If D = C{x 7→ t}δ ∈ G((C; π){x 7→ t}), then {x 7→ t}δ is a solution of π and
therefore D ∈ G((C; π)). Hence, G((C; π ∧ x ≠ t)) ∪ G((C; π){x 7→ t}) ⊆ G((C; π)). Therefore, if I is a model of
N ∪ {(C; π)}, then I is also a model of N ∪ {(C; π ∧ x ≠ t), (C; π){x 7→ t}}.
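The refinement step of Lemma 9 is purely syntactic: a clause (C; π) is replaced by (C; π ∧ x ≠ t) and the instance (C; π){x ↦ t}. A sketch in our assumed tuple/string term encoding (the names `subst` and `refine` are ours):

```python
# A constrained clause is modeled as a pair (clause, pi), where a clause is a
# list of literal terms and pi a list of atomic constraints (s, t).
def is_var(t):
    return isinstance(t, str)

def subst(t, sigma):
    # apply a substitution (dict variable -> term) to a term
    if is_var(t):
        return sigma.get(t, t)
    return (t[0],) + tuple(subst(a, sigma) for a in t[1:])

def refine(clause, pi, x, t):
    """Split (C; pi) on variable x and straight term t, as in Lemma 9."""
    constrained = (clause, pi + [(x, t)])                   # (C; pi ∧ x ≠ t)
    sigma = {x: t}
    instantiated = ([subst(l, sigma) for l in clause],
                    [(subst(s, sigma), u) for s, u in pi])  # (C; pi){x ↦ t}
    return constrained, instantiated

# the introductory refinement of Q(x) → P(g(x, f(x))) with t = f(g(a, v)):
C = [("Q", "x"), ("P", ("g", "x", ("f", "x")))]
t = ("f", ("g", ("a",), "v"))
c1, c2 = refine(C, [], "x", t)
print(c1)  # (C; x ≠ f(g(a, v)))
print(c2)  # the instance with x ↦ f(g(a, v))
```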

   Consider again the introductory example. Lifting the conflicting core of the approximation failed because
the so-called lift-conflict D = Q(f (f (b))), Q(f (f (a))) → P (g(f (f (a)), f (f (b)))) is not an instance of the original
clause Q(x) → P (g(x, f (x))), specifically because x cannot be instantiated with an instance of both f (f (a)) and
f (f (b)). Therefore, we refine the original clause with the instantiation {x 7→ f (f (a))} and the opposing constraint
x ≠ f (f (a)), resulting in C1 = Q(f (f (a))) → P (g(f (f (a)), f (f (f (a))))) and C2 = (Q(x) → P (g(x, f (x))) ; x ≠
f (f (a))). D differs from C1 because both occurrences of x are already instantiated with f (f (a)), excluding
any approximations where the second x is instantiated with f (f (b)). Since x ≠ f (f (a)) has no solutions
where x is instantiated with f (f (a)) and our approximation preserves this fact, D cannot be inferred from the
approximation of C2 . Therefore, the lift-conflict D cannot happen again.
                                                                            V
Lemma 10 (Clause Simplification). A constrained clause (C; π ∧ ∧i∈I x ≠ ti ) is equivalent to (C; π), if π is
normal, x ∉ vars(C) ∪ lvars(π) and ∧i∈I x ≠ ti is solvable.
Proof. Let Cδ ∈ G(C; π), where δ is a solution of π over V. Because ∧i∈I x ≠ ti is solvable, there is a
solution {x 7→ t} over {x}. Then, δ[x 7→ t] is a solution of ∧i∈I x ≠ ti ∧ π over V ∪ {x}. Hence, Cδ[x 7→
t] ∈ G(C; ∧i∈I x ≠ ti ∧ π). The reverse direction is trivial.




   Lemmas 7 and 10 show that for any constrained clause there exists an equivalent normal clause. In the
following we assume that constrained clauses are normal.
   Lastly in this section, we show that solvability of a constraint is decidable. Note that not all unsolvable
constraints trivially normalize to ⊥, and whether a normal constraint is solvable can depend on the signature.
For example, x ≠ a is a normal constraint that is solvable for signature {a/0, b/0} but unsolvable for signature
{a/0}. A discussion of the implementation is deferred to Section 4.
Proposition 11. A constraint π has no solution if and only if π↓ = ⊥ or there exists a subset πx = ∧i∈I x ≠ si
of π↓ that has no solution.
Definition 12 (Shallow Instances). Let {y1 , y2 , . . .} be an infinite set of pairwise distinct variables. We define
the set of shallow instantiations of x under signature Σ as the set of substitutions
                                       ΘΣx = {{x 7→ f (y1 , . . . , yn )} | f ∈ F with arity n}.

   Note that if the set of function symbols F is infinite, ΘΣx is also infinite, but then any constraint is trivially
solvable if its normal form is different from ⊥. In the context of ordered resolution, the function symbols
appearing in the input clause set constitute a finite signature.
Lemma 13. A constraint πx = ∧i∈I x ≠ si has no solution under signature Σ if and only if πx σ has no solution
for every σ ∈ ΘΣx .

Proof. “⇒”. Assume δ is a solution of πx σ for some shallow instantiation σ. Then, σδ is a solution of πx , which
contradicts the assumption.
   “⇐”. Assume δ is a solution of πx . Then, xδ is ground and hence, xδ = f (y1 , . . . , yn )σ for some function
symbol f ∈ F and substitution σ. Thus, πx {x 7→ f (y1 , . . . , yn )} = ∧i∈I f (y1 , . . . , yn ) ≠ si has σ as a solution,
which contradicts the assumption.
Lemma 14. Solvability of a constraint π is decidable.
Proof. If F is not finite, a constraint π has no solution if and only if π↓ = ⊥. Let π↓ = πx1 ∧ . . . ∧ πxn with xi ≠ xj
for i ≠ j. Because |πxi | is finite, there exists a function symbol f ∈ F such that xi ≠ f (~t ) ∉ πxi for all straight
terms f (~t ). Thus, for an arbitrary ground term f (~s ), σi = {xi 7→ f (~s )} is a solution of πxi . Then, σ1 ∪ . . . ∪ σn
is a solution of π.
   If F is finite, we use Proposition 11 and Lemma 13 recursively to check solvability of π. Using Proposition 11
we pick the constraints of one variable. Then the constraints are instantiated according to Lemma 13. The
instantiated constraints are normalized and then the overall procedure is repeated until either ⊥ or ⊤ is derived.
The procedure terminates because each instantiation strictly decreases the term depths of the respective
constraint and the signature is finite.

3    Ordered Resolution with Straight Dismatching Constraints
In this section we define an ordered resolution calculus with selection and redundancy on constrained clauses.
In [NR01] the authors show that superposition without equality, i.e., ordered resolution, is compatible with
arbitrary constraint systems. So their completeness results provide completeness for a straightforward combina-
tion of straight dismatching constraints with ordered resolution. However, our calculus is by far more refined.
The constraints are already considered in order to determine maximal literals, they are an inherent part of our
notion of redundancy and normalization and inheritance is built into the inference rules. Therefore, we pro-
vide here a separate proof of completeness that follows the classical superposition completeness recipe without
constraints [BG94].
Definition 15 (Orderings with Constraints). Let ≺ be an ordering on ground atoms. A literal A is called
(strictly) maximal in a clause (C ∨ A; π) if and only if there exists a solution δ of π such that for all literals B
in C, Bδ  Aδ (Bδ ≺ Aδ).
Definition 16 (Selection). A selection function assigns to a clause Γ → ∆ a possibly empty subset of Γ. For a
clause C and selection function sel, the literals in sel(C) are called selected.
Definition 17 (Redundancy). For a given ordering on ground clauses ≺, a constrained clause (C; π) is redundant
w.r.t. N if for every D ∈ G((C; π)), there exist D1 , . . . , Dn ∈ G(N )≺D with D1 , . . . , Dn  D.




  Note that a constrained clause (C; π) with a tautology C or an unsolvable constraint π is redundant. As
usual, our notion of redundancy does not cover cases such as A(x) subsuming A(f (x)). It can be extended to
consider such matches by extending the ordering [BG94] or the notion of fairness [NR01].

Definition 18 (Constrained Subsumption). A clause (C; π) subsumes (D; π′) if there is a substitution σ such that Cσ ⊂ D and DV(π′) ⊆ DV(πσ).

Lemma 19. If (C; π) subsumes (D; π 0 ), (D; π 0 ) is redundant in N ∪ {(C; π)}.

Proof. Let Dδ ∈ G((D; π 0 )). Then, δ is a solution of πσ and hence, Cσδ ∈ G((C; π)). Since Cσδ ⊂ Dδ,
Cσδ ∈ G((C; π))≺Dδ and Cσδ  Dδ. Therefore, (D; π 0 ) is redundant in N ∪ {(C; π)}.

Definition 20 (Constrained Condensation). A constrained clause (C; π) can be condensed if there exists a
substitution σ such that (Cσ; πσ) with duplicate literals removed subsumes (C; π). (Cσ; πσ) is then called the
condensation of (C; π).

Definition 21 (Constrained Subsumption Resolution). Let (C ∨ A; π) and (D ∨ ¬A0 ; π 0 ) be two constrained
clauses. Let σ be the matcher Aσ = A0 and let (C; π)σ subsume (D; π 0 ), then (D; π 0 ) is called a subsumption
resolvent of the clauses (C ∨ A; π) and (D ∨ ¬A0 ; π 0 ).

   In the presence of (D; π 0 ) the clause (D∨¬A0 ; π 0 ) is redundant. Therefore, in practice, subsumption resolution
directly replaces (D ∨ ¬A0 ; π 0 ) with (D; π 0 ).

Definition 22 (Constrained Resolution).

                                  (Γ1 → ∆1 , A ; π1 )      (Γ2 , B → ∆2 ; π2 )
                                                                                 , if
                                     ((Γ1 , Γ2 → ∆1 , ∆2 )σ ; (π1 ∧ π2 )σ↓)

1.   σ = mgu(A, B);
2.   (π1 ∧ π2 )σ↓ is solvable;
3.   Aσ is strictly maximal in (Γ1 → ∆1 , A; π1 ∧ π2 )σ and sel(Γ1 → ∆1 , A) = ∅;
4.   B ∈ sel(Γ2 , B → ∆2 ) or sel(Γ2 , B → ∆2 ) = ∅ and ¬Bσ maximal in (Γ2 , B → ∆2 ; π1 ∧ π2 )σ.
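The rule above can be sketched operationally. This is a hedged sketch, not the paper's implementation: terms and atoms are tuples (symbol, args…) with variables as strings, a constrained clause is (negative atoms, positive atoms, constraint), the unifier is a standard Robinson-style one without occurs check, the unifier is applied only to constraint left-hand sides (right-hand sides contain only fresh constraint variables), and the full solvability test of condition 2 is replaced by the obvious special case where a left-hand side has become an instance of its right-hand side; maximality and selection (conditions 3 and 4) are left out.

```python
def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Most general unifier as a dict, or None (occurs check omitted)."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        s[a] = b
        return s
    if is_var(b):
        s[b] = a
        return s
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def apply_sub(t, s):
    t = walk(t, s)
    if is_var(t):
        return t
    return (t[0],) + tuple(apply_sub(a, s) for a in t[1:])

def instance_of(t, pat):
    """Syntactic instance check against a straight term pat."""
    if is_var(pat):
        return True
    return (not is_var(t) and t[0] == pat[0]
            and all(instance_of(a, p) for a, p in zip(t[1:], pat[1:])))

def resolve(c1, i, c2, j):
    """Resolvent of positive literal i of c1 with negative literal j of c2."""
    neg1, pos1, pi1 = c1
    neg2, pos2, pi2 = c2
    s = unify(pos1[i], neg2[j])
    if s is None:
        return None
    pi = [(apply_sub(l, s), r) for l, r in pi1 + pi2]
    if any(instance_of(l, r) for l, r in pi):
        return None                 # constraint is clearly unsolvable
    neg = [apply_sub(a, s) for a in neg1 + neg2[:j] + neg2[j + 1:]]
    pos = [apply_sub(a, s) for a in pos1[:i] + pos1[i + 1:] + pos2]
    return (neg, pos, pi)
```

For example, resolving → P(a) with P(x) → under x ≠ b yields the empty clause under a ≠ b, while under x ≠ a the inference is blocked because a ≠ a is unsolvable.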

Definition 23 (Constrained Factoring).

                                               (Γ → ∆, A, B ; π)
                                                                     , if
                                              ((Γ → ∆, A)σ; πσ↓)

1.   σ = mgu(A, B);
2.   πσ↓ is solvable;
3.   sel(Γ → ∆, A, B) = ∅;
4.   Aσ is maximal in (Γ → ∆, A, B; π)σ

Lemma 24 (Soundness). Constrained Resolution and Factoring are sound.

Proof. Let (Γ1 , Γ2 → ∆1 , ∆2 )σδ be a ground instance of ((Γ1 , Γ2 → ∆1 , ∆2 )σ; (π1 ∧ π2 )σ). Then, δ is a solution
of (π1 ∧ π2 )σ and σδ is a solution of π1 and π2 . Hence, (Γ1 → ∆1 , A)σδ and (Γ2 , B → ∆2 )σδ are ground
instances of (Γ1 → ∆1 , A; π1 ) and (Γ2 , B → ∆2 ; π2 ), respectively. Because Aσδ = Bσδ, if (Γ1 → ∆1 , A)σδ and
(Γ2 , B → ∆2 )σδ are satisfied, then (Γ1 , Γ2 → ∆1 , ∆2 )σδ is also satisfied. Therefore, Constrained Resolution is
sound.
   Let (Γ → ∆, A)σδ be a ground instance of ((Γ → ∆, A)σ; πσ). Then, δ is a solution of πσ and σδ is a solution
of π. Hence, (Γ → ∆, A, B)σδ is a ground instance of (Γ → ∆, A, B; π). Because Aσδ = Bσδ, if (Γ → ∆, A, B)σδ
is satisfied, then (Γ → ∆, A)σδ is satisfied. Thus, Constrained Factoring is sound.

Definition 25 (Saturation). A constrained clause set N is called saturated up to redundancy, if for every
inference between clauses in N the result (R; π) is either redundant in N or G((R; π)) ⊆ G(N ).




Definition 26 (Partial Minimal Model Construction). Given a constrained clause set N , an ordering ≺ and a
selection function sel, we construct an interpretation IN for N , called a partial model, inductively on the ground
instances of clauses from N as follows:
                          I_C := ⋃_{D ∈ G(N)≺C} δ_D

                          δ_C := {A}   if C = Γ → ∆, A with A strictly maximal, sel(C) = ∅ and I_C ⊭ C
                          δ_C := ∅     otherwise

                          I_N := ⋃_{C ∈ G(N)} δ_C

A clause C with δ_C ≠ ∅ is called productive. The smallest ground instance C ∈ G(N) with I_N ⊭ C, if it exists, is called the minimal false clause. If no minimal false clause exists, then I_N is a model of N.
  Recall that a ground instance of a constrained clause is a standard clause (Definition 8).
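For ground clauses, the construction of Definition 26 can be sketched directly. This is a simplified sketch with assumptions that are not from the paper: a ground clause is a pair (neg, pos) of atom lists, atoms are compared via a key function, the clause ordering is approximated by comparing descending multisets of atom keys (a simplification of the usual multiset extension), and the selection function is empty.

```python
def partial_model(clauses, atom_key):
    """Build I_N bottom-up over the ground clauses in ascending order."""
    interp = set()

    def clause_key(c):
        neg, pos = c
        # descending multiset of atom keys approximates the clause ordering
        return sorted(map(atom_key, neg + pos), reverse=True)

    for neg, pos in sorted(clauses, key=clause_key):
        false_now = (all(a in interp for a in neg)
                     and not any(a in interp for a in pos))
        if false_now and pos:
            m = max(pos, key=atom_key)
            if pos.count(m) == 1:    # strictly maximal positive literal
                interp.add(m)        # the clause is productive
    return interp
```

For the Horn set {→ A, A → B, B, C → D} with A ≺ B ≺ C ≺ D, the first two clauses are productive and the third is true, giving the partial model {A, B}.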
Lemma 27 (Completeness). Let N be a constrained clause set saturated up to redundancy by ordered resolution with selection. Then N is unsatisfiable if and only if (; ⊤) ∈ N.
Proof. Assume N is unsatisfiable, but (; ⊤) ∉ N. For the partial model I_N, there exists a minimal false clause Cσ ∈ G((C; π)) for some (C; π) ∈ N.
   Cσ is not productive, because otherwise I_N ⊨ Cσ. Hence, either sel(C) ≠ ∅ or no positive literal in Cσ is strictly maximal. Assume C = Γ2, B → ∆2 with B ∈ sel(C) or ¬Bσ maximal. Then, Bσ ∈ I_{Cσ} and there exists a ground instance (Γ1 → ∆1, A)τ = Dτ ≺ Cσ of some clause (D; π′) ∈ N which produces Aτ = Bσ. Therefore, there exist a ρ = mgu(A, B) and a ground substitution δ such that Cσ = Cρδ and Dτ = Dρδ. Since ρδ = σ is a solution of π and π′, δ is a solution of (π ∧ π′)ρ. Under these conditions, constrained resolution can be applied to (Γ1 → ∆1, A; π′) and (Γ2, B → ∆2; π). Their resolvent (R; πR) = ((Γ1, Γ2 → ∆1, ∆2)ρ; (π ∧ π′)ρ) is either redundant in N or G((R; πR)) ⊆ G(N). Its ground instance Rδ is false in I_N and Rδ ≺ Cσ. If (R; πR) is redundant in N, there exist C1, …, Cn in G(N)≺Rδ with C1, …, Cn ⊨ Rδ. Because Ci ≺ Rδ ≺ Cσ, I_N ⊨ Ci and hence I_N ⊨ Rδ, which contradicts I_N ⊭ Rδ. Otherwise, if G((R; πR)) ⊆ G(N), then Rδ ∈ G(N), which contradicts Cσ being the minimal false clause.
   Now, assume sel(C) = ∅ and C = Γ → ∆, B with Bσ maximal. Then, C = Γ → ∆′, A, B with Aσ = Bσ. Therefore, there exist a ρ = mgu(A, B) and a ground substitution δ such that Cσ = Cρδ and ρδ is a solution of π. Hence, δ is a solution of πρ. Under these conditions, constrained factoring can be applied to (Γ → ∆′, A, B; π). The result (R; πR) = ((Γ → ∆′, A)ρ; πρ) is either redundant in N or G((R; πR)) ⊆ G(N). Its ground instance Rδ is false in I_N and Rδ ≺ Cσ. If (R; πR) is redundant in N, there exist C1, …, Cn in G(N)≺Rδ with C1, …, Cn ⊨ Rδ. Because Ci ≺ Rδ ≺ Cσ, I_N ⊨ Ci and hence I_N ⊨ Rδ, which contradicts I_N ⊭ Rδ. Otherwise, if G((R; πR)) ⊆ G(N), then Rδ ∈ G(N), which contradicts Cσ being the minimal false clause.
   Therefore, if (; ⊤) ∉ N, no minimal false clause exists and N is satisfiable.
   As an example for an application of the calculus we consider the following puzzle [Pel86]: “Someone who
lives in Dreadsbury Mansion killed Aunt Agatha. Agatha (a), the butler (b), and Charles (c) live in Dreadsbury
Mansion, and are the only people who live therein.”
                 [1 : Inp]                                             ( → K(a, a), K(b, a), K(c, a) ; >)

“A killer always hates his victim, and is never richer than his victim.”

                 [2 : Inp]                                     (K(x, y) → H(x, y) ; >)
                 [3 : Inp]                             (K(x, y), R(x, y) → ; >)

“Charles hates no one that Aunt Agatha hates. Agatha hates everyone except the butler.” The compact representation of this exception results in a non-trivial straight dismatching constraint.

                 [4 : Inp]                             (H(a, x), H(c, x) → ; >)
                 [5 : Inp]                                             ( → H(a, x) ; x 6= b)




“The butler hates everyone not richer than Aunt Agatha. The butler hates everyone Agatha hates.”

                [6 : Inp]                                                 ( → R(x, a), H(b, x) ; >)
                [7 : Inp]                                          (H(a, x) → H(b, x) ; >)

“No one hates everyone. Agatha is not the butler. Therefore Agatha killed herself.”

                [8 : Inp]                (H(x, a), H(x, b), H(x, c) → ; >)
                [9 : Inp]                                          (K(a, a) → ; >)
               [10 : SubRes : 1, 9]                                       ( → K(b, a), K(c, a) ; >)
               [11 : Res : 5, 4]                                   (H(c, x) → ; x 6= b)
               [12 : Res : 5, 7]                                          ( → H(b, x) ; x 6= b)
               [13 : Res : 2, 11]                                  (K(c, x) → ; x 6= b)
               [14 : SubRes : 10, 13]                                     ( → K(b, a) ; >)
               [15 : Res : 14, 3]                                   (R(b, a) → ; >)
               [16 : Res : 6, 15]                                         ( → H(b, b) ; >)
               [17 : Res : 16, 8]                     (H(b, a), H(b, c) → ; >)
               [18 : SubRes : 17, 12]                               (H(b, c) → ; >)
               [19 : SubRes : 18, 12]                                     ( ; >)


4   Operations on Constraints
In this section we consider implementation aspects of constraints including the representation of constraints, the
solvability test and operations on constraints.

Straight Term Encoding
Looking again at constrained resolution, factoring, and normalization, note that the constraint-terms of the
inference are all subterms of the constraints of the parent clauses. This means that throughout a saturation,
every straight constraint term is a subterm of the constraints in the input set. Therefore, in an implementation
we can store the input constraints separately and represent atomic constraints as just a variable and a pointer
to the respective subterm.
    Furthermore, by adding for each n-ary function symbol f the unary functions f1 , . . . , fn and a constant symbol
f0 , we can encode and store any straight term as an array of function symbols terminated by a constant symbol.
For example, f (x, f (a, y)) and f (x, f (y, z)) become f2 , f1 , a0 and f2 , f0 , respectively.
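The encoding just described can be sketched as follows, with the indexed symbols f_i represented as (symbol, position) pairs; the tuple term representation and string variables are assumptions of this sketch.

```python
def is_var(t):
    return isinstance(t, str)

def encode(t):
    """Encode a straight term as a list of (symbol, argument position)
    pairs, terminated by a 0-indexed symbol."""
    out = []
    while True:
        nonvar = [(i, a) for i, a in enumerate(t[1:], start=1)
                  if not is_var(a)]
        if not nonvar:               # all arguments are variables: terminal
            out.append((t[0], 0))
            return out
        (i, sub), = nonvar           # straightness: exactly one such argument
        out.append((t[0], i))
        t = sub
```

This reproduces the example above: f(x, f(a, y)) encodes to f2, f1, a0 and f(x, f(y, z)) to f2, f0.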

Sorting
All of the operations on constraints can be applied to arbitrary constraints, but most of them become significantly faster if the constraints are sorted. Given total orderings on X and F, we sort normal atomic constraints in ascending order, where x ≠ [l] < y ≠ [r] if x precedes y, or x = y and the encoding [l] is lexicographically smaller than the encoding [r].
                              x 6= [l]     < y 6= [r]         if    x  except for exactly the one σ where both xσ and t have the same top function symbol. In that
case, normalization reduces the size of t by removing the top symbol of t. This means that in total every non-
variable position in the straight right-hand terms of the constraint is considered exactly once for each σ ∈ ΘΣ  x.
Solvability can therefore be decided in O(|F|size(π)).
   If the subconstraints are sorted, the solvability check can be computed in-place, because each recursive call
on some πx and πσ↓ applies to non-overlapping and consecutive sections in the sorted constraint. Recall that
right-hand sides of constraints do not share any variables. Now, looking at the overall recursive derivation tree generated along the proof of Lemma 14, the following invariant holds: each subterm of the right-hand sides of the initial constraint occurs at most once as a top-level term on a right-hand side in the overall derivation tree. Solvability of a sorted constraint π can therefore be decided independently of the size of F, in O(size(π)).
   Furthermore, we can use intermediate results of the solvability check to simplify the constraint. If πx{x ↦ t} has no solutions for some straight term t but πx{x ↦ t}↓ ≠ ⊥, we can add x ≠ t to π (Corollary 29), which replaces the sub-constraints in πx{x ↦ t}↓ by Definition 5.5.

Example
Consider the constrained clause set N consisting of
                                        (P(x, x) → ; x ≠ f(a))
                                        ( → P(f(y), z) ; y ≠ f(f(v)) ∧ z ≠ f(f(a))).
Without the constraints, N is unsatisfiable because we could derive the empty clause using the unifier
                                 σ = mgu(P(x, x), P(f(y), z)) = {x ↦ f(y), z ↦ f(y)}.
Instead, we use σ to analyze the constraints.
                                             yσ 6= f (f (v)) ∧ zσ 6= f (f (a)) ∧ xσ 6= f (a)
                                        = y 6= f (f (v)) ∧ f (y) 6= f (f (a)) ∧ f (y) 6= f (a)
                                       ⇒ y 6= f (f (v)) ∧ y 6= f (a) ∧ y 6= a
                                  sorted y 6= a ∧ y 6= f (a) ∧ y 6= f (f (v))
For the solvability check, we use the signature {a/0, f /1} given by the input.
                                   (y 6= a){y 7→ a} or (y 6= f (a) ∧ y 6= f (f (v))){y 7→ f (y)}.
                              ⇒ a 6= a or y 6= a ∧ y 6= f (v).
                              ⇒ ⊥ or [(y 6= a){y 7→ a} or (y 6= f (v)){y 7→ f (y)}].
                              ⇒ ⊥ or [a 6= a or f (y) 6= f (v)].
                              ⇒ ⊥ or [⊥ or ⊥].
                              ⇒ ⊥.
Since the constraint is unsolvable, no resolution is performed and N is saturated. If the constraint were solvable,
for example, without the y 6= a constraint, we could replace y 6= f (a) ∧ y 6= f (f (v)) with y 6= f (v) since
(y 6= f (a) ∧ y 6= f (f (v))){y 7→ f (y)} is unsolvable.

Many-sorted Signature
We can further improve the chance to identify unsolvable constraints by using many-sorted signatures. Then,
the shallow instantiations Θ_x^Σ only range over the function symbols in the sort of variable x. In array theories,
for example, data, indices, and arrays are usually modeled to be distinct from each other. A constraint on an
index-variable is therefore already unsolvable if just all ground terms of the index-sort are excluded. Without
sort information, such a constraint is still solvable.




   An algorithm to extract a many-sorted signature with a maximal number of sorts was introduced in [Cla03].
At first, all function and predicate symbols start with different sorts. Then, compute equivalence classes of sorts
that should be equal for the problem to be well-sorted: Wherever a function symbol is the argument of another
function or predicate, their result and argument sorts are the same and for each variable in each clause, the sorts
of its occurrences are also the same. Using union-find, the whole algorithm runs in essentially linear time.
   For first-order logic without equality, the resulting sorts are always monotone [CLS11], which means that the
unsorted and many-sorted versions of the same problem are equisatisfiable. As a consequence, the resolution
calculus can continue to use the unsorted signature while the constraint operations use the corresponding many-
sorted signature.
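The extraction algorithm can be sketched with a textbook union-find; the clause and term representations (literals as tuples (predicate, args…), variables as strings) are assumptions of this sketch, not of [Cla03].

```python
class UnionFind:
    """Near-linear union-find with path halving."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def is_var(t):
    return isinstance(t, str)

def merge_sorts(term, clause_id, uf):
    """Merge each argument sort with the sort of the term filling it."""
    for i, arg in enumerate(term[1:], start=1):
        slot = ("arg", term[0], i)
        if is_var(arg):
            # all occurrences of a clause variable share one sort
            uf.union(slot, ("var", clause_id, arg))
        else:
            uf.union(slot, ("result", arg[0]))
            merge_sorts(arg, clause_id, uf)

def extract_sorts(clauses):
    """clauses: lists of literals, literals as (predicate, args...)."""
    uf = UnionFind()
    for ci, clause in enumerate(clauses):
        for literal in clause:
            merge_sorts(literal, ci, uf)
    return uf
```

For the single clause P(f(x), x), the first argument sort of P merges with the result sort of f, while the second argument sort of P merges with the argument sort of f through the shared variable x, yielding two distinct sorts.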


Subset Check

Another important advantage of straight constraints is the ability to efficiently check whether the solutions of
one constraint are a subset of the solutions of another constraint. The subset check allows common redundancy
eliminations such as subsumption, condensation, and subsumption resolution to account for constraints. For
example, (P (x); x 6= a) subsumes (P (x), Q(y); x 6= a ∧ y 6= b), while (Q(x); x 6= a) does not.
   If πσ↓ = ⊤, the subset condition on the solutions is trivially fulfilled, but even for general constraints we can decide the subset relation.

Lemma 28. Let π and π 0 be solvable normal constraints. Then DV (π) ⊆ DV (π 0 ) if and only if π{x 7→ t} has no
solutions for every subconstraint x 6= t in π 0 .

Proof. “⇒”. Assume there exists a subconstraint x 6= t in π 0 , such that δ is a solution of π{x 7→ t}. Then,
{x 7→ t}δ is a solution of π. Because x{x 7→ t}δ = tδ is an instance of t, {x 7→ t}δ is not a solution of π 0 . This
contradicts the assumption that DV (π) ⊆ DV (π 0 ).
   “⇐”. Assume DV (π) 6⊆ DV (π 0 ). Let δ be a solution of π, but not of π 0 . There exists a subconstraint x 6= t
of π 0 such that xδ is an instance of t. Hence, there is a substitution σ such that δ = {x 7→ t}σ. Then, σ is a
solution of π{x 7→ t}. This contradicts the assumption that π{x 7→ t} has no solutions.


Corollary 29. A constraint π is equivalent to π ∧ x 6= t, if π{x 7→ t} has no solution.

Proof. DV (π ∧ x 6= t) ⊆ DV (π) holds trivially and DV (π) ⊆ DV (π ∧ x 6= t) follows from Lemma 28.


   Note that if π was fully simplified according to Corollary 29, then π{x 7→ t} has no solutions if and only if
π{x 7→ t}↓= ⊥. This means π{x 7→ t} is unsolvable if there exists a constraint x 6= s ∈ π such that t is an
instance of s. If both π and π 0 are also sorted, we can therefore implement the subset check similar to a merge
in merge-sort in O(size(π) + size(π 0 )).
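Under the simplification of Corollary 29, the subset check reduces to a per-atom instance test, sketched here without the merge-style optimization for sorted constraints; the representation (atoms as (variable, term) pairs, terms as tuples, variables as strings) is an assumption of this sketch.

```python
def is_var(t):
    return isinstance(t, str)

def instance_of(t, s):
    """Is every instance of t an instance of the straight term s?"""
    if is_var(s):
        return True
    return (not is_var(t) and t[0] == s[0]
            and all(instance_of(a, b) for a, b in zip(t[1:], s[1:])))

def solutions_subset(pi, pi2):
    """solutions(pi) subset of solutions(pi2), assuming pi is fully
    simplified (Corollary 29): every atom x != t of pi2 must already be
    enforced by some atom x != s of pi with t an instance of s."""
    return all(any(x == y and instance_of(t, s) for y, s in pi)
               for x, t in pi2)
```

On the subsumption example above, the solutions of x ≠ a ∧ y ≠ b are a subset of the solutions of x ≠ a, but not conversely.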


Ordering

As the name ordered resolution suggests, the ordering on atoms plays a key role in the efficiency of the calculus. Just like their unconstrained counterparts, the constrained resolution and factoring rules (Definitions 22 and 23) are only applied if the unified literals are (strictly) maximal. This means that we can avoid inferences if we can show that these literals are not maximal.
   In general, we can continue to use any traditional ordering by ignoring the constraints. If an atom is not
(strictly) maximal in the clausal part, it is not (strictly) maximal for any ground instance including the ones
solving the constraint. On the other hand, the constraint could be excluding all ground instances where the
atom is (strictly) maximal. For example, the lexicographic path ordering (LPO) can be extended with straight
dismatching constraints. Similar extensions are possible for the RPO and KBO.
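For reference, here is a minimal sketch of the standard (unconstrained) LPO on ground terms, the base ordering that the constrained transition system of Definition 30 instantiates down to; the tuple term representation and the precedence map are assumptions of this sketch.

```python
def lpo_less(s, t, prec):
    """s <_lpo t for ground terms s, t; prec maps symbols to weights."""
    # Case 1: s is below or equal to some argument of t.
    if any(s == ti or lpo_less(s, ti, prec) for ti in t[1:]):
        return True
    # Case 2: smaller head symbol, all arguments of s below t.
    if prec[s[0]] < prec[t[0]]:
        return all(lpo_less(si, t, prec) for si in s[1:])
    # Case 3: equal heads, lexicographic comparison of the arguments,
    # with every argument of s below t.
    if s[0] == t[0]:
        for si, ti in zip(s[1:], t[1:]):
            if si == ti:
                continue
            return lpo_less(si, ti, prec) and all(
                lpo_less(sj, t, prec) for sj in s[1:])
        return False
    return False
```

With precedence a ≺ b ≺ f, for instance, a ≺lpo b ≺lpo f(a) ≺lpo f(b) holds, matching the ground comparisons the constrained rules reduce to after instantiation.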

Definition 30 (LPO with Constraints). Let s, t be terms, π a constraint and F finite. We define the constrained lexicographic path ordering with the following transition system on pairs (s ≺ t; π):

    (f(s1, …, sn) ≺ g(t1, …, tm) ; π) ⇒ ⋁_{1≤i≤m} (f(s1, …, sn) ⪯ ti ; π)           where g ≺ f
    (f(s1, …, sn) ≺ g(t1, …, tm) ; π) ⇒ ⋀_{1≤i≤n} (si ≺ g(t1, …, tm) ; π)           where f ≺ g
    (f(t1, …, ti−1, si, …, sn) ≺ f(t1, …, tn) ; π) ⇒ ⋀_{i<j≤n} (sj ≺ f(t1, …, tn) ; π) ∧ (si ≺ ti ; π)
    (x ≺ g(t1, …, tm) ; π) ⇒ ⊤                                                      where x ∈ vars(g(t1, …, tm))
    (s ≺ x ; π) ⇒ ⋀_{σ∈Θ_x^Σ} (sσ ≺ xσ ; πσ↓)                                       where x ∈ lvars(π)
    (x ≺ t ; π) ⇒ ⋀_{σ∈Θ_x^Σ} (xσ ≺ tσ ; πσ↓)                                       where x ∈ lvars(π)
    (s ≺ t ; ⊥) ⇒ ⊤


Lemma 31. Constrained lexicographic path ordering terminates.
Proof. The first three rules each decrease the depth of s or t, while the constraint remains the same. The fourth and the last rule are trivially terminating. For the fifth and sixth rules, size(πσ↓) < size(π). Therefore, the constrained lexicographic path ordering terminates.
   Note that the first four rules directly follow the definition of ≺lpo and therefore deciding (s ≺ t; ⊤) is the same as computing s ≺lpo t. The fifth and sixth rules reuse the principal idea of the solvability check to instantiate π until it normalizes to ⊤ or ⊥. In the latter case, the last rule applies to remove instantiations that are excluded by the constraint. For example, consider (a ≺ x ; x ≠ a) with precedence a ≺ b ≺ f ≺ g. The only ground case where a ⊀lpo x is (a ≺ a ; a ≠ a), but the constraint a ≠ a is unsolvable. Therefore, a ≺lpo x under the constraint x ≠ a.
   Furthermore, note that (a ≺ b ; b ≠ a) ⇒∗ ⊤ directly implies that also (a ≺ f(x) ; f(x) ≠ a) ⇒∗ ⊤ and (a ≺ g(x, y) ; g(x, y) ≠ a) ⇒∗ ⊤. In general, we need to check only the minimal solution of πx with respect to ≺lpo in the fifth rule. Analogously, the maximal solution of πx suffices for the sixth rule, if it exists. A solution δ of πx is minimal (maximal) if for every solution δ′ of πx, xδ ⪯lpo xδ′ (xδ ⪰lpo xδ′). For example, the constraint x ≠ a ∧ x ≠ f(b) ∧ x ≠ f(f(u)) has under the signature a ≺ b ≺ f the minimal solution {x ↦ b} and the maximal solution {x ↦ f(a)}, but no maximal solution exists under the signature a ≺ b ≺ f ≺ g. If there is no maximal solution, arbitrarily large terms are solutions of πx, and we can immediately conclude (x ≺ t; π) ⇒ ⊥ if x ∉ vars(t). Therefore, we can refine these rules to

               (s ≺ x; π) ⇒ (sδ ≺ xδ; πδ↓)                where x ∈ lvars(π) and δ is the minimal solution of πx
               (x ≺ t ; π) ⇒ (xδ ≺ tδ; πδ↓)               where x ∈ lvars(π) and δ is the maximal solution of πx

   Just like the solvability check, minimal and maximal solutions can be computed in linear time. Unless an
earlier case allows an early termination, the previous rules would eventually generate all cases for solutions of x
including the minimal and maximal. Generating the cases alone corresponds to the work required to find the
minimal or maximal solution.
Lemma 32. (s ≺ t; π) ⇒∗ ⊤ if and only if sδ ≺lpo tδ for every solution δ of π.
Proof. “⇒”: By induction on the derivation (s ≺ t; π) ⇒∗ >.
    The first four rules follow directly from the definition of ≺lpo. For example, consider the first rule. Let (f(s1, …, sn) ≺ g(t1, …, tm); π) ⇒ ⋁_{1≤i≤m} (f(s1, …, sn) ⪯ ti; π) ⇒∗ ⊤. Then, (f(s1, …, sn) ⪯ ti; π) ⇒∗ ⊤ for at least one 1 ≤ i ≤ m. By the inductive hypothesis, f(s1, …, sn)δ ⪯lpo ti δ for every solution δ of π. Therefore, by the definition of ≺lpo, f(s1, …, sn)δ ≺lpo g(t1, …, tm)δ for every solution δ of π.
    For the fifth rule, let (s ≺ x; π) ⇒ ⋀_{σ∈Θ_x^Σ} (sσ ≺ xσ; πσ↓) ⇒∗ ⊤. Assume there is a solution δ of π such that sδ ⊀lpo xδ. Then, xδ = f(y1, …, yn)δ′ for some substitution δ′ and there is a σ = {x ↦ f(y1, …, yn)} ∈ Θ_x^Σ.




Since δ′ is a solution of πσ↓, sσδ′ ⊀lpo xσδ′ contradicts the inductive hypothesis on (sσ ≺ xσ; πσ↓) ⇒∗ ⊤. The sixth rule is analogous. The last rule is trivial, as ⊥ has no solutions.
   “⇐”: Let (s ≺ t; π) ⇏∗ ⊤, i.e., there is a normal form (s ≺ t; π) ⇒∗ F, where F is either ⊥ or a formula consisting of conjunctions and disjunctions of pairs (s′ ≺ t′; π′) to which no rule applies. We prove by induction on the derivation (s ≺ t; π) ⇒∗ F that there is a solution δ of π such that sδ ⊀lpo tδ.
   Again, the first four rules follow directly from the definition of ≺lpo. For example, consider the second rule. Let (f(s1, …, sn) ≺ g(t1, …, tm); π) ⇒ ⋀_{1≤i≤n} (si ≺ g(t1, …, tm); π) ⇏∗ ⊤. Then, (si ≺ g(t1, …, tm); π) ⇏∗ ⊤ for at least one 1 ≤ i ≤ n. By the inductive hypothesis, si δ ⊀lpo g(t1, …, tm)δ for a solution δ of π. Therefore, by the definition of ≺lpo, f(s1, …, sn)δ ⊀lpo g(t1, …, tm)δ for that solution δ of π.
   For the fifth rule, let (s ≺ x; π) ⇒ ⋀_{σ∈Θ_x^Σ} (sσ ≺ xσ; πσ↓) ⇏∗ ⊤. Then, (sσ ≺ xσ; πσ↓) ⇏∗ ⊤ for at least one σ ∈ Θ_x^Σ. By the inductive hypothesis, sσδ ⊀lpo xσδ for a solution δ of πσ↓. Therefore, sσδ ⊀lpo xσδ for the solution σδ of π. The sixth rule is analogous, and the last rule contradicts (s ≺ t; π) ⇏∗ ⊤.
    Constrained LPO extends to atoms and literals in the usual way.
Corollary 33. Let (C ∨ E; π) be a constrained clause. If (E ≺ L; π) ⇒∗ > ((E  L; π) ⇒∗ >) for some literal
L in C, then E is not (strictly) maximal in (C ∨ E; π) under ≺lpo .

5    Conclusion
We have presented a sound and complete ordered resolution calculus for first-order clauses with straight dismatch-
ing constraints. This constraint clause language enables a refinement method for our abstraction-refinement
calculus [TW15] by replacing an input clause by exactly two constrained clauses instead of a quadratically
((|Σ| − 1) · d(t), see Section 1) sized explicit representation using standard clauses [LM87]. At the same time,
a resolution prover only needs to store the input constraint terms once and can then store atomic constraints
in constant space. Aside from sorting constraints, all operations on constraints perform in linear time. This
is in contrast to general dismatching constraints where solvability is NP-hard [CL89]. SAT can be encoded as
solvability of dismatching constraints with only ground terms on the right-hand side. For example, the SAT
clause Ci = x1 ∨ ¬x2 ∨ x3 is encoded by the atomic constraint πi = fi (x1 , x2 , x3 ) 6= fi (false, true, false). Then
a general dismatching constraint π1 ∧ · · · ∧ πn as the result of the encoding of a clause set {C1 , . . . , Cn } with
signature Σ = {true, false} is solvable iff the clause set {C1 , . . . , Cn } is satisfiable. Currently, we are working on
the extension of classical decidable clause fragments to clauses with straight dismatching constraints.

Acknowledgments
The authors would like to acknowledge discussions with and helpful comments by Gábor Alagi. We further thank
our reviewers for their helpful and constructive comments.

References
[AW15] Gábor Alagi and Christoph Weidenbach. NRCL - a model building approach to the Bernays-Schönfinkel
       fragment. In Carsten Lutz and Silvio Ranise, editors, Frocos (Wroclaw), volume 9322 of Lecture Notes
       in Artificial Intelligence, pages 69–84. Springer, 2015.
[BG94]    Leo Bachmair and Harald Ganzinger. Rewrite-based equational theorem proving with selection and
          simplification. Journal of Logic and Computation, 4(3):217–247, 1994. Revised version of Max-Planck-
          Institut für Informatik technical report, MPI-I-91-208, 1991.
[CL89]    Hubert Comon and Pierre Lescanne. Equational problems and disunification. Journal of Symbolic
          Computation, 7(3/4):371–425, 1989.
[Cla03]   Koen Claessen. New techniques that improve MACE-style finite model finding. In Proceedings of the
          CADE-19 Workshop: Model Computation - Principles, Algorithms, Applications, 2003.
[CLS11] Koen Claessen, Ann Lillieström, and Nicholas Smallbone. Sort it out with monotonicity - translating be-
        tween many-sorted and unsorted first-order logic. In Nikolaj Bjørner and Viorica Sofronie-Stokkermans,
        editors, Automated Deduction - CADE-23 - 23rd International Conference on Automated Deduction,
        Wroclaw, Poland, July 31 - August 5, 2011. Proceedings, volume 6803 of Lecture Notes in Computer
        Science, pages 207–221. Springer, 2011.




[Com91] Hubert Comon. Disunification: A survey. In Jean-Louis Lassez and Gordon D. Plotkin, editors,
        Computational Logic - Essays in Honor of Alan Robinson, pages 322–359. The MIT Press, 1991.
[CZ92]    Ricardo Caferra and Nicolas Zabel. A method for simultaneous search for refutations and models by
          equational constraint solving. Journal of Symbolic Computation, 13(6):613–642, 1992.
[Kor13] Konstantin Korovin. Inst-Gen - A modular approach to instantiation-based automated reasoning. In
        Programming Logics - Essays in Memory of Harald Ganzinger, pages 239–270, 2013.
[LM87]    J.-L. Lassez and K. Marriott. Explicit representation of terms defined by counter examples. Journal
          of Automated Reasoning, 3(3):301–317, September 1987.

[NR01]    Robert Nieuwenhuis and Albert Rubio. Paramodulation-based theorem proving. In John Alan Robin-
          son and Andrei Voronkov, editors, Handbook of Automated Reasoning (in 2 volumes), pages 371–443.
          Elsevier and MIT Press, 2001.
[Pel86]   Francis Jeffry Pelletier. Seventy-five problems for testing automatic theorem provers. Journal of
          Automated Reasoning, 2(2):191–216, 1986.
[TW15] Andreas Teucke and Christoph Weidenbach. First-order logic theorem proving and model building via
       approximation and instantiation. In Carsten Lutz and Silvio Ranise, editors, Frontiers of Combin-
       ing Systems: 10th International Symposium, FroCoS 2015, Wroclaw, Poland, September 21-24, 2015,
       Proceedings, pages 85–100, Cham, 2015. Springer International Publishing.



