=Paper=
{{Paper
|id=Vol-1872/paper2
|storemode=property
|title=Self-Correcting Unsound Reasoning Agents
|pdfUrl=https://ceur-ws.org/Vol-1872/paper2.pdf
|volume=Vol-1872
|authors=Yannick Chevalier
|dblpUrl=https://dblp.org/rec/conf/lpnmr/Chevalier17
}}
==Self-Correcting Unsound Reasoning Agents==
Yannick Chevalier
Université Paul Sabatier, IRIT, Toulouse, France
yannick.chevalier@irit.fr
Abstract. This paper introduces a formal framework for relating learning and deduction in reasoning agents. Our goal is to capture imperfect reasoning as well as the progress, through introspection, towards a better reasoning ability. We capture the interleaving between these by a reasoning/deduction connection, and we show how this and related definitions apply to a setting in which agents are modeled by first-order logic theories. In this setting, we give a sufficient condition on the connection ensuring that, under fairness assumptions, the limit of introspection steps is a sound and complete deduction system. Under the same assumptions we prove that every falsehood is eventually refuted, hence the self-correction property.
Keywords: Reasoning Agents, Unsound Reasoning, First-Order Logic,
Ordered Resolution, Saturation
1 Introduction
We consider in this paper a framework in which an agent is described by statements in a logic. These statements reflect the agent's view of the world, and can either be true, when the statement is known to hold in all possible cases, or just satisfiable, in which case we say the statement is merely believed. While the knowledge must be a consistent set of statements, we allow the set of beliefs to be inconsistent, as sometimes belief just reflects uncertainty, lack of knowledge, or simply queries related to the knowledge. Reasoning is accordingly split into two parts:
– an introspective part, that aims at producing new known statements from
existing ones;
– a deductive part, that aims at evaluating believed statements to eliminate
those in contradiction with what is known to be true.
Specifically, we aim at capturing the dynamicity of reasoning, that is, the interplay between these two aspects of reasoning. This agency framework is actually very close to what is routine in automated deduction. A given (usually first-order) theory T is saturated by adding consequences of already known clauses to obtain a new, hopefully finite, equivalent theory T′. The rationale for the development of a saturation procedure is to obtain a theory T′ that can be used with a specific deduction strategy to answer queries.
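The saturation loop just described can be sketched in a few lines. This is a toy propositional version: the `consequences` function standing in for the actual inference rules is an illustrative placeholder (here, unit resolution), not the procedure studied in this paper.

```python
def saturate(theory, consequences, max_steps=1000):
    """Naive saturation loop: repeatedly add consequences of already
    known clauses until a fixpoint (or a step bound) is reached."""
    T = set(theory)
    for _ in range(max_steps):
        new = consequences(T) - T
        if not new:          # fixpoint: T is saturated
            return T
        T |= new
    return T

# Toy inference rules: unit resolution on clauses encoded as frozensets
# of signed integers (atom p = 1, its negation = -1).
def unit_resolution(T):
    out = set()
    for c in T:
        if len(c) != 1:
            continue
        (lit,) = c
        for d in T:
            if -lit in d:
                out.add(frozenset(d - {-lit}))
    return out

T0 = {frozenset({1}), frozenset({-1, 2}), frozenset({-2, 3})}
print(sorted(len(c) for c in saturate(T0, unit_resolution)))  # → [1, 1, 1, 2, 2]
```

The saturated set contains the derived units {2} and {3} in addition to the three initial clauses, and any query on those atoms can then be answered by lookup rather than by search.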
(CIMI Project FMS)
This work builds on the strong connection in first-order logic between a saturation procedure on the one hand, and the deduction strategy that can be proven complete for saturated sets of clauses on the other hand. For instance, linear ordered resolution is complete [2] for a set of clauses saturated up to redundancy under ordered resolution: given a saturated theory T and a ground clause C such that T ∪ ¬C is inconsistent, there exists a proof by ordered resolution of the empty clause in which each inference contains at most one clause from T. Similar results are obtained for superposition [2] and equational theories [6, 5], with the possible addition of a selection function [8, 11]. In some works [1, 9], the output of the saturation process is directly a decision procedure for a set of queries (in the mentioned cases, a decision procedure for ground queries). In other cases it is proven that if a saturation procedure terminates then a given proof strategy is efficient, and in many cases this proof strategy can be turned into a decision procedure for ground query problems [2, 10, 3, 9].
In contrast with these more recent approaches, the dynamicity of reasoning that we consider is closer to the unfailing completion procedure [5], which applies to equational theories and does not have a subsumption rule. In [5] it is proved that a transformation of the theory can be formulated as a transformation of proofs that reduces their complexity for a well-founded ordering. The absence of a subsumption rule, a rule after which proofs may become more complex, is thus essential in that approach. Technically we aim at obtaining a similar main theorem, i.e. that the presentations computed step by step are eventually complete for the refutation of any contradicted formula.
In the context of this paper, this means we are interested in the interleaving of saturation steps (called introspection above) and query evaluation steps (called deduction above). We introduce in this paper pairs (⊢i, ⊢d) of inference rules that reflect this situation. The relation ⊢i represents the inference rules employed during the saturation process, while the relation ⊢d represents the inference rules involved in the deduction.
Besides this technical justification, we note that we model a situation in which the lack of omniscience of agents is not the result of partial knowledge, but rather stems from an incomplete form of deduction. This approach was proposed by Konolige in [7], but does not seem to have been pursued. In [7] what we call an agent state is called a deduction structure. While Konolige focuses on the properties of a deduction structure, for example by introducing a possible worlds semantics, this paper focuses on the analysis of the possible evolutions of deduction structures and on the general criteria on pairs (⊢i, ⊢d) needed to ensure that a contradictory formula will always eventually (for relation ⊢i) be refuted (using ⊢d).
Outline. In this paper we first give basic definitions relating to first-order logic,
and saturation in Sect. 2. We then present agent models and their evolution in
Sect. 3, and we prove that under assumptions on the introspection and deduction processes, an agent eventually removes all beliefs contradicting her
knowledge. We give an example in Sect. 4 of how the processes of saturation and
deduction in first-order logic fit in that model. We conclude in Sect. 5 with the
possible extensions of this paper.
2 Basic Definitions
2.1 First-Order Logic
First-order objects. We assume the reader is familiar with the basics of first-
order logic, and just recall them here to define notations. Variables are denoted
x, y, z and decorations thereof, and constants are denoted c, etc. We let X and C be denumerable sets of variables and constants, respectively. Given a set F of function symbols with arity and a set S of nullary symbols, we denote by T(F, S) the set of terms over S, i.e. the least set containing S and closed under the application of functions in F. The set of terms is T(F, X ∪ C), and the set of ground terms is T(F, C).
Given a set P of predicate symbols with arity, if p ∈ P is of arity n and
t1, . . . , tn are terms then p(t1, . . . , tn) is an atom. A literal is either an atom A or its negation ¬A. We say the literal is positive in the first case and negative in the second. A clause is a multiset of literals denoting their disjunction. Accordingly, the empty clause ∅ is always false. A theory is a set, or conjunction, of clauses. An atom, literal, clause, or theory is ground whenever all its atoms are constructed on ground terms. By skolemization every first-order logic theory can be presented by an equisatisfiable set of clauses. Finally, we say a theory T is inconsistent, and write T |= ∅, if it has no model.
Unifier, resolution and factorization. A substitution σ is an idempotent function from variables to terms of finite support, i.e. σ(x) = x for all but a finite number of variables. Substitutions are extended homomorphically (using σ(f(t1, . . . , tn)) = f(σ(t1), . . . , σ(tn)) for function or predicate symbols and σ(c) = c for constants) to terms, atoms, literals, and clauses. As usual, we prefer
to use the infix notation, and write tσ in lieu of σ(t). A substitution σ is a unifier
of two terms t, t0 if tσ = t0 σ. It is a most general unifier of t, t0 if for every unifier
τ of t, t0 there exists a substitution θ such that σθ = τ . As is well known the most
general syntactic unifier of two terms is unique up to a renaming of variables.
We extend this notion to atoms in the usual way. Resolution and factorization (see Fig. 1) were formally defined by Robinson [12]. A deduction sequence from
T is a finite sequence (C1 , . . . , Cn ) of clauses such that each Ci is either in T
or the conclusion of a resolution or factorization inference with antecedents in
{C1 , . . . , Ci−1 }. Whenever one such sequence exists we write T ` Cn . Resolution
is complete for refutation, i.e. if T |= ∅ then T ` ∅. If Φ is a clause and T is
not inconsistent, we have that T |= Φ is equivalent to T, ¬Φ |= ∅, and thus by
completeness for refutation T, ¬Φ ` ∅. Trivially, if Φ is an atomic formula, i.e.
contains only one literal, this implies T ` Φ.
    Γ ∨ A    ¬B ∨ ∆                          Γ ∨ A ∨ B
    ----------------- σ = mgu(A, B)          ----------- σ = mgu(A, B)
       (Γ ∨ ∆)σ                               (Γ ∨ A)σ
       resolution                            factorisation

Fig. 1. Resolution and factorization inference rules
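To make the completeness-for-refutation claim concrete, here is a minimal ground (propositional) resolution prover; literals are encoded as signed integers and clauses as frozensets. This is an unrestricted sketch for illustration only, not the ordered resolution with redundancy discussed later in the paper.

```python
from itertools import combinations

def resolve(c, d):
    """All ground binary resolvents of clauses c and d (frozensets of
    integer literals; a negative int encodes a negated atom)."""
    return {frozenset((c - {l}) | (d - {-l})) for l in c if -l in d}

def refutable(theory):
    """Saturate under ground resolution; report whether the empty
    clause (the contradiction ∅) is derivable."""
    T = set(theory)
    while True:
        new = set()
        for c, d in combinations(T, 2):
            new |= resolve(c, d)
        if frozenset() in new | T:
            return True
        if new <= T:        # no new clause: refutation is impossible
            return False
        T |= new

# T = {p, ¬p ∨ q}; adding the negated query shows T |= q but T |≠ ¬q.
T = [frozenset({1}), frozenset({-1, 2})]
print(refutable(T + [frozenset({-2})]))   # query q: True
print(refutable(T + [frozenset({2})]))    # query ¬q: False
```

This mirrors the statement in the text: T |= Φ is established by deriving ∅ from T, ¬Φ.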
3 Agent Model
3.1 Modeling Reasoning
Agents are defined with respect to a logic L in which formulas are evaluated with respect to a set M of models. We let SL be the set of statements in L, P(SL) be the set of subsets of SL, and Con be the set of consistent subsets of SL. First we define the
introspection relation as a change of a representation of a theory T that yields a theory T′ in which truth in a given model is preserved.
Definition 1. (Introspection relation) An introspection relation ⊢i on the logic L is a truth-preserving relation on the powerset P(SL), that is, for all T, T′, if T ⊢i T′, then for all m ∈ M we have m |= T if and only if m |= T′.
The `i relation changes the world representation of an agent, presumably to
better it. We now introduce a deduction relation that uses a representation of
the world to reason about the truth of arbitrary statements in the language.
Definition 2. (Deduction relation) A deduction relation ⊢d on the logic L is a relation ⊆ P(SL) × SL that represents a sound proof procedure for L. Namely, if T ⊢d ϕ then for all m ∈ M, we have that m |= T implies m |= ϕ.
In addition to being truth preserving, an introspection relation should, in
order to be useful, reduce the difficulty of reasoning. We capture this intuition
as follows. Given T ∈ Con and ϕ ∈ L such that T |= ϕ, we measure the difficulty
of proving ϕ from T by the number of introspection steps needed to reach a set T′ ∈ Con such that T′ ⊢d ϕ.
Definition 3. (Distance to provability) Assume ⊢i ⊆ Con × Con is an introspection relation, T |= ϕ, and that ⊢d is a deduction relation. Then we let:

dist⊢i,⊢d(ϕ, T) = min{n | T ⊢i^n T′ and T′ ⊢d ϕ}
We aim to prove that introspection steps always make proving easier, or at least not more difficult. This requires reasoning on all possible proofs, as is the case for unfailing completion (see above). The usual orderings on proofs (for example, a well-founded ordering on the clauses occurring in the proof [2, 3]) are not strictly decreasing, and the distance provides an admittedly coarser ordering sufficient for our purpose.
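Since dist⊢i,⊢d(ϕ, T) is the length of a shortest ⊢i-path to a theory from which ϕ is deducible, it can be computed by breadth-first search whenever ⊢i is finitely branching and ⊢d is decidable. The sketch below assumes a toy setting (theories as finite sets of facts, deduction as mere membership); both relations are illustrative placeholders, not the paper's.

```python
from collections import deque
import math

def distance(T, phi, intro_steps, deduces):
    """dist(phi, T): minimal number of introspection steps from T to a
    theory T2 with T2 |-d phi, found by BFS over the |-i relation.
    intro_steps(T) yields the |-i successors of T; deduces(T, phi) is
    the decidable |-d check."""
    seen, queue = {T}, deque([(T, 0)])
    while queue:
        cur, n = queue.popleft()
        if deduces(cur, phi):
            return n
        for nxt in intro_steps(cur):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, n + 1))
    return math.inf   # phi is never deducible along any |-i path

# Toy instance: theories are frozensets of facts; one introspection
# step applies a single rule from `rules` (premise tuple -> conclusion).
rules = {("a",): "b", ("b",): "c"}
def intro_steps(T):
    for pre, post in rules.items():
        if set(pre) <= T and post not in T:
            yield frozenset(T | {post})
def deduces(T, phi):
    return phi in T   # deliberately weak deduction: membership only

print(distance(frozenset({"a"}), "c", intro_steps, deduces))  # → 2
```

Two introspection steps are needed before the weak deduction relation can establish "c", exactly the quantity Definition 3 measures.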
We capture the relation between introspection and deduction outlined previ-
ously by defining reasoning connections. These impose the constraints that can
be summarized as follows:
– a completeness constraint (first point in the definition below) imposing, when T |= ϕ, that at least one sequence of introspection steps leads to a theory T′ such that T′ ⊢d ϕ;
– a uniform not-going-back constraint imposing that whatever introspection step is taken from T, the distance from the resulting T′ to any formula ϕ is less than or equal to the distance from T to ϕ;
– a progress constraint stating that once a formula ϕ is chosen but not immediately provable, there exists an introspection step that reduces the distance to provability.
These points are stated formally in the following definition.
Definition 4. (Reasoning connection) A reasoning connection is a couple (⊢i, ⊢d) where ⊢i is an introspection relation, ⊢d is a deduction relation, and such that:
1. completeness:
∀ϕ ∈ L, ∀T ∈ Con, T |= ϕ ⇒ dist⊢i,⊢d(ϕ, T) < +∞
2. not-going-back:
∀ϕ ∈ L, ∀T, T′ ∈ Con, T ⊢i T′ ⇒ dist⊢i,⊢d(ϕ, T′) ≤ dist⊢i,⊢d(ϕ, T)
3. progress:
∀T ∈ Con, ∀ϕ ∈ L, 0 < dist⊢i,⊢d(ϕ, T) < +∞ ⇒ (∃T′, dist⊢i,⊢d(ϕ, T′) < dist⊢i,⊢d(ϕ, T) ∧ T ⊢i T′)
The role of Def. 4 is to provide one with simple conditions that are sufficient to
define reasoning agents in an arbitrary logic L who:
– start with imperfect reasoning yet are able to increase their reasoning power;
– and at the same time have a bounded deduction capacity as expressed by
the (presumably incomplete in general) deduction relation.
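On a finite family of theories, the not-going-back and progress conditions of Def. 4 can be checked mechanically once the distances are known. The following sketch does exactly that on a hand-built three-theory chain; all names and values are illustrative, not derived from the paper.

```python
import math

def check_connection(theories, intro, dist):
    """Check not-going-back and progress (Def. 4, points 2 and 3) on a
    finite family of theories, for one fixed formula phi.
    intro[T] is the set of |-i successors of T; dist[T] is the
    distance-to-provability of phi from T (both supplied by hand)."""
    for T in theories:
        for T2 in intro[T]:
            assert dist[T2] <= dist[T], "not-going-back violated"
        if 0 < dist[T] < math.inf:
            assert any(dist[T2] < dist[T] for T2 in intro[T]), \
                "progress violated"
    return True

# A three-theory chain T0 |-i T1 |-i T2 with distances 2, 1, 0 for phi:
intro = {"T0": {"T1"}, "T1": {"T2"}, "T2": set()}
dist = {"T0": 2, "T1": 1, "T2": 0}
print(check_connection(["T0", "T1", "T2"], intro, dist))  # → True
```

Completeness (point 1) is not checked here since it quantifies over all entailed formulas; on a finite example it reduces to requiring every dist value to be finite.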
3.2 Agent State and Trajectory
We now proceed to define an agent state. We consider a naive model of agent
whose beliefs are statements considered satisfiable, i.e. that were not proven
invalid. This choice of accepting all non-refuted statements transforms a sound but incomplete reasoning procedure into an unsound yet complete one, which will accept all valid entailments but may also accept unsatisfiable statements into the set of beliefs.
Definition 5. (Agent state) Let (⊢i, ⊢d) be a reasoning connection. A (⊢i, ⊢d)-agent state is a couple (T, B) where T ∈ Con and B ⊆ L, and for all ϕ ∈ B we have T ⊬d ¬ϕ.
In order to simplify notations in relation with this definition, we write Ref⊢d(T) for the set of statements ϕ that can be refuted from T given ⊢d. That is,

Ref⊢d(T) = {ϕ | T ⊢d ¬ϕ}
We present in this paper a first agent model that does not react to outside events nor interact with other agents. Accordingly, an agent evolves either by receiving a new acceptable belief or by changing her view of the world through introspection.
Definition 6. (Evolution of an agent) Let A = (T, B) be a (⊢i, ⊢d)-agent. An evolution of A is either:
– the reception of a new belief, provided ϕ ∉ Ref⊢d(T):
(T, B) −?ϕ→ (T, B ∪ {ϕ})
– an introspection step, where T ⊢i T′:
(T, B) −⊢i→ (T′, B \ Ref⊢d(T′))
We write (T, B) ⇝ (T′, B′) if there exists an evolution of (T, B) into (T′, B′).
We define trajectories as finite or infinite sequences of evolutions π = (πi = (Ti, Bi))i≥0. A trajectory is finite if the indices i are bounded by some n ∈ N, in which case it is said to be of length n; otherwise it is of infinite length. Finally, the domain of a trajectory π is the set of integers i for which πi is defined, and is denoted dom(π).
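A single evolution step of Def. 6 is easy to simulate in a toy setting where statements are signed integers, negation is sign flip, and ⊢d is plain membership. All choices below are illustrative simplifications of the definitions above, not the paper's first-order instantiation.

```python
def neg(phi):
    return -phi

def evolve(state, deduces, intro_step=None, new_belief=None):
    """One evolution of an agent state (T, B): either accept a belief
    that is not currently refuted, or take an introspection step and
    discard every belief refuted by the new theory."""
    T, B = state
    if new_belief is not None:
        if not deduces(T, neg(new_belief)):   # new_belief not in Ref_d(T)
            B = B | {new_belief}
        return (T, B)
    T2 = intro_step(T)
    return (T2, {phi for phi in B if not deduces(T2, neg(phi))})

# Toy run: atoms are ints, deduction is membership in the theory.
deduces = lambda T, phi: phi in T
step = lambda T: T | {-2}        # introspection derives the fact -2
s = (frozenset({1}), set())
s = evolve(s, deduces, new_belief=2)    # belief 2 accepted (not yet refuted)
s = evolve(s, deduces, intro_step=step) # now T |-d -2, belief 2 removed
print(s)
```

The run shows the unsound-then-self-correcting behaviour: the belief 2 is first accepted because the weak deduction relation cannot refute it, then discarded as soon as introspection makes its negation deducible.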
Definition 7. (Fair trajectories) A trajectory
π = (T0, B0) ⇝ (T1, B1) ⇝ . . . ⇝ (Tn, Bn) ⇝ . . .
is fair if:
∀ϕ ∈ L, ∀n ∈ dom(π), +∞ > dist⊢i,⊢d(ϕ, Tn) > 0 ⇒ ∃m ∈ dom(π), m > n, dist⊢i,⊢d(ϕ, Tn) > dist⊢i,⊢d(ϕ, Tm)
Given the unusual definition of fairness, we prove that fair trajectories always
exist.
Lemma 1. Let (⊢i, ⊢d) be a reasoning connection and (T, B) be a (⊢i, ⊢d)-agent. Then there exists a fair trajectory starting from (T, B).
Proof. Let ϕ0, . . . , ϕn, . . . be an enumeration of all statements in L such that T |= ϕn. By property 1 (completeness) we know that ∀n ∈ N, dist⊢i,⊢d(ϕn, T) < +∞. Let ψ0, ψ1, . . . be any sequence in which each ϕn appears dist⊢i,⊢d(ϕn, T) times. Let T0, . . . , Tn, . . . be constructed inductively as follows:
– T0 = T;
– if Ti−1 is constructed, dist⊢i,⊢d(ϕj, Ti−1) > 0, and ψi = ϕj, then by property 3 (progress) choose Ti such that dist⊢i,⊢d(ϕj, Ti) < dist⊢i,⊢d(ϕj, Ti−1).
By property 2 (not-going-back) one easily proves that for all i the sequence (dist⊢i,⊢d(ϕi, Tj))j∈N is non-increasing, and thus if ϕi has been selected k times in the construction of Tj then dist⊢i,⊢d(ϕi, Tj) ≤ dist⊢i,⊢d(ϕi, T0) − k.
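The sequence ψ0, ψ1, . . . used in this proof can be built by round-robin interleaving when there are finitely many formulas. The sketch below (an illustrative finite restriction of the construction) produces an index sequence in which formula n occurs exactly dists[n] times:

```python
def fair_schedule(dists):
    """Enumerate formula indices so that formula n occurs exactly
    dists[n] times, interleaved round-robin, as in the construction of
    the sequence psi_0, psi_1, ... in the proof of Lemma 1 (restricted
    here to finitely many formulas)."""
    remaining = list(dists)
    out = []
    while any(remaining):
        for n, r in enumerate(remaining):
            if r > 0:
                out.append(n)
                remaining[n] -= 1
    return out

# Three formulas at distances 2, 1 and 3 from T0:
print(fair_schedule([2, 1, 3]))  # → [0, 1, 2, 0, 2, 2]
```

Each occurrence of an index triggers one progress step for that formula, so after the schedule is exhausted every listed formula has distance 0.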
The interest of fair trajectories lies in the fact that they eventually prove any
entailment.
Proposition 1. Let T ∈ Con and ϕ ∈ L be such that T |= ϕ, and let π be a fair trajectory starting with (T0 = T, B0). Then there exists N ≥ 0 such that for all n ≥ N we have Tn ⊢d ϕ.
Proof. Let π = ((Ti, Bi))i≥0 and un = dist⊢i,⊢d(ϕ, Tn). The sequence u0, u1, . . . contains only non-negative integers, and is non-increasing by the not-going-back property. Furthermore, since the trajectory is fair, for each n ∈ dom(π) such that dist⊢i,⊢d(ϕ, Tn) > 0 there exists m > n such that dist⊢i,⊢d(ϕ, Tn) > dist⊢i,⊢d(ϕ, Tm). Thus there is a strictly decreasing subsequence of (un) which must reach 0 at some index N. By the not-going-back property, for each n ≥ N we have dist⊢i,⊢d(ϕ, Tn) = 0, i.e. Tn ⊢d ϕ.
Note that since reception steps (labelled ?ϕ) have no impact on the first component of the couple, we have roundly ignored them. Prop. 1 can be applied to prove that any belief ϕ ∈ B inconsistent with T is eventually removed, and never reintroduced. The proof simply consists in applying Prop. 1 to ¬ϕ for all ϕ ∈ B inconsistent with T, and is thus omitted. It is however one of the main results of this paper, and is thus stated as a theorem.
Theorem 1. Let (T, B) be a (⊢i, ⊢d)-agent and let π = ((Tn, Bn))n≥0 be any fair trajectory starting from (T, B). Then there exists N ≥ 0 such that for all n ≥ N and all ϕ ∈ B such that T |= ¬ϕ, we have ϕ ∉ Bn.
4 Example of Reasoning Connection
We continue in this section the example of subterm local deductions in first-order
logic, and add to it ordered resolution. It is well known [2] that if a set of clauses is saturated for ordered resolution then linear ordered resolution is refutationally complete. The proof of this fact is based on the notion of redundancy employed in that article: a clause obtained by ordered resolution or factorisation is redundant if it is entailed by instances of clauses smaller than the premisses of the rule applied. Hence, intuitively, if a proof by ordered resolution and factorisation (denoted ⊢o) of T, ¬ϕ ⊢o ∅ contains an inference between two clauses of T, this inference can be replaced by, first, applying it on T to obtain T′, and, second, a case reasoning on the obtained clause: if it is redundant, then one can replace it in the original proof of T, ¬ϕ ⊢o ∅ by the proof of redundancy, and if it is not redundant, one can use the newly obtained clause to simplify the original proof.
Thus, in the original result, the completeness of linear ordered resolution for refutation is based on an ordering on proofs in which proofs are viewed as multisets of clauses. For every formula ϕ entailed by T, a saturation step can then be translated into a decrease in the proof of ϕ. This approach is however not practical for our purpose, as we would like to prove that the decrease happens for all formulas, and is strict for at least some. To prove this we need to consider the infinite collection of minimal proofs, one for each entailed clause, and the order on this object is no longer well-founded. Hence our previous introduction of a distance with the not-going-back and progress properties. We also weaken slightly the redundancy criterion by considering the multiset of atoms resolved or factored upon in the redundancy proof, instead of the set of instances of clauses. If a proof is minimal in the sense of [2] then the multiset of atoms resolved or factored upon is minimal. However the converse does not hold: if T = {a ∨ b, a ∨ c, b, c} and a > b > c then we have {a ∨ b, c} > {a ∨ c, b}, though were these two sets usable interchangeably in a redundancy proof, they would yield the same multiset {a, b, c} of atoms resolved upon.
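The multiset comparison used in this example is the standard Dershowitz–Manna multiset extension of the atom ordering. A small sketch, where the encoding of atoms as strings and the numeric weights are illustrative:

```python
from collections import Counter

def multiset_greater(ms1, ms2, greater):
    """Dershowitz-Manna multiset extension: ms1 > ms2 iff the two
    multisets differ and every element with a surplus in ms2 is
    dominated by some element with a surplus in ms1."""
    c1, c2 = Counter(ms1), Counter(ms2)
    extra1 = c1 - c2   # elements occurring more often in ms1
    extra2 = c2 - c1   # elements occurring more often in ms2
    if not extra1 and not extra2:
        return False   # equal multisets are incomparable under >
    return all(any(greater(x, y) for x in extra1) for y in extra2)

# Atom ordering a > b > c from the example in the text:
order = {"a": 3, "b": 2, "c": 1}
gt = lambda x, y: order[x] > order[y]

# {a, b} > {a, c} because the surplus b dominates the surplus c ...
print(multiset_greater(list("ab"), list("ac"), gt))  # → True
# ... yet both redundancy proofs resolve the same atoms {a, b, c}:
print(Counter("ab" + "c") == Counter("ac" + "b"))    # → True
```

This illustrates the point of the example: the clause-level comparison distinguishes the two candidate sets, while the weakened criterion based on atoms resolved upon does not.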
In Sect. 4.1 we make this discussion more precise by introducing the orderings on terms, atoms, etc. In Sect. 4.2 we introduce the ordered resolution, factorisation, and redundancy rules we consider, and construct a (⊢i, ⊢d) reasoning connection based on these rules for introspection and on linear ordered resolution for deduction. We prove that this is indeed a reasoning connection.
4.1 Orderings on Terms and Clauses
Let