<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Workshop on Nonmonotonic Reasoning, November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Kinematics Principles for Inductive Reasoning from Conditional Belief Bases</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexander Hahn</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gabriele Kern-Isberner</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lars-Phillip Spiegel</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christoph Beierle</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>FernUniversität in Hagen</institution>
          ,
          <addr-line>Universitätsstraße 11, 58097 Hagen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Technische Universität Dortmund (TU Dortmund University)</institution>
          ,
          <addr-line>August-Schmidt-Straße 1, 44227 Dortmund</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>1</volume>
      <fpage>1</fpage>
      <lpage>13</lpage>
      <abstract>
        <p>The kinematics principle, originating from probability theory, captures the idea that conditional beliefs should be independent from changes in the plausibility of facts. Furthermore, conditional information with respect to exclusive cases should be relevant only to the respective case, but not influence others. This principle was recently adapted to belief revision of ranking functions and total preorders. In this paper, we propose a kinematics principle for non-monotonic inference relations induced from conditional belief bases. We derive this principle from the connection between inductive reasoning and belief revision of both total preorders and ranking functions. Moreover, we evaluate several inference operators from the literature with respect to this new kinematics principle for inductive reasoning.</p>
      </abstract>
      <kwd-group>
        <kwd>inductive reasoning</kwd>
        <kwd>kinematics principle</kwd>
        <kwd>belief change</kwd>
        <kwd>conditionals</kwd>
        <kwd>total preorders</kwd>
        <kwd>ranking functions</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Intelligent agents often reason with incomplete background knowledge while assuming that the available
information is sufficient to draw reasonable inferences. For example, when communicating scientific
results, we often provide examples with some limited context Δ (e.g. about penguins) and assume
that unrelated information (e.g. about sparrows) can be left out without distorting the picture. More
formally, if all information in Δ is conditional information based on a common premise A, we would
expect that additional information Δ′ about the case of ¬A does not influence the inferences for the
case of A. So our inferences about the case of A should be the same whether we provide Δ or Δ ∪ Δ′
as background knowledge. Moreover, it should not matter whether A actually holds or not.</p>
      <p>
        These ideas are very close to the kinematics principle which has been studied for belief revision. This
principle originates from probability theory, where it captures the idea that changes in the probability
of facts should not impact the conditional beliefs given those facts [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. An extension of this core idea
called Subset Independence has been formulated for probabilistic belief change [2], essentially stating
that the conditional beliefs should not only be independent from changes in the probability of facts, but
also independent from changes in other conditional beliefs, as long as the premises refer to exclusive
cases. This property has recently been adapted as Generalized Ranking Kinematics for the revision of
ranking functions [3], and as Qualitative Kinematics for the revision of total preorders [4].
      </p>
      <p>In this paper, we are going to derive a kinematics principle for inductive reasoning from the
abovementioned principles for belief revision. We are going to study its relationship to similar postulates from
the literature, and also evaluate well-known inference relations with respect to this adapted principle.
In summary, the main contributions of this paper are:
• We provide a more detailed account of inductive inference operators by emphasizing the
underlying two-step process of inducing an epistemic state first, and an inference relation afterwards.
• We extend the notion of conditionalization to inference relations and inductive inference operators.
• We propose a kinematics principle for inductive reasoning called (IRK), and evaluate several
approaches from the literature with respect to (IRK) in order to highlight its relevance.
• As a byproduct of our research, we show that for every ranking function κ, there exists a
conditional belief base Δ such that κ is a c-representation of Δ, which implies that all ranking
functions (and thus all total preorders) can be expressed via revisions of uniform epistemic states.</p>
      <p>The remainder of this paper is structured as follows. In Section 2, we briefly discuss related work. In
Section 3, we provide formal preliminaries and recall basic definitions which are relevant for this paper.
In Section 4, we recall the kinematics principles for the revision of ranking functions and total preorders.
In Section 5, we recall inductive inference operators and investigate their connection to belief revision.
In Section 6, we lift the concept of conditionalization to both inference relations and inductive inference
operators. In Section 7, we present the main result of this paper, which is the kinematics principle for
inductive reasoning. Afterwards, we evaluate several inference operators with respect to this principle
in Section 8. In Section 9 we discuss the relationship between kinematics and syntax splitting. We end
this paper with conclusions and some pointers to future work in Section 10.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Our work builds upon recent work done by Kern-Isberner, Sezgin, and Beierle [3, 4] which laid the
foundation for the kinematics principle we propose for inductive reasoning. A different adaptation of
subset independence (which our kinematics principle is based on) in a semi-quantitative reasoning
framework can be found in [5].</p>
      <p>In [6], the relationship between several kinds of belief change and inductive reasoning has been
investigated, which is highly relevant for this paper since it enables us to transfer techniques used in
belief revision to non-monotonic reasoning.</p>
      <p>Similarly, in [7] the concept of syntax splitting was carried over from belief revision to non-monotonic
reasoning. Similar to kinematics, syntax splitting postulates that non-relevant information should not
influence inference results. However, as the name implies, syntax splitting is more syntactical in nature
(although there are obvious implications for the underlying semantics), since relevance in the case of
syntax splitting refers to the need to use common symbols in the logical language to express beliefs,
and is expressed with marginalization. Kinematics, on the other hand, focuses on reasoning about
semantically exclusive cases, which corresponds to conditionalization.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Formal Preliminaries</title>
      <p>Let ℒ be a finitely generated propositional language over an alphabet Σ = {a, b, c, . . .}. Formulas
A, B, C, . . . are formed using the standard connectives ∧, ∨, ¬. For conciseness of notation, we will
write AB instead of A ∧ B for conjunctions, and overlining formulas will indicate negation, i.e. A̅
means ¬A. The symbol ⊤ denotes an arbitrary propositional tautology. The set of all possible worlds
(propositional interpretations) over Σ is denoted by Ω, and ω |= A means that the propositional formula
A ∈ ℒ holds in the possible world ω ∈ Ω; then ω is called a model of A, and the set of all models of A
is denoted by Mod(A). Similarly, for sets of propositions 𝒜 ⊆ ℒ, Mod(𝒜) denotes the set of possible
worlds that satisfy all elements of 𝒜. For propositions A, B ∈ ℒ, A |= B holds if Mod(A) ⊆ Mod(B),
as usual. Analogously, for sets of propositions 𝒜, ℬ ⊆ ℒ, 𝒜 |= ℬ holds if Mod(𝒜) ⊆ Mod(ℬ). Logical
equivalence between formulas is denoted by ≡. By slight abuse of notation, we will use ω both for the
model and the corresponding conjunction of all positive or negated atoms. This will allow us to ease
notation a lot. Since ω |= A means the same for both readings of ω, no confusion will arise.</p>
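      <p>To make these preliminaries concrete, here is a small Python sketch of ours (not from the paper; the three-atom alphabet is an illustrative assumption) that encodes possible worlds over Σ = {p, b, f} as dictionaries and formulas as Boolean predicates, so that Mod and |= become executable:</p>

```python
from itertools import product

# Illustrative alphabet: p (penguin), b (bird), f (flies).
SIGMA = ("p", "b", "f")

# Omega: all possible worlds, i.e. all truth-value assignments over SIGMA.
OMEGA = [dict(zip(SIGMA, vals)) for vals in product((True, False), repeat=len(SIGMA))]

def mod(a):
    """Mod(A): the models of a formula A, given as a predicate on worlds."""
    return [w for w in OMEGA if a(w)]

def entails(a, b):
    """A |= B  iff  Mod(A) is a subset of Mod(B)."""
    return all(b(w) for w in mod(a))

p = lambda w: w["p"]
b = lambda w: w["b"]

print(len(OMEGA))                           # 8 worlds over 3 atoms
print(entails(lambda w: p(w) and b(w), b))  # pb |= b: True
print(entails(b, p))                        # b |= p: False
```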
      <p>We also consider conditionals (B|A) ∈ (ℒ|ℒ) which express statements like “If A then plausibly
B”. The formula A is called the antecedent, and B is called the consequent of the conditional (B|A).
A conditional belief base Δ is a finite set of conditionals. For every possible world ω, let verΔ(ω) =
{(B|A) ∈ Δ | ω |= AB} and falΔ(ω) = {(B|A) ∈ Δ | ω |= A¬B} be the sets of conditionals from
Δ which are verified resp. falsified by ω. Semantics for conditionals and conditional belief bases are
provided by epistemic states Ψ via an acceptance relation |=. For formulas A, we have Ψ |= A if
Ψ |= (A|⊤), meaning that A is accepted in Ψ. This allows us to subsume plausible propositional
formulas in terms of conditionals, which supports a more coherent view on reasoning and revision.</p>
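      <p>Under the same illustrative encoding of worlds as dictionaries and formulas as predicates (our assumption, not the paper's notation), verification and falsification of conditionals can be sketched as:</p>

```python
from itertools import product

SIGMA = ("p", "b", "f")  # illustrative atoms: penguin, bird, flies
OMEGA = [dict(zip(SIGMA, vals)) for vals in product((True, False), repeat=3)]

def ver(delta, w):
    """ver_Delta(w): conditionals (B|A) in Delta with w |= AB (verified by w)."""
    return [(a, b) for (a, b) in delta if a(w) and b(w)]

def fal(delta, w):
    """fal_Delta(w): conditionals (B|A) in Delta with w |= A but not w |= B (falsified)."""
    return [(a, b) for (a, b) in delta if a(w) and not b(w)]

# Delta = {(f|b), (-f|p)}: birds fly, penguins do not fly.
delta = [(lambda w: w["b"], lambda w: w["f"]),
         (lambda w: w["p"], lambda w: not w["f"])]

w1 = {"p": False, "b": True, "f": True}          # a flying non-penguin bird
print(len(ver(delta, w1)), len(fal(delta, w1)))  # verifies (f|b), falsifies nothing

w2 = {"p": True, "b": True, "f": True}           # a flying penguin
print(len(ver(delta, w2)), len(fal(delta, w2)))  # verifies (f|b), falsifies (-f|p)
```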
      <p>In this paper, we consider two types of (representations for) epistemic states: total preorders and
ranking functions over possible worlds. Total preorders (TPOs) ⪯ ⊆ Ω × Ω are total and transitive
relations. As usual, ω1 ≺ ω2 if ω1 ⪯ ω2, but not ω2 ⪯ ω1, and ω1 ≈ ω2 if both ω1 ⪯ ω2 and ω2 ⪯ ω1.
Total preorders represent plausibility orderings, with the most plausible worlds being located in the
lowermost layer of ⪯ which we denote by min(Ω, ⪯). More generally, if Ω′ ⊆ Ω is a subset of possible
worlds, min(Ω′, ⪯) denotes the set of minimal worlds in Ω′ according to ⪯. The preorder ⪯ is lifted to
a relation between propositions1 in the usual way: A ⪯ B if there is ω |= A such that ω ⪯ ω′ for all
ω′ |= B. A conditional (B|A) is accepted by ⪯, denoted by ⪯ |= (B|A), if AB ≺ A¬B.</p>
      <p>Ordinal Conditional Functions (OCFs, also called ranking functions) κ : Ω → ℕ ∪ {∞} with κ⁻¹(0) ≠ ∅
[8] assign degrees of implausibility, or surprise, to possible worlds. The degree of (im)plausibility of
a formula A is defined by κ(A) := min{κ(ω) | ω |= A}. Hence, due to κ⁻¹(0) ≠ ∅, at least one of
κ(A), κ(¬A) must be 0. A proposition A is accepted by κ, denoted by κ |= A, if ω |= A for all ω such
that κ(ω) = 0; this is equivalent to saying that κ(¬A) &gt; 0. This notion can be extended in a natural way
to assign ranks to sets of formulas 𝒜 ⊆ ℒ via κ(𝒜) = min{κ(ω) | ω |= 𝒜}. Conditionals are accepted
by κ, written as κ |= (B|A), if κ(AB) &lt; κ(A¬B). Note that these definitions are in full compliance
with corresponding definitions for total preorders.</p>
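      <p>The OCF definitions above can be sketched directly in Python. The concrete ranking below is our own hand-picked example of a typical “penguin” ranking, not taken from the paper:</p>

```python
# Worlds as (p, b, f) triples; KAPPA assigns each world a rank (degree of surprise).
KAPPA = {
    (True, True, True): 2,   (True, True, False): 1,
    (True, False, True): 4,  (True, False, False): 2,
    (False, True, True): 0,  (False, True, False): 1,
    (False, False, True): 0, (False, False, False): 0,
}

def rank(a):
    """kappa(A) = min{kappa(w) | w |= A}; A is a predicate with Mod(A) nonempty."""
    return min(r for w, r in KAPPA.items() if a(w))

def accepts(a, b):
    """kappa |= (B|A)  iff  kappa(AB) < kappa(A -B)."""
    return rank(lambda w: a(w) and b(w)) < rank(lambda w: a(w) and not b(w))

p, b, f = (lambda w: w[0]), (lambda w: w[1]), (lambda w: w[2])
print(min(KAPPA.values()) == 0)        # kappa^-1(0) is nonempty: True
print(accepts(b, f))                   # birds fly: True
print(accepts(p, f))                   # penguins fly: False
print(accepts(p, lambda w: not f(w)))  # penguins do not fly: True
```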
      <p>An epistemic state Ψ is called TPO-representable if its (qualitative) conditional beliefs can be modeled
via a total preorder, i.e. there exists a total preorder ⪯ such that Ψ |= (B|A) if ⪯ |= (B|A) for all
(B|A) ∈ (ℒ|ℒ). The corresponding total preorder is then denoted as ⪯Ψ. Clearly, OCFs and TPOs
themselves are TPO-representable. A uniform epistemic state Ψu accepts only trivial conditionals, i.e.
Ψu |= (B|A) if A |= B. For uniform TPOs ⪯u this means ω ≈ ω′ for all ω, ω′; and for uniform OCFs
κu we have κu(ω) = 0 for all ω.</p>
      <p>For the rest of this paper, we assume that all formulas A are consistent (i.e. Mod(A) ≠ ∅), all
conditionals (B|A) are contingent (i.e. Mod(AB) ≠ ∅ and Mod(A¬B) ≠ ∅), and all conditional belief
bases Δ are (strongly) consistent (i.e. there exists a TPO ⪯ with ⪯ |= Δ) [9]. This allows us to present
our approach in a straightforward way without worrying about technical intricacies of these limit cases.</p>
      <p>For an OCF κ and A ∈ ℒ, the conditionalization of κ by A is an OCF κ|A : Mod(A) → ℕ such that
κ|A(ω) = κ(ω) − κ(A) for all ω ∈ Mod(A) [10]. Qualitative conditionalization of TPOs was first introduced
in [7] and later refined in [4], improving the compatibility with conditionalization of OCFs.
Definition 1 (Conditionalized TPO). Let ⪯ be a total preorder over Ω and let A ∈ ℒ. The
conditionalization of ⪯ by A is defined as a total preorder ⪯|A over Mod(A) such that
ω1 ⪯|A ω2 if ω1 ⪯ ω2</p>
      <p>for all ω1, ω2 ∈ Mod(A).</p>
      <p>The acceptance relation of ⪯|A is only defined for formulas B with B |= A, since ⪯|A |= (C|B) if there
is a world ω ∈ Mod(BC) ∩ Mod(A) (since ⪯|A is defined over Mod(A)) such that ω ≺|A ω′ for all
ω′ ∈ Mod(B¬C) ∩ Mod(A). Note that this is a restriction of B, but not an additional restriction of C,
since we already assume that (C|B) is contingent, and B |= A already implies BC |= A. Hence both
the verification and the falsification of (C|B) must imply A.</p>
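      <p>Conditionalization of an OCF is easy to make executable. The sketch below (our illustration, reusing a hand-picked “penguin” ranking) computes κ|A over Mod(A) and checks that the result is again normalized, i.e. has a world of rank 0:</p>

```python
# Worlds as (p, b, f) triples with an example ranking (illustrative values).
KAPPA = {
    (True, True, True): 2,   (True, True, False): 1,
    (True, False, True): 4,  (True, False, False): 2,
    (False, True, True): 0,  (False, True, False): 1,
    (False, False, True): 0, (False, False, False): 0,
}

def rank(a):
    return min(r for w, r in KAPPA.items() if a(w))

def conditionalize(a):
    """kappa|A: defined on Mod(A) only, with kappa|A(w) = kappa(w) - kappa(A)."""
    base = rank(a)
    return {w: r - base for w, r in KAPPA.items() if a(w)}

A = lambda w: w[0]                # focus on the case "penguin"
kappa_A = conditionalize(A)

print(sorted(kappa_A.values()))   # [0, 1, 1, 3]: normalized, minimum rank is 0
```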
    </sec>
    <sec id="sec-4">
      <title>4. Kinematics Principle for Revision</title>
      <p>
        The kinematics principle for belief revision originates from probability theory. The assumption of
probability kinematics [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] states that conditional probabilities  (·| ) given some fact  should be
preserved when the probability  () of the fact itself changes.
1Note that this lifted relation over propositions is not necessarily a total preorder. Hence, when referring to TPOs, we always
mean the underlying order over possible worlds.
      </p>
      <p>This principle was adapted as Generalized Ranking Kinematics (GRK) to ranking theory by [3]. The
definition of (GRK) relies on so-called case splittings, which are defined below.</p>
      <p>Definition 2 (Case Splitting). Let Δ be a conditional belief base and let A1, . . . , An ∈ ℒ be exclusive
and exhaustive formulas. Then A1, . . . , An are called a case splitting (or premise splitting) of Δ if there
are subsets Δ1, . . . , Δn ⊆ Δ such that Δ = Δ1 ∪ · · · ∪ Δn and for every 1 ≤ i ≤ n, the antecedents
of the conditionals in Δi imply Ai.</p>
      <p>Note that exclusiveness is the stronger requirement here, since exhaustiveness can always be achieved
by adding an additional case. For instance, if the premises of Δ1, . . . , Δn ⊆ Δ respectively imply
non-exhaustive (but exclusive) cases A1, . . . , An, we can add the remaining case ¬(A1 ∨ · · · ∨ An)
representing the empty set of conditionals ∅ ⊆ Δ.2</p>
      <p>Now the postulate of Generalized Ranking Kinematics for OCF-revision operators * reads as follows.
(GRK) Let Δ = Δ1 ∪ · · · ∪ Δn be a set of conditionals with a case splitting A1, . . . , An. Let D = ⋁_{A∈J} A
with ∅ ≠ J ⊆ {A1, . . . , An}. Then for all OCFs κ and all cases Ai the following holds:
(κ * (Δ ∪ {D}))|Ai = (κ|Ai) * Δi .</p>
      <p>The (GRK) principle combines two notions of relevance, which we illustrate by considering two
special cases which yield crucial properties for revision operators:
(CaseRel) (κ * Δ)|Ai = (κ|Ai) * Δi . (1)
(CaseIrr) (κ * {D})|Ai = κ|Ai . (2)</p>
      <p>The first property, case relevance3, captures the idea that when focusing on the case Ai, only the
conditionals talking about this case should be relevant. Therefore, when conditionalizing the revision
result κ * Δ by Ai, we should obtain the same result as if we conditionalized first and then performed
the revision with only the relevant information Δi locally. Note that the exclusiveness of the cases
plays a crucial role here. The second postulate, case irrelevance, concerns itself with facts: When talking
about what would happen in the case of Ai, it should not matter how plausible Ai actually is. Therefore,
learning about the plausibility of any of the cases should not change the conditionalized revision result.</p>
      <p>The following short proposition summarizes these implications of (GRK). In order to connect (GRK)
to (CaseRel), we need the following basic postulate from [3]:
(TI*) Ψ * (Δ ∪ {⊤}) = Ψ * Δ</p>
      <sec id="sec-4-1">
        <title>Proposition 1. If (TI*) holds, then (GRK) implies both (CaseRel) and (CaseIrr).</title>
        <p>Proof. (CaseRel) follows from (GRK) via D ≡ ⊤ and (TI*). (CaseIrr) follows from (GRK) via
Δ = ∅.</p>
        <p>Note that the proposition above only claims an implication, not equivalence, since (CaseRel) and
(CaseIrr) together do not restrict how * handles a revision where conditional and propositional information
is provided at the same time.</p>
        <p>The kinematics principle for the revision of ranking functions (GRK) was adapted as a qualitative
kinematics principle by [4] for the revision of epistemic states represented by total preorders.
(QK) Let Δ = Δ1 ∪ · · · ∪ Δn be a set of conditionals with a case splitting A1, . . . , An. Let D = ⋁_{A∈J} A
with ∅ ≠ J ⊆ {A1, . . . , An}. Then for all total preorders ⪯ and all cases Ai the following holds:
(⪯ * (Δ ∪ {D}))|Ai = (⪯|Ai) * Δi .</p>
        <p>2This also means that one can always achieve a trivial case splitting of any conditional belief base: one case is the disjunction
of all premises appearing in Δ, and the other case is the negation of this disjunction.
3This property is called (GRKweak) in [3].</p>
        <p>We end this section with the following lemma, which formalizes an important property of conditional
belief bases with case splittings: It is impossible for one possible world to verify (or falsify) conditionals
from different subsets because of the exclusiveness of the premises.</p>
        <p>Lemma 2. Let Δ = Δ1 ∪ · · · ∪ Δn be a set of conditionals, and let A1, . . . , An be a case splitting of Δ
such that for all 1 ≤ i ≤ n, the premises in Δi imply Ai. Then for every possible world ω ∈ Ω, there exists
an i, 1 ≤ i ≤ n, such that
(verΔ(ω) ∪ falΔ(ω)) ⊆ Δi .
(3)
Proof. Let ω ∈ Ω. If (verΔ(ω) ∪ falΔ(ω)) = ∅ then Equation (3) holds for all Δi. Otherwise, there
exists a conditional (B|A) ∈ Δ such that ω |= A. Since the cases A1, . . . , An are exhaustive, there
must be a set Δi such that (B|A) ∈ Δi and A |= Ai. Consequently, ω |= Ai as well. Because of the
exclusiveness of the premises, we have ω |= ¬Aj for all j ≠ i. Hence ω cannot verify or falsify any
conditionals in Δ ∖ Δi. Therefore, Equation (3) holds.</p>
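        <p>Lemma 2 can also be checked mechanically on a small example. The belief base and case splitting below (cases A1 = p and A2 = ¬p) are our own illustration, not taken from the paper:</p>

```python
from itertools import product

OMEGA = list(product((True, False), repeat=3))  # worlds as (p, b, f)

# Delta1 (case A1 = p): penguins do not fly.
# Delta2 (case A2 = -p): non-penguin birds fly.
delta1 = [(lambda w: w[0], lambda w: not w[2])]
delta2 = [(lambda w: w[1] and not w[0], lambda w: w[2])]

def touched(part, w):
    """Conditionals of a part verified or falsified by w, i.e. with w |= antecedent."""
    return [c for c in part if c[0](w)]

# Exclusive premises: no world verifies/falsifies conditionals from both parts.
for w in OMEGA:
    assert not (touched(delta1, w) and touched(delta2, w))
print("Lemma 2 holds on this example")
```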
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Inductive Inference Operators</title>
      <p>Many non-monotonic inference relations from the literature are inductive in the sense that they depend
on explicitly given background beliefs, which we will represent as a set of conditionals Δ. In [7]
the following axioms expressing Direct Inference and Trivial Vacuity are given for inductive inference
relations:
(DI) (B|A) ∈ Δ implies A |∼Δ B.
(TV) If Δ = ∅, then A |∼Δ B only if A |= B.</p>
      <p>Definition 3 (Inductive Inference Operators C). An inductive inference operator from conditional belief
bases on ℒ is a mapping C that assigns to each conditional belief base Δ ⊆ (ℒ|ℒ) an inference relation
|∼Δ on ℒ such that (DI) and (TV) are satisfied: C : Δ ↦ |∼Δ.</p>
      <p>When defining an inductive inference operator, we often assume that the operator first constructs an
epistemic state from the belief base ∆ , and then yields the inference relation induced by the epistemic
state. For example, in [7], classes of model-based inductive inference operators are defined which map
conditional belief bases to TPOs or OCFs.</p>
      <p>In order to make this two-step process more explicit, we define two inductive operators in this section:
inductive epistemic operators, which map conditional belief bases to epistemic states, and epistemic
inference operators, which map epistemic states to inference relations. This distinction adds a formal
structure to what is often done or assumed implicitly, and will enable us to investigate both steps
independently.</p>
      <sec id="sec-5-1">
        <title>5.1. Epistemic Inference Operators</title>
        <p>We start with the definition of epistemic inference operators, which essentially link inference to
acceptance of conditionals.</p>
        <p>Definition 4 (Epistemic Inference Operator I). An epistemic inference operator over ℒ and a class
of epistemic states S is a mapping from epistemic states Ψ ∈ S to inference relations |∼Ψ ⊆ ℒ × ℒ,
I : Ψ ↦ |∼Ψ, such that A |∼Ψ B if Ψ |= (B|A).</p>
        <p>We choose the symbol I to denote epistemic inference operators since I(Ψ) can be read as “the
inference relation induced by Ψ ”. There are two specific epistemic inference operators which we will
make use of in this paper.</p>
        <p>• Itpo : TPO → 2ℒ×ℒ maps ⪯ ↦ |∼⪯ such that A |∼⪯ B if AB ≺ A¬B.
• Iocf : OCF → 2ℒ×ℒ maps κ ↦ |∼κ such that A |∼κ B if κ(AB) &lt; κ(A¬B).</p>
        <p>Note that we have Iocf(κ) = Itpo(⪯κ).</p>
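        <p>Viewing Iocf as a higher-order function makes the operator reading explicit: it takes an OCF (encoded here, by our own assumption, as a dict from worlds to ranks) and returns the induced inference relation as a predicate:</p>

```python
def I_ocf(kappa):
    """Epistemic inference operator: maps an OCF to the relation
    A |~ B  iff  kappa(AB) < kappa(A -B)."""
    def rank(a):
        return min(r for w, r in kappa.items() if a(w))
    def infers(a, b):
        return rank(lambda w: a(w) and b(w)) < rank(lambda w: a(w) and not b(w))
    return infers

# Example OCF over worlds (p, b, f) (illustrative "penguin" ranking).
KAPPA = {
    (True, True, True): 2,   (True, True, False): 1,
    (True, False, True): 4,  (True, False, False): 2,
    (False, True, True): 0,  (False, True, False): 1,
    (False, False, True): 0, (False, False, False): 0,
}

infer = I_ocf(KAPPA)
print(infer(lambda w: w[1], lambda w: w[2]))   # b |~ f: True
print(infer(lambda w: w[0], lambda w: w[2]))   # p |~ f: False
```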
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Inductive Epistemic Operators</title>
        <p>Next we define inductive epistemic operators mapping conditional knowledge bases to epistemic states.
Definition 5 (Inductive Epistemic Operator E). An inductive epistemic operator over ℒ is a mapping
from conditional belief bases Δ ⊆ (ℒ|ℒ) to epistemic states ΨΔ ∈ S, E : Δ ↦ ΨΔ, such that the
following properties are satisfied:
(DIΨ) E(Δ) |= Δ .
(TVΨ) E(∅) = Ψu .</p>
        <p>The term E(Δ) can be read as “the epistemic state induced by Δ”, hence the choice of the symbol E
to denote inductive epistemic operators.</p>
        <p>Recently, a more general form of such mappings was investigated from a philosophical perspective
in [6]. The authors of [6] claim that inductive reasoning, i.e., the completion of some Δ ⊆ (ℒ|ℒ) to a
full-fledged epistemic state ΨΔ, can be considered a special case of belief revision. They introduced an
operator ind, mapping Δ to the result of a belief revision process: indΨbk(Δ) = Ψbk * Δ, where Ψbk
represents a prior epistemic state with background knowledge, and * is some suitable revision operator.
In the most simple scenario, assuming no (relevant) background beliefs, we have Ψbk = Ψu. In this
paper, we restrict ourselves to this base case of inductive reasoning, i.e., we do not consider inductive
inference with separate background knowledge states. Accordingly, we define the following inductive
epistemic operator:</p>
        <p>E*(Δ) = Ψu * Δ ,
where Ψu ∈ {κu, ⪯u} and * is a suitable revision operator such that (DIΨ) and (TVΨ) are satisfied.</p>
        <p>By composing inductive epistemic operators and epistemic inference operators, we obtain inductive
inference operators.</p>
        <p>Proposition 3. Let E : 2(ℒ|ℒ) → S be an inductive epistemic operator and let I : S → 2ℒ×ℒ be an
epistemic inference operator. Then C = I ∘ E is an inductive inference operator.
Proof. Let Δ ⊆ (ℒ|ℒ). It is clear from the definition of E and I that C(Δ) is well-defined and yields
an inference relation |∼Δ. We need to show that this inference relation satisfies (DI) and (TV).</p>
        <p>For (DI), let (B|A) ∈ Δ. Because of (DIΨ), E(Δ) |= (B|A). Then it follows immediately from the
definition of I that A |∼Δ B since |∼Δ = |∼E(Δ).</p>
        <p>For (TV), we have C(∅) = |∼Ψu because of (TVΨ), and Ψu |= (B|A) if A |= B by definition.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Inductive Inference via Belief Revision</title>
        <p>With Proposition 3, we have now established that belief revision operators can be used to construct
inductive inference operators. Next, we are going to show that all model-based inductive inference
operators Cocf and Ctpo as presented in [7] can be expressed in this way, i.e., it is possible to obtain
any desired OCF or TPO from the uniform epistemic state via belief revision. Our proof for this uses
Kern-Isberner’s c-Revisions [11].</p>
        <p>Definition 6 (c-Revision). Let κ be an OCF and Δ = {(B1|A1), . . . , (Bn|An)} a set of conditionals.
Then a c-revision of κ by Δ is an OCF κ* = κ *c Δ of the form
κ*(ω) = κ0 + κ(ω) + ∑_{1 ≤ i ≤ n, ω |= Ai¬Bi} ηi
(4)
with impact factors ηi ≥ 0 for each (Bi|Ai), satisfying
ηi &gt; min_{ω |= AiBi} { κ(ω) + ∑_{j ≠ i, ω |= Aj¬Bj} ηj } − min_{ω |= Ai¬Bi} { κ(ω) + ∑_{j ≠ i, ω |= Aj¬Bj} ηj }
(5)
and a normalization factor κ0 ∈ ℕ to ensure (κ*)⁻¹(0) ≠ ∅, guaranteeing that κ* is again an OCF.
(6)</p>
        <p>Observe that having κu as the prior OCF in Equation (5) simplifies the equation a lot, since κu(ω) = 0
for all ω, i.e., the ranks in κ* only depend on the interactions between the conditionals in Δ (together
with their associated impact factors) and not on some prior epistemic state. Since we only consider
consistent belief bases, also no normalization via κ0 is needed. Such OCFs, which can be represented as
a sum of impact factors, are called c-representations (of the respective conditional belief base Δ).</p>
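        <p>The following sketch builds a c-representation for the concrete base Δ = {(f|b), (¬f|p), (b|p)}; the impact factors η = (1, 2, 2) are chosen by hand (our assumption, not values from the paper), and the code verifies that the resulting κ accepts every conditional of Δ:</p>

```python
from itertools import product

OMEGA = list(product((True, False), repeat=3))  # worlds as (p, b, f)

# Delta: birds fly, penguins do not fly, penguins are birds.
DELTA = [
    (lambda w: w[1], lambda w: w[2]),       # (f|b)
    (lambda w: w[0], lambda w: not w[2]),   # (-f|p)
    (lambda w: w[0], lambda w: w[1]),       # (b|p)
]
ETA = [1, 2, 2]  # hand-picked impact factors

def kappa(w):
    """c-representation: sum the impacts of all conditionals falsified by w
    (uniform prior kappa_u = 0, no normalization needed)."""
    return sum(e for (a, b), e in zip(DELTA, ETA) if a(w) and not b(w))

def rank(a):
    return min(kappa(w) for w in OMEGA if a(w))

assert min(kappa(w) for w in OMEGA) == 0   # kappa is an OCF
for a, b in DELTA:                         # kappa |= (Bi|Ai) for every i
    assert rank(lambda w: a(w) and b(w)) < rank(lambda w: a(w) and not b(w))
print("kappa is a c-representation accepting Delta")
```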
        <p>Using c-revisions, we can show the following proposition. The quite technical proof has been omitted
due to space constraints.</p>
        <p>Proposition 4. Let κ be an OCF. Then there are a conditional belief base Δ and a c-revision operator *c
such that κ = κu *c Δ.</p>
        <p>Note that Proposition 4 has far-reaching consequences. It essentially says that every OCF κ is a
c-representation (of some conditional belief base Δ), i.e., whenever we want to prove a property for all
OCFs, it suffices to show that it holds for c-representations (as long as we may choose Δ freely).</p>
        <p>Moreover, the following proposition shows that all model-based inductive inference operators Cocf
and Ctpo as presented in [7] can be expressed using belief revision as the induction mechanism.
Proposition 5. Every model-based inductive inference operator Cocf can be represented as Cocf = Iocf ∘ E*
for some revision-based inductive epistemic operator E*.</p>
        <p>Proof. Let Cocf be an inductive inference operator such that for every Δ, we have Cocf(Δ) = |∼κ for
some OCF κ. Then we can define E* such that E*(Δ) = κu * Δ with a revision operator * such that
κu * Δ = κu *c Δ′ = κ for some suitable *c and Δ′. The existence of *c follows from Proposition 4.</p>
        <p>Using qualitative c-revisions from [4], it is straightforward to prove analogous results to
Propositions 4 and 5 for total preorders, i.e. every TPO (and hence every Ctpo) can be constructed via belief
revision. In that sense, the proposition above supports the claim of [6] that inductive reasoning can be
understood as a special case of belief revision for TPOs and OCFs.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conditionalization of Inference and Induction</title>
      <p>Conditionalization is an important operation on epistemic states, enabling an agent to (temporarily)
focus on some specific case A, resulting in an epistemic state where ¬A is considered as impossible.
This is crucial for efficient reasoning as well as for reasoning about hypothetical scenarios.</p>
      <p>In this section we are going to lift the concept of conditionalization to both inference relations and
inductive inference operators, with the goal of capturing an agent’s inference behavior under the
assumption that a specific proposition A holds.</p>
      <sec id="sec-6-1">
        <title>6.1. Conditionalized Inference Relations</title>
        <p>When conditionalizing a ranking function (or a total preorder) by a formula A, the result is a ranking
function (resp. total preorder) over the models of A, while all models of ¬A are excluded. Since
inference relations are defined over formulas, we need to restrict the language accordingly. Therefore,
we introduce the scope of a formula A, which is the set of all formulas that imply A.
Definition 7 (Scope). The scope of a formula A ∈ ℒ is defined as Sc(A) := {B ∈ ℒ | B |= A}.</p>
        <p>The conditionalization of an inference relation by a formula A should focus on the inferences that
can be drawn in the case of A being true, considering all other cases as impossible.
Definition 8 (Conditionalized Inference Relation |∼A). The conditionalization of an inference relation
|∼ ⊆ ℒ × ℒ by A ∈ ℒ is an inference relation |∼A ⊆ Sc(A) × Sc(A) such that for all B, C ∈ Sc(A):
B |∼A C if B |∼ C.</p>
        <p>In other words, |∼A = |∼ ∩ (Sc(A) × Sc(A)). The idea behind defining |∼A over the scope of A is
that for evaluating inferences based on models, only models of A should be relevant.</p>
        <p>The following proposition shows that conditionalization of inference relations induced by total
preorders is compatible with conditionalization of the total preorders themselves.</p>
        <p>Proposition 6. If |∼⪯ is an inference relation induced by a total preorder ⪯ and A ∈ ℒ, then for all
B, C ∈ Sc(A), it holds that B (|∼⪯)|A C if B |∼(⪯|A) C.
Proof. Let A ∈ ℒ and B, C ∈ Sc(A). We have B (|∼⪯)|A C if B |∼⪯ C. This is equivalent to
BC ≺ B¬C, which holds if there exists ω ∈ Mod(BC) such that ω ≺ ω′ for all ω′ ∈ Mod(B¬C). Since
Mod(B) ⊆ Mod(A), BC ≺ B¬C holds if BC ≺|A B¬C. This is equivalent to B |∼(⪯|A) C.
Corollary 7. If |∼κ is an inference relation induced by an OCF κ, and A ∈ ℒ, then for all B, C ∈ Sc(A),
it holds that B (|∼κ)|A C if B |∼(κ|A) C.</p>
        <p>Proof. Observe that |∼κ = |∼⪯κ. The corollary then follows immediately from Proposition 6.</p>
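        <p>Corollary 7 can be illustrated numerically: conditionalizing the inference relation induced by κ agrees with the inference relation induced by κ|A. The ranking and the formulas B, C below are our own illustrative choices:</p>

```python
# Worlds as (p, b, f) triples with an example ranking.
KAPPA = {
    (True, True, True): 2,   (True, True, False): 1,
    (True, False, True): 4,  (True, False, False): 2,
    (False, True, True): 0,  (False, True, False): 1,
    (False, False, True): 0, (False, False, False): 0,
}

def rank(table, a):
    return min(r for w, r in table.items() if a(w))

def infers(table, a, b):
    """A |~ B with respect to the OCF given as table."""
    return rank(table, lambda w: a(w) and b(w)) < rank(table, lambda w: a(w) and not b(w))

A = lambda w: w[0]                                          # the case "penguin"
base = rank(KAPPA, A)
KAPPA_A = {w: r - base for w, r in KAPPA.items() if A(w)}   # kappa|A over Mod(A)

# B, C in Sc(A): penguin birds, and penguins that do not fly.
B = lambda w: w[0] and w[1]
C = lambda w: w[0] and not w[2]

print(infers(KAPPA, B, C), infers(KAPPA_A, B, C))   # the two relations agree
```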
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Conditionalized Inductive Inference</title>
        <p>After defining conditionalization for inference relations, we are now ready to apply the concept of
conditionalization to inductive inference operators as well.</p>
        <p>Definition 9 (Conditionalized Inductive Inference Operator C|A). Let C be an inductive inference
operator on ℒ and let A ∈ ℒ. Then the conditionalization of C by A is an inductive inference operator
C|A on Sc(A) such that for all Δ ⊆ (Sc(A)|Sc(A)) it holds that C|A(Δ) = C(Δ)|A.</p>
        <p>The following proposition shows that the conditionalization of inductive inference operators is
well-defined. The proof is immediate from Definition 8.</p>
        <p>Proposition 8. If C is an inductive inference operator, then for every A ∈ ℒ, C|A satisfies (DI) and (TV).</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. A Kinematics Principle for Inductive Reasoning</title>
      <p>The core idea behind (GRK) is that for revision with case-specific information, the plausibility of the
case and additional information about other cases should not influence the local revision result. As a
result of (GRK), conditionalization and revision are interchangeable as long as the new information
applies to exclusive cases. We can apply similar ideas to inductive inference: When we reason based
on conditional information concerning exclusive cases, then case-specific information about one case
should not influence reasoning about the other cases.</p>
      <p>In order to work towards a version of (GRK) for inductive reasoning, let us first examine the
consequences of (GRK) for applying the ind-operator. Let κ be a ranking function, let Δ be a conditional
knowledge base, and let A1, . . . , An be a premise splitting of Δ = Δ1 ∪ · · · ∪ Δn (with the premises in
Δi implying Ai for all 1 ≤ i ≤ n). Moreover, let D = ⋁_{A∈J} A with ∅ ≠ J ⊆ {A1, . . . , An}. Then (GRK)
implies the following:</p>
      <p>indκ(Δ ∪ {D})|Ai = (κ * (Δ ∪ {D}))|Ai = (κ|Ai) * Δi = indκ|Ai(Δi) . (7)</p>
      <p>More intuitively, (GRK) implies that conditionalization of an epistemic state induced via belief revision
and conditionalization of the background knowledge κ are interchangeable as long as there is an
appropriate case splitting in Δ. This is a first interesting result, but we can go one step further. Applying
(GRK) again yields:
indκ(Δ ∪ {D})|Ai = (κ|Ai) * Δi = (κ * Δ)|Ai = indκ(Δ)|Ai . (8)
Since inductive epistemic operators are a special case of ind, Equations (7) and (8) above directly imply
E*(Δ ∪ {D})|Ai = E*(Δ)|Ai (9)
for inductive epistemic operators E* based on OCF-revision operators * satisfying (GRK). Since Iocf is
compatible with conditionalization according to Corollary 7, we obtain
(Iocf ∘ E*)(Δ ∪ {D})|Ai = (Iocf ∘ E*)(Δ)|Ai . (10)</p>
      <p>Analogously, we can derive Equations (9) and (10) also for TPO-revision operators satisfying (QK).</p>
      <p>Recall that (Iocf ∘ E* ) is an inductive reasoning operator according to Proposition 3. By generalizing
Equation (10) to arbitrary inductive inference operators, we arrive at the following postulate of Inductive</p>
      <sec id="sec-7-1">
        <title>Reasoning Kinematics.</title>
        <p>(IRK) Let ∆ = ∆ 1 ∪ · · · ∪ ∆  be a set of conditionals, and let 1, . . . ,  be a case splitting of ∆ . Let
 = ⋁︀∈  with ∅ ≠  ⊆ { 1, . . . , }. Then for all cases , the following holds:</p>
        <p>E* (∆ ∪ {})| = E* (∆ )|
for inductive epistemic operators E* based on OCF-revision operators * satisfying (GRK). Since Iocf is
compatible with conditionalization according to Corollary 7, we obtain</p>
        <p>C(∆ ∪ {})| = C| (∆ )</p>
        <p>Observe that the conditionalization on the right-hand side of Equation (11) is applied to the inductive
reasoning operator instead of the output inference relation. This emphasizes that the whole inductive
reasoning process may happen within a limited scope. As long as (IRK) is fulfilled, we can inductively
reason in closed local semantic contexts without worrying about the influence of non-relevant
information like the plausibility of facts or conditional information regarding excluded cases.</p>
        <p>Example 1. Suppose that we were writing a scientific paper and wanted to provide an example about
the properties of a bird in the case that it was a penguin. In order to convince the reader that we
were not hiding any important information, we could construct our whole inference relation including
inferences about eagles, owls, sparrows, and other types of birds. Afterwards, we could focus only on
the relevant parts about penguins.</p>
        <p>However, assuming that (IRK) holds, we could significantly shorten the reasoning process: We would
not have to construct the whole inference relation (which could require constructing a representation
of our complete epistemic state about birds), but stay comfortably within the scope about penguins and
only consider the relevant conditional information.</p>
        <p>Moreover, we would expect the reader of our example not to care about whether the bird in question
actually happens to be a penguin or not, but to recognize that the example is hypothetical and that the
plausibility of the bird being a penguin is irrelevant.</p>
        <p>On the other hand, from a more technical perspective, applying the conditionalization to the inference
operator instead of the induced inference relation in Equation (11) is neither an additional restriction
nor a relaxation of the induced inference relation, since (IRK) can equivalently be formulated as follows
by expanding the definition of the inference operator in question: for all γᵢ and all A, B ∈ Sc(γᵢ)
(1 ≤ i ≤ n), it should hold that A |∼_{∆∪{φ}}|γᵢ B iff A |∼_{∆ᵢ}|γᵢ B.</p>
        <p>We formalize the connection between the kinematics principles for revision—(GRK) and (QK)—and
(IRK) with the following proposition. The proof is very similar to the initial derivation of the principle
starting from Equation (7) above.</p>
        <p>Proposition 9. Let * be a revision operator for OCFs or TPOs that satisfies (GRK) or (QK), respectively.
Then the inductive inference operator C* = (I ∘ E*) (with I ∈ {Iocf, Itpo} chosen suitably) satisfies (IRK).</p>
        <p>Similar to (GRK), we can split (IRK) into two notions of (ir)relevance. Let ∆ = ∆₁ ∪ · · · ∪ ∆ₙ be a set
of conditionals, and let γ₁, . . . , γₙ be a case splitting of ∆. Let φ = ⋁γ∈Γ γ with ∅ ≠ Γ ⊆ {γ₁, . . . , γₙ}.
(CaseRelIR) C(∆)|γᵢ = C|γᵢ(∆ᵢ) for all cases γᵢ.
(CaseIrrIR) C({φ})|γᵢ = C|γᵢ(∅) for all cases γᵢ.</p>
        <p>Just like (CaseRel), the postulate (CaseRelIR) states that when reasoning about one of the cases γᵢ,
considering the set ∆ᵢ is sufficient. The postulate (CaseIrrIR) states that the plausibility of the cases
should be truly irrelevant for conditional reasoning, since we obtain the same inferences as if the belief
base was empty. Together with (TV), this amounts to obtaining only classical consequences. In order
for (IRK) to imply these sub-postulates, we again need to assume that tautologies do not influence the
inductive reasoning mechanism:
(TIIR) C(∆ ∪ {⊤}) = C(∆)
Proposition 10. If (TIIR) holds, then (IRK) implies both (CaseRelIR) and (CaseIrrIR).</p>
        <p>Proof. From (IRK), (CaseRelIR) follows via φ ≡ ⊤ and (TIIR), and (CaseIrrIR) follows via ∆ = ∅.</p>
        <p>We conclude this section by showing that (IRK) is not self-evident. In particular, when E(∆) and
E(∆ᵢ) are chosen arbitrarily, it is easy to violate (IRK), as the following example shows.</p>
        <p>Example 2. Let Σ = {a, b, c} and ∆ = {(b|a), (b|¬a)}. Clearly γ₁ = a, γ₂ = ¬a is a case splitting of ∆
with ∆₁ = {(b|a)} and ∆₂ = {(b|¬a)}. Now let C = Itpo ∘ E be an inductive inference operator such that
E(∆) : abc ≺_∆ abc̄ ≺_∆ ābc ≺_∆ . . . ,
E(∆₁) : abc̄ ≺_{∆₁} abc ≺_{∆₁} . . . ,
where the dots “. . . ” above represent an arbitrary order over all remaining worlds in Ω and in Mod(γ₁),
respectively. Then we have a |∼_∆ c, but a ̸|∼_{∆₁} c. Therefore, (IRK) is not fulfilled.</p>
        <p>The problem in the example above essentially amounts to the atom c not being restricted by any
conditional, allowing E to choose the relative plausibility of the worlds abc and abc̄ freely. A similar
counterexample for inference relations induced from OCFs can be constructed analogously.</p>
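The violation in Example 2 can be replayed mechanically. The following sketch is our own illustration (the layered-list representation of TPOs, the helper `tpo_infers`, and the concrete orders follow our reading of the example and are not code from the paper):

```python
# Minimal sketch of reading off inferences from a TPO (I_tpo), with the TPO
# given as a list of layers of worlds, most plausible first.
from itertools import product

ATOMS = ["a", "b", "c"]
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=3)]

def tpo_infers(layers, prem, concl):
    """prem |~ concl iff all minimal prem-worlds satisfy concl."""
    for layer in layers:
        minimal = [w for w in layer if prem(w)]
        if minimal:
            return all(concl(w) for w in minimal)
    return True  # no prem-worlds at all: vacuously true

def world(a, b, c):
    return {"a": a, "b": b, "c": c}

rest = [w for w in WORLDS if w not in (world(True, True, True), world(True, True, False))]
# E(Delta) puts abc first; E(Delta_1) puts ab(not c) first; only the top layer matters here.
e_delta  = [[world(True, True, True)], [world(True, True, False)]] + [[w] for w in rest]
e_delta1 = [[world(True, True, False)], [world(True, True, True)]] + [[w] for w in rest if w["a"]]

is_a, is_c = (lambda w: w["a"]), (lambda w: w["c"])
print(tpo_infers(e_delta, is_a, is_c))   # True:  the minimal a-world satisfies c
print(tpo_infers(e_delta1, is_a, is_c))  # False: the minimal a-world falsifies c
```

The two orders disagree exactly because nothing in ∆ constrains c, which is the point of the counterexample.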
      </sec>
    </sec>
    <sec id="sec-8">
      <title>8. Evaluation of Inference Relations from the Literature</title>
      <p>In this section, we are going to evaluate inference relations from the literature with respect to (IRK).
The proofs of the propositions in this section are straightforward with the help of Lemma 2 and
Proposition 9, respectively, and thus have been omitted due to limited space.</p>
      <p>System Z System Z [12] is a well-known method to construct the minimal ranking model for a
conditional belief base. It is based on a notion of tolerance. A conditional (B|A) is tolerated by a set
of conditionals ∆′ ⊆ (ℒ|ℒ) if there exists a possible world ω such that ω |= AB and ω ̸|= A′B̄′ for
all (B′|A′) ∈ ∆′; in other words, a conditional is tolerated if it can be verified without falsifying any
conditional in ∆′. A tolerance partition of ∆ is a partition (∆₀, . . . , ∆ₖ) such that all conditionals in ∆ᵢ
are tolerated by ⋃ⱼ≥ᵢ ∆ⱼ. If the sets ∆ᵢ are chosen inclusion-maximally, starting from ∆₀, then the
partition is called the Z-partition of ∆ and denoted as Z(∆) = (∆₀, . . . , ∆ₖ). For all δ ∈ ∆, we define
Z∆(δ) = i if δ ∈ ∆ᵢ in the Z-partition. The System Z ranking model of ∆ is then defined as follows:
κZ∆(ω) = 0 if fal∆(ω) = ∅, and κZ∆(ω) = 1 + max_{δ ∈ fal∆(ω)} Z∆(δ) otherwise.</p>
      <p>We denote the inductive inference operator constructed from the mapping ∆ ↦→ κZ∆ and Iocf as CZ,
and the corresponding inference relation as CZ(∆) = |∼Z∆. This operator satisfies the (IRK) principle.
Proposition 11. CZ satisfies (IRK).</p>
      <p>Lexicographic Inference Lexicographic inference [13] is based on an order relation ⪯lex over
integer vectors, which is defined via (x₁, . . . , xₖ) ⪯lex (y₁, . . . , yₖ) if x = y or there exists 1 ≤ i ≤ k such
that xᵢ &lt; yᵢ and xⱼ = yⱼ for all j &gt; i.</p>
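The vector order compares from the most significant (last) component downwards. A minimal sketch of this comparison (the function name `leq_lex` is ours):

```python
def leq_lex(x, y):
    """x <=_lex y: x == y, or some index i with x[i] < y[i] and x[j] == y[j] for all j > i."""
    if x == y:
        return True
    return any(
        x[i] < y[i] and all(x[j] == y[j] for j in range(i + 1, len(x)))
        for i in range(len(x))
    )

print(leq_lex((3, 0, 2), (1, 1, 2)))  # True: last components tie, then 0 < 1 decides
print(leq_lex((1, 1, 2), (3, 0, 2)))  # False
```

Note that the earlier components only matter where all later components agree, which is what makes higher Z-partition layers dominate.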
      <p>For a belief base ∆ with Z(∆) = (∆₀, . . . , ∆ₖ), let lex∆(ω) = (|ver∆₁(ω)|, . . . , |ver∆ₖ(ω)|) for
every possible world ω. Now a total preorder ⪯lex∆ over Ω can be constructed such that ω ⪯lex∆ ω′ if
lex∆(ω) ⪯lex lex∆(ω′).</p>
      <p>We denote the inductive inference operator defined via this total preorder as Clex with
Clex(∆) = Itpo(⪯lex∆).</p>
      <p>Proposition 12. Clex satisfies (IRK).
p-Entailment A formula A p-entails another formula B given a conditional belief base ∆, A |∼p∆ B,
if A |∼κ B holds for all ranking functions κ with κ |= ∆. Equivalently, A |∼p∆ B holds iff ∆ ∪ {(¬B|A)}
is inconsistent [12]. We denote the respective inductive inference operator implementing p-entailment
as Cp, i.e. Cp(∆) = |∼p∆.</p>
      <p>Proposition 13. Cp satisfies (IRK).</p>
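The inconsistency characterization makes p-entailment easy to prototype: a base is consistent iff it admits a tolerance partition. The following self-contained sketch is our own illustration (predicate-based conditionals, names `consistent` and `p_entails` are ours):

```python
from itertools import product

# Hypothetical sketch: p-entailment via the inconsistency test
# A |~p B  iff  Delta + {(not B | A)} has no tolerance partition.

ATOMS = ["p", "b", "f"]
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=len(ATOMS))]

def consistent(delta):
    """A base is consistent iff it admits a tolerance partition."""
    rest = list(delta)
    while rest:
        layer = [
            (a, b) for (a, b) in rest
            if any(a(w) and b(w) and not any(a2(w) and not b2(w) for (a2, b2) in rest)
                   for w in WORLDS)
        ]
        if not layer:
            return False  # no conditional tolerated by the remainder
        rest = [c for c in rest if c not in layer]
    return True

def p_entails(delta, a, b):
    return not consistent(delta + [(a, lambda w: not b(w))])

# Penguin base again: birds fly, penguins are birds, penguins do not fly.
p, b, f = (lambda w: w["p"]), (lambda w: w["b"]), (lambda w: w["f"])
delta = [(b, f), (p, b), (p, lambda w: not w["f"])]
print(p_entails(delta, p, lambda w: not w["f"]))  # True
print(p_entails(delta, p, f))                     # False
```

Adding (f|p) to the penguin base leaves no tolerated conditional among the penguin rules, which is exactly the inconsistency that witnesses p |∼p ¬f.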
      <p>Skeptical c-Inference Skeptical c-inference [14] with respect to ∆ is defined by taking all
c-representations of ∆ into account, i.e. A |∼c-sk∆ B holds if A |∼κ B holds for all c-representations κ
of ∆. Let Cc-sk denote the respective inductive inference operator with Cc-sk(∆) = |∼c-sk∆.
Proposition 14. Cc-sk satisfies (IRK).</p>
      <p>Strategic c-Inference A c-revision is called strategic [15] if it uses a so-called selection strategy
σ that maps each pair (κ, ∆) to a solution of the constraint satisfaction problem described in Definition 6.
Selection strategies were first described for c-representations in [7].</p>
      <p>It was shown in [3] that strategic c-revisions with selection strategies that satisfy a postulate called
(IP-ESP), impact preservation with respect to equivalent subproblems, satisfy (GRK). Therefore, this class
of OCF-revision operators induces inference relations that satisfy (IRK).</p>
      <p>Proposition 15. Let *σ be a revision operator that satisfies (IP-ESP), and let E*σ be the inductive epistemic
operator induced by *σ. Then the inductive inference operator C*σ = (Iocf ∘ E*σ) satisfies (IRK).
Elementary Inductive Inference Chandler and Booth proposed a method to define conditional
TPO-revision operators ⊛ from propositional TPO-revision operators * [16]. If * is one of the so-called
elementary revision operators [17], then it was shown in [4] that ⊛ satisfies (QK). Therefore, ⊛ induces
inference relations satisfying (IRK).</p>
      <p>Proposition 16. Let ⊛ be a conditional TPO-revision operator defined from an elementary revision
operator as described above. Then the inductive inference operator C⊛ = (Itpo ∘ E⊛) satisfies (IRK).</p>
    </sec>
    <sec id="sec-9">
      <title>9. Kinematics and Syntax Splitting</title>
      <p>In this section, we are going to briefly discuss the relationship between kinematics and syntax splitting,
another principle for efficient reasoning in local contexts.</p>
      <sec id="sec-9-1">
        <title>9.1. Syntax Splitting</title>
        <p>Syntax splitting was first introduced as a property for propositional belief revision operators by [18]
and later adapted for inductive inference operators in [7].</p>
        <p>The core idea behind syntax splitting is quite similar to the idea behind the kinematics principles:
unrelated information should not influence the reasoning process. This is implemented by partitioning
the belief base syntactically such that one can focus on the locally relevant parts while ignoring the rest.
The difference lies in the splittings that are utilized: While case splittings split a belief base according to
exclusive premises (e.g. information about penguins and non-penguins), syntax splittings split according
to the syntax with which the information is expressed (e.g. information about penguins and politics).</p>
        <p>Let ∆ ⊆ (ℒ|ℒ) be a belief base consisting of conditionals over the language (ℒ|ℒ) defined over the
alphabet Σ. Let Σ₁, Σ₂ ⊆ Σ such that Σ₁ ∩ Σ₂ = ∅, and let ℒ₁, ℒ₂ be the languages defined over
Σ₁, Σ₂, respectively. Syntactically splitting ∆ over Σ₁, Σ₂ means partitioning ∆ = ∆₁ ∪ ∆₂ such that
∆₁ ⊆ (ℒ₁|ℒ₁) and ∆₂ ⊆ (ℒ₂|ℒ₂).</p>
        <p>According to [7], the postulate of syntax splitting (SynSplit) consists of two parts: (syntactic) relevance
and (syntactic) independence. Let ∆ be a conditional belief base that syntactically splits into ∆ = ∆₁ ∪ ∆₂,
and let C be an inductive inference operator with C(∆) = |∼∆.
(SynRel) For A, B ∈ ℒᵢ and i ∈ {1, 2}: A |∼∆ B iff A |∼∆ᵢ B.
(SynInd) For A, B ∈ ℒᵢ, D ∈ ℒⱼ, i, j ∈ {1, 2}, and i ≠ j: A |∼∆ B iff AD |∼∆ B.
(SynSplit) C satisfies both (SynRel) and (SynInd).</p>
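Whether a given partition is a syntax splitting is a purely syntactic check. A minimal sketch, assuming each conditional is tagged with the atom sets of its premise and conclusion (the representation and the name `syntax_splits` are ours; the atom names follow our reading of the penguin example):

```python
# Hypothetical sketch: checking that Delta = Delta1 + Delta2 is a syntax
# splitting over disjoint sub-alphabets Sigma1, Sigma2.

def atoms_of(cond):
    premise_atoms, conclusion_atoms = cond
    return premise_atoms | conclusion_atoms

def syntax_splits(delta1, delta2, sigma1, sigma2):
    """Each Delta_i must only mention atoms from Sigma_i, and the sub-alphabets are disjoint."""
    return (
        sigma1.isdisjoint(sigma2)
        and all(atoms_of(c) <= sigma1 for c in delta1)
        and all(atoms_of(c) <= sigma2 for c in delta2)
    )

# Penguin-style base: (f|b), (b|p), (not f|p) over {p, b, f}, and (not v|d) over {d, v}.
delta1 = [({"b"}, {"f"}), ({"p"}, {"b"}), ({"p"}, {"f"})]
delta2 = [({"d"}, {"v"})]
print(syntax_splits(delta1, delta2, {"p", "b", "f"}, {"d", "v"}))  # True
```

Negated atoms do not matter for the check, since only the signature of each conditional is relevant.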
      </sec>
      <sec id="sec-9-2">
        <title>9.2. Syntactic Relevance and Case Relevance</title>
        <p>The postulate (SynRel) is quite similar to (CaseRelIR). This becomes more obvious when we observe the
following consequence of (CaseRelIR): For all γᵢ and A, B ∈ Sc(γᵢ), it holds that A |∼∆ B iff A |∼∆ᵢ B.
Both postulates ensure that only one of the subsets from the respective splitting is relevant for the local
reasoning process. The difference between (SynRel) and (CaseRelIR) lies in their applicability. While
(SynRel) helps for queries that split over sub-alphabets, (CaseRelIR) helps with queries that concern
only certain cases. Consider the following example from [7] for a scenario in which (SynRel) is helpful
while (CaseRelIR) is not.</p>
        <p>Example 3 (“Penguins”). Let Σpen = {p, b, f, d, v}. The belief base ∆pen = {(f|b), (b|p), (¬f|p), (¬v|d)}
encodes that birds (b) can fly (f), penguins (p) are birds but cannot fly (¬f), and dark objects (d) are
not visible at night (¬v). Syntactically splitting ∆pen over Σ¹pen = {p, b, f} and Σ²pen = {d, v} yields:
∆¹pen = {(f|b), (b|p), (¬f|p)}, ∆²pen = {(¬v|d)}. To answer the query “p |∼∆pen ¬f ?”: (SynRel) implies
that only ∆¹pen needs to be considered, whereas (CaseRelIR) does not help since b ∨ p and d are not
exclusive, and no helpful case splitting can be found.</p>
        <p>For a contrary scenario, consider the following example from [3].</p>
        <p>Example 4 (“Furniture”). Let Σfur = {k, n, h}, encoding information about furniture, with the atoms
referring to kitchen items (k), items from the new collection (n), and items requiring heavy lifting (h).
Consider the belief base ∆fur = {(h|k), (¬h|kn), (¬h|¬k)}. Case splitting with γ₁ = k and γ₂ = ¬k yields:
∆¹fur = {(h|k), (¬h|kn)}, ∆²fur = {(¬h|¬k)}. To answer the query “k |∼∆fur h ?”: (SynRel) does not help
since there is no syntax splitting, but (CaseRelIR) implies that only ∆¹fur needs to be considered.</p>
      </sec>
      <sec id="sec-9-3">
        <title>9.3. Syntactic Independence and Case Irrelevance</title>
        <p>After observing the similarity between (CaseRelIR) and (SynRel), one could assume that similar parallels
exist between (CaseIrrIR) and (SynInd). However, the latter two postulates are quite different in nature.</p>
        <p>The property (CaseIrrIR) concerns itself with the plausibility of the cases themselves, stating that
this information should not matter at all when focusing on one of the cases via conditionalization. In
that way, the kinematics principle for inductive reasoning ensures that plausible propositional beliefs
are treated independently from conditional beliefs. Syntax splitting, on the other hand, does not apply
special treatment to plausible propositional beliefs.</p>
        <p>The postulate (SynInd) essentially states that when locally reasoning about one (syntactically limited)
area, extending the premise by (syntactically) unrelated information should not influence the local
reasoning results at all. This is very different from the scenarios with which the kinematics principle
concerns itself, since (IRK) assumes exclusive cases and only restricts reasoning for premises which
belong to one of these cases.</p>
      </sec>
      <sec id="sec-9-4">
        <title>9.4. Combination of (SynSplit) and (IRK)</title>
        <p>Syntax splitting and kinematics can be combined to yield high-quality inferences in an efficient way
since having both properties enables us to split belief bases in multiple ways.</p>
        <p>Example 5. Consider a combination of Examples 3 and 4, i.e. let our belief base ∆ contain information
about penguins (Σpen) as well as information about furniture (Σfur). Note that Σpen and Σfur are clearly
disjoint. Moreover, we hold the plausible propositional belief ¬k, since we believe the item in front of
us to not be a kitchen item (but that does not stop us from reasoning about hypothetical alternatives).
In summary, our belief base looks as follows: ∆ = (∆pen ∪ ∆fur ∪ {¬k}).</p>
        <p>Now assume that we obtain the inference relation |∼∆ via an inductive reasoning operator that
satisfies both (SynSplit) and (IRK). Again, we wish to answer the query “k |∼∆ h ?” from Example 4.
We first apply (SynSplit) and find that k |∼∆ h iff k |∼∆fur∪{¬k} h since ∆ = ∆pen ∪ (∆fur ∪ {¬k})
syntactically splits over Σpen and Σfur. Afterwards, we can apply (IRK) to obtain k |∼∆fur∪{¬k} h iff
k |∼∆¹fur h just like in Example 4, ignoring {¬k} as well because of (CaseIrrIR). Therefore, although our
belief base ∆ is much more complex now, the combination of syntax splitting and kinematics allowed
us to reduce the conditionals that need to be considered to the locally relevant minimum.</p>
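The savings achieved in Example 5 can also be counted directly. A two-line check, assuming |Σpen| = 5 and |Σfur| = 3 as in Examples 3 and 4:

```python
# Counting the worlds from Example 5: full language, after syntax splitting,
# and after additionally applying kinematics.
sigma_pen, sigma_fur = 5, 3  # |Sigma_pen|, |Sigma_fur|

all_worlds = 2 ** (sigma_pen + sigma_fur)   # 256: every world of the joint language
after_syntax_split = 2 ** sigma_fur         # 8: only the furniture sub-language
after_kinematics = after_syntax_split // 2  # 4: only the models of the case k
```

Halving in the last step works because exactly half of the furniture worlds are models of k.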
        <p>The combination of syntax splitting and kinematics is very promising, not just because of high-quality
inferences, but in particular for increasing the computational efficiency of model-based inductive
inference. If we needed to consider all possible worlds in Example 5, we would have to deal with
2^(|Σpen|+|Σfur|) = 256 worlds. Syntax splitting allows us to reduce this to 2^|Σfur| = 8 worlds, and finally
kinematics halves the number of worlds again since only models of the case k need to be considered.
Therefore, utilizing these principles could be essential for making large-scale conditional belief bases
manageable.</p>
        <p>10. Conclusions and Future Work</p>
        <p>In this paper, we have presented a kinematics principle for inductive reasoning (IRK) based on the
principle of Generalized Ranking Kinematics (GRK) for belief revision from [3]. Moreover, we have
investigated the relationship between inductive inference operators and revision operators for OCFs
and TPOs, building on recent work by [6], to motivate our version of the kinematics principle. We have
evaluated inference relations from the literature with respect to (IRK) and have been able to show that
several well-known inference relations—System Z, lexicographic inference, p-entailment, and skeptical
c-inference—satisfy this kinematics principle. Finally, we compared (IRK) to the postulate (SynSplit) for
inductive inference relations from [7].</p>
        <p>Future work includes the evaluation of more approaches from the literature, for example System W
[19] and its extension for a weaker notion of consistency [20]. Furthermore, comparing our approach
to Weydert’s version of subset independence for inference [5] would be interesting. Moreover, the
relationship and interplay with other techniques for efficiency or notions of relevance has not yet
been deeply investigated. Especially the relationships between case splittings, syntax splittings and
their extensions like conditional syntax splittings [21] or semantic splittings [22], and other forms of
modularity with respect to conditional belief bases and OCF/TPO models deserve attention, both for
non-monotonic reasoning and belief revision.</p>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>References</p>
      <p>[2] J. E. Shore, R. W. Johnson, Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy, IEEE Trans. Inf. Theory 26 (1980) 26–37. doi:10.1109/TIT.1980.1056144.
[3] M. Sezgin, G. Kern-Isberner, C. Beierle, Ranking kinematics for revising by contextual information, Ann. Math. Artif. Intell. 89 (2021) 1101–1131. doi:10.1007/s10472-021-09746-2.
[4] G. Kern-Isberner, M. Sezgin, C. Beierle, A kinematics principle for iterated revision, Artificial Intelligence 314 (2023) 103827. doi:10.1016/j.artint.2022.103827.
[5] E. Weydert, System JLZ – rational default reasoning by minimal ranking constructions, Journal of Applied Logic 1 (2003) 273–308. doi:10.1016/S1570-8683(03)00016-8.
[6] G. Kern-Isberner, W. Spohn, Inductive reasoning, conditionals, and belief dynamics, Journal of Applied Logics 11 (2024) 89–127.
[7] G. Kern-Isberner, C. Beierle, G. Brewka, Syntax splitting = relevance + independence: New postulates for nonmonotonic reasoning from conditional belief bases, in: Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning, 2020, pp. 560–571. doi:10.24963/kr.2020/56.
[8] W. Spohn, Ordinal conditional functions: A dynamic theory of epistemic states, in: W. L. Harper, B. Skyrms (Eds.), Causation in Decision, Belief Change, and Statistics, Springer Netherlands, 1988, pp. 105–134. doi:10.1007/978-94-009-2865-7_6.
[9] M. Goldszmidt, J. Pearl, Qualitative probabilities for default reasoning, belief revision, and causal modeling, Artificial Intelligence 84 (1996) 57–112.
[10] W. Spohn, The Laws of Belief: Ranking Theory and Its Philosophical Applications, Oxford University Press, 2012. doi:10.1093/acprof:oso/9780199697502.001.0001.
[11] G. Kern-Isberner, A thorough axiomatization of a principle of conditional preservation in belief revision, Ann. Math. Artif. Intell. 40 (2004) 127–164.
[12] J. Pearl, System Z: A natural ordering of defaults with tractable applications to nonmonotonic reasoning, in: R. Parikh (Ed.), Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge, Morgan Kaufmann, 1990, pp. 121–135.
[13] D. Lehmann, Another perspective on default reasoning, Ann. Math. Artif. Intell. 15 (1995) 61–82. doi:10.1007/BF01535841.
[14] C. Beierle, C. Eichhorn, G. Kern-Isberner, S. Kutsch, Properties of skeptical c-inference for conditional knowledge bases and its realization as a constraint satisfaction problem, Ann. Math. Artif. Intell. 83 (2018) 247–275. doi:10.1007/s10472-017-9571-9.
[15] C. Beierle, G. Kern-Isberner, Selection strategies for inductive reasoning from conditional belief bases and for belief change respecting the principle of conditional preservation, in: E. Bell, F. Keshtkar (Eds.), Proceedings of the 34th International Florida Artificial Intelligence Research Society Conference, 2021, pp. 563–568. doi:10.32473/flairs.v34i1.128459.
[16] J. Chandler, R. Booth, Revision by Conditionals: From Hook to Arrow, in: Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning, 2020, pp. 233–242. doi:10.24963/kr.2020/24.
[17] J. Chandler, R. Booth, Elementary belief revision operators, J. Philos. Log. 52 (2023) 267–311.
[18] R. Parikh, Beliefs, belief revision, and splitting languages, Logic, Language, and Computation 2 (1999) 266–278.
[19] C. Komo, C. Beierle, Nonmonotonic reasoning from conditional knowledge bases with system W, Ann. Math. Artif. Intell. 90 (2022) 107–144. doi:10.1007/s10472-021-09777-9.
[20] J. Haldimann, C. Beierle, G. Kern-Isberner, T. Meyer, Conditionals, infeasible worlds, and reasoning with system W, in: M. Franklin, S. A. Chun (Eds.), Proceedings of the 36th International Florida Artificial Intelligence Research Society Conference, 2023. doi:10.32473/FLAIRS.36.133268.
[21] J. Heyninck, G. Kern-Isberner, T. Meyer, J. Haldimann, C. Beierle, Conditional syntax splitting for non-monotonic inference operators, in: B. Williams, Y. Chen, J. Neville (Eds.), Proceedings of the 37th AAAI Conference on Artificial Intelligence, 2023, pp. 6416–6424. doi:10.1609/AAAI.V37I5.25789.
[22] C. Beierle, J. Haldimann, G. Kern-Isberner, Semantic splitting of conditional belief bases, in: A. Raschke, E. Riccobene, K. Schewe (Eds.), Logic, Computation and Rigorous Methods – Essays Dedicated to Egon Börger on the Occasion of His 75th Birthday, volume 12750 of Lecture Notes in Computer Science, Springer, 2021, pp. 82–95. doi:10.1007/978-3-030-76020-5_5.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Jeffrey</surname>
          </string-name>
          , The Logic of Decision, University of Chicago Press,
          <year>1965</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>