<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Brief Introduction Into Activation-Based Conditional Inference</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marco Wilhelm</string-name>
          <email>marco.wilhelm@cs.tu-dortmund.de</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Diana Howey</string-name>
          <email>diana.howey@cs.tu-dortmund.de</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gabriele Kern-Isberner</string-name>
          <email>gabriele.kern-isberner@cs.tu-dortmund.de</email>
        </contrib>
        <aff>
          <institution>Dept. of Computer Science, TU Dortmund University</institution>
          ,
          <addr-line>Dortmund</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Computer Science, FernUniversität in Hagen</institution>
          ,
          <addr-line>Hagen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>4</fpage>
      <lpage>8</lpage>
      <abstract>
        <p>Activation-based conditional inference integrates several aspects of human reasoning into formal conditional reasoning, such as focusing, forgetting, and remembering, by combining conditional reasoning and the cognitive architecture ACT-R. The idea is to select a reasonable subset of a conditional belief base before drawing inferences. The selection is based on an activation function which assigns to the conditionals in the belief base a degree of activation based on the conditional's relevance for the current query and its usage history.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Activation-based conditional inference combines ACT-R [
        <xref ref-type="bibr" rid="ref1 ref2">2, 1</xref>
        ] and conditional reasoning. ACT-R (Adaptive Control of Thought-Rational) is a well-founded cognitive architecture developed to formalize human reasoning in which a selection of cognitive entities (chunks as declarative memory and production rules as procedural memory) is performed before these entities are used to solve a reasoning task. From a cognitive point of view, there are basically two processes which affect the selection: the long-term process of forgetting and remembering, and the short-term process of activating certain beliefs depending on the current context. In this paper, we adapt the concept of (de)activation of cognitive entities from ACT-R and combine it with the task of drawing conditional inferences. More precisely, we define an activation function for conditionals of the form (B|A) with the meaning “if A holds, then usually B holds, too.” The conditionals with the highest activation are selected for the inference task. Therewith, we generalize the concept of focused inference [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and give it a profound cognitive meaning, and we also equip ACT-R with a modern, high-quality inference formalism.
We consider a propositional language L over a finite set of atoms Σ which we extend to the conditional language (L|L) = {(B|A) | A, B ∈ L}, where conditionals (B|A) ∈ (L|L) have the intuitive meaning “if A holds, then usually B holds, too.” A formal semantics of conditionals is given by ranking functions over possible worlds [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Here, possible worlds are the propositional interpretations I ∈ I, represented as complete conjunctions of literals (atoms or their negations). The set of all possible worlds is denoted by Ω. A ranking function κ: Ω → ℕ₀ ∪ {∞} with κ⁻¹(0) ≠ ∅ maps possible worlds to a degree of plausibility. Lower ranks indicate higher plausibility, so that κ⁻¹(0) is the set of the most plausible worlds. Ranking functions are extended to formulas by κ(A) = min{κ(ω) | ω ⊨ A}. κ accepts a conditional (B|A) iff κ(AB) &lt; κ(AB̄) or κ(A) = ∞, and κ is a (ranking) model of a belief base Δ (a finite set of conditionals) iff κ accepts all conditionals in Δ.
      </p>
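      <p>The ranking semantics above can be illustrated with a small executable sketch. The following Python snippet is our own illustrative example (the atoms, the κ values, and all function names are invented for this sketch, not taken from the paper): it enumerates all possible worlds over three atoms, defines a ranking function, and checks the acceptance condition κ(AB) &lt; κ(AB̄).</p>

```python
from itertools import product

# Illustrative atoms: p (penguin), b (bird), f (flies). The kappa values
# below are invented for this example, not taken from the paper.
atoms = ["p", "b", "f"]
worlds = [dict(zip(atoms, bits)) for bits in product([True, False], repeat=3)]

def kappa(world):
    if world["p"] and world["f"]:
        return 2                      # flying penguins: very implausible
    if world["p"] or (world["b"] and not world["f"]):
        return 1                      # penguins, and non-flying birds
    return 0                          # everything else is maximally plausible

def rank(formula):
    # kappa(A) = min{kappa(w) | w satisfies A}; infinity if A is unsatisfiable
    ranks = [kappa(w) for w in worlds if formula(w)]
    return min(ranks) if ranks else float("inf")

def accepts(a, b):
    # kappa accepts (B|A) iff the AB-worlds are strictly more plausible
    # (lower-ranked) than the A-and-not-B-worlds, or A is infinitely implausible.
    if rank(a) == float("inf"):
        return True
    return rank(lambda w: a(w) and not b(w)) > rank(lambda w: a(w) and b(w))

bird_flies = accepts(lambda w: w["b"], lambda w: w["f"])      # True
penguin_flies = accepts(lambda w: w["p"], lambda w: w["f"])   # False
```

      <p>With these illustrative ranks, (f|b) is accepted while (f|p) is not, matching the familiar penguin example.</p>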
      <p>
        An inference operator [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is a mapping I which assigns to each belief base Δ an inference relation |∼_Δ^I ⊆ L × L such that
– if (B|A) ∈ Δ, then A |∼_Δ^I B, (Direct Inference)
– if Δ = ∅, then A |∼_Δ^I B only if A ⊨ B. (Trivial Vacuity)
C_I(Δ) = {(B|A) | A |∼_Δ^I B} denotes the set of inductive inferences from Δ wrt. I. Inference operators yield a three-valued inference response to a query (B|A): [[(B|A)]]_Δ^I = yes iff (B|A) ∈ C_I(Δ), no iff (¬B|A) ∈ C_I(Δ), and unknown otherwise.
      </p>
      <p>
        Drawing inferences from the whole belief base can be computationally expensive and does not fit human reasoning. Thus, focused inference [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] defines a (query-dependent) subset Φ(Δ) ⊆ Δ as a focus and draws inferences wrt. Φ(Δ) instead of Δ: a conditional (B|A) follows from Δ wrt. the inference operator I in the focus Φ(Δ) iff (B|A) ∈ C_I(Φ(Δ)). Finding an appropriate focus is challenging, but apart from the computational benefits of small foci, appropriate foci may unveil the part of Δ which is relevant for answering the query.
      </p>
    </sec>
    <sec id="sec-2">
      <title>ACT-R Architecture</title>
      <p>
        ACT-R [
        <xref ref-type="bibr" rid="ref1 ref2">2, 1</xref>
        ] is a cognitive architecture which formalizes human reasoning and distinguishes between declarative and procedural memory. In the declarative memory, categorical knowledge about individuals or objects is stored in the form of chunks, while the procedural memory consists of production rules and describes how chunks are processed. Reasoning in ACT-R starts with an initial priming of chunks. The chunk with the highest activation is processed by production rules in order to compute a solution to the reasoning task. If this fails, the activation passes into an iterative process. The retrieval of chunks basically depends on an activation function which is calculated anew for each specific request and is given by the base-level activation B(ci) and the spreading activation S(ci), which is a sum of degrees of association between chunks S(ci, cj) weighted by W(cj):
A(ci) = B(ci) + S(ci) = B(ci) + Σ_j W(cj) · S(ci, cj). (1)
      </p>
      <p>
        The base-level activation B(ci) of a chunk reflects the entrenchment of ci in the reasoner’s memory and depends on the recency and frequency of its use. Typically, B(ci) decreases over time (fading out) and increases when the chunk is active. The spreading activation S(ci) of a chunk formalizes the impact of the priming. While the degree of association S(ci, cj) reflects how strongly related the two chunks ci and cj are in principle (i.e., it reflects whether ci and cj deal with the same issue or not), the weighting factor W(cj) indicates whether this connection is triggered by the actual priming.
      </p>
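      <p>A direct transcription of the activation function (1) can clarify how the two components interact. The numeric values below are purely hypothetical, since ACT-R leaves the components unformalized; only the combination rule follows equation (1).</p>

```python
# Hypothetical values for three chunks c1, c2, c3.
base = {"c1": 0.6, "c2": 0.3, "c3": 0.1}      # B(ci): base-level activation
weight = {"c1": 1.0, "c2": 0.5, "c3": 0.0}    # W(cj): priming weights
assoc = {                                      # S(ci, cj): association strengths
    ("c1", "c2"): 0.4, ("c2", "c1"): 0.4,
    ("c1", "c3"): 0.2, ("c3", "c1"): 0.2,
    ("c2", "c3"): 0.0, ("c3", "c2"): 0.0,
}

def activation(ci):
    # A(ci) = B(ci) + sum_j W(cj) * S(ci, cj)  -- equation (1)
    spreading = sum(weight[cj] * assoc.get((ci, cj), 0.0)
                    for cj in base if cj != ci)
    return base[ci] + spreading

# The chunk with the highest activation is retrieved first.
best = max(base, key=activation)
# best == "c1" (A(c1) = 0.6 + 1.0*0.0 + 0.5*0.4 + 0.0*0.2 = 0.8)
```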
    </sec>
    <sec id="sec-3">
      <title>Activation-Based Conditional Inference</title>
      <p>The production-system-based logical basis of ACT-R does not keep pace with modern KRR formalisms. Therefore, we propose a cognitively inspired model of conditional reasoning by interpreting the concepts of ACT-R in terms of logic, conditionals, and inference. In particular, we replace chunks by conditionals in a belief base Δ and derive a focus Φ(Δ) based on the activation of the conditionals in order to draw focused inferences. Atoms in L play the role of cognitive units, and the production rules are replaced by an inference operator I. From the conditional logical perspective, the main value of this approach is the cognitive justification of the focus and the option to integrate further cognitive concepts such as forgetting and remembering. More formally, we calculate an activation value A(r) &gt; 0 for all r ∈ Δ. If A(r) is not less than a certain threshold θ ≥ 0, then the conditional r is within the focus Φ(Δ), i.e.,
Φ(Δ) = Φ(Δ, A, θ) = {r ∈ Δ | A(r) ≥ θ}.</p>
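      <p>The thresholded selection of the focus is straightforward to implement. The following sketch of ours uses hypothetical activation values for three conditionals; it also shows that the threshold zero recovers the full belief base.</p>

```python
def focus(belief_base, activation, theta):
    # Phi(Delta, A, theta) = {r in Delta : A(r) >= theta}
    return {r for r in belief_base if activation(r) >= theta}

# Hypothetical activation values for three conditionals r1, r2, r3.
act = {"r1": 0.9, "r2": 0.5, "r3": 0.2}
delta = set(act)

first_try = focus(delta, act.get, 0.4)   # {"r1", "r2"}
fallback = focus(delta, act.get, 0.0)    # theta = 0 recovers the whole base
```

      <p>If a query remains unknown wrt. <italic>first_try</italic>, lowering θ and re-running <italic>focus</italic> realizes the iterative process described above.</p>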
      <p>Note that Φ(Δ) implicitly depends on a query q = (B|A), too, since queries will serve as the priming and A depends on that priming. Eventually, we say that the query q can be inferred from Δ wrt. I, A, and θ iff q ∈ C_I(Φ(Δ, A, θ)). When answering the query fails, i.e., if [[q]]_{Φ(Δ, A, θ)}^I = unknown, the inference process can be iterated by lowering the threshold θ. In the limit, when θ = 0, one has Φ(Δ, A, 0) = Δ, thus [[q]]_{Φ(Δ, A, 0)}^I = [[q]]_Δ^I.</p>
      <p>In the ACT-R framework, the functionality of the activation function (1) is extensively discussed, but its single components are not formalized mathematically. Here, we give a concrete instantiation of (1) in the conditional setting which can be seen as a blueprint for further investigations and empirical analyses. Let Δ be a belief base, ri ∈ Δ, and q a further conditional (the query resp. priming); then the activation of ri wrt. Δ and q is</p>
      <p>A_q(ri) = B_Δ(ri) + Σ_{rj ∈ Δ} W_q(rj) · S(ri, rj), (2)</p>
      <p>where the first summand is the base-level activation and the second summand is the spreading activation S_q(ri). In (2), the base-level activation B_Δ(r) reflects the entrenchment of r in the reasoner’s memory. Since epistemic entrenchment and ranking semantics are dual ratings, the normality of a conditional is a good estimator, and we define</p>
      <p>B_Δ(r) = 1 / (1 + Z_Δ(r)), r ∈ Δ,</p>
      <p>
        where Z_Δ(r) is the Z-rank of r from System Z, which is a valuable measure of normality according to [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>The degree of association S(ri, rj) is a measure of connectedness between the conditionals in Δ and is defined by</p>
      <p>
        S(ri, rj) = |Σ(ri) ∩ Σ(rj)| / |Σ(ri) ∪ Σ(rj)|, ri, rj ∈ Δ,
where Σ(r) is the set of atoms mentioned in r. That is, S(ri, rj) is the number of shared atoms relative to all atoms in ri or rj. This syntactically driven definition of S(ri, rj) is motivated by and extends the principle of relevance from nonmonotonic reasoning, which states that if the belief base Δ splits into two sub-belief bases Δ1 and Δ2 with Σ(Δ1) ∩ Σ(Δ2) = ∅, and the query is defined over only one of the signatures Σ(Δi), say Σ(Δ1), then only the conditionals in Δ1 should be relevant for answering this query (cf. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]).
      </p>
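      <p>The degree of association is exactly the Jaccard index of the two atom sets. The sketch below is our own illustration; conditionals are modeled as (antecedent atoms, consequent atoms) pairs, which is an encoding choice of ours, not the paper’s.</p>

```python
def atoms_of(conditional):
    # Sigma(r): the atoms mentioned in r, with r modeled as a pair
    # (antecedent atoms, consequent atoms) of sets.
    antecedent, consequent = conditional
    return set(antecedent).union(consequent)

def association(ri, rj):
    # S(ri, rj) = |Sigma(ri) intersect Sigma(rj)| / |Sigma(ri) union Sigma(rj)|,
    # i.e. the Jaccard index of the two atom sets.
    shared = atoms_of(ri).intersection(atoms_of(rj))
    total = atoms_of(ri).union(atoms_of(rj))
    return len(shared) / len(total) if total else 0.0

r1 = ({"b"}, {"f"})   # (f|b): birds usually fly
r2 = ({"p"}, {"b"})   # (b|p): penguins are birds
s = association(r1, r2)   # shares "b" out of {b, f, p}, so 1/3
```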
      <p>
        The weighting factor W_q(r) indicates how much the priming q triggers the conditional r. We formalize the influence of the priming according to the spreading activation theory [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] by a labeling of the vertices in an undirected graph N(Δ) with vertices V = Σ and edges
      </p>
      <p>E = {{a, b} | ∃ r ∈ Δ : {a, b} ⊆ Σ(r)}.</p>
      <p>
        The labels are the triggering values τ_q(a) ∈ [0, 1] for a ∈ Σ which indicate how much a is triggered by q. Once the triggering values are determined, we follow the idea that a conditional r is triggered not more than the atoms in Σ(r) and define the respective weighting factor by
      </p>
      <p>W_q(r) = min{τ_q(a) | a ∈ Σ(r)}.</p>
      <p>The actual labeling of the vertices in N(Δ) is an iterative process and starts with labeling the atoms a ∈ Σ which are mentioned in the query q, i.e., a ∈ Σ(q), with τ_q(a) = 1. In the subsequent steps, the neighboring atoms are labeled, and so on. The remaining atoms a′ which are not reachable from the initially labeled atoms in Σ(q) are labeled with τ_q(a′) = 0. The label of an atom a″ in between is the sum of the labels of its already labeled neighbors, divided by one plus the sum of all labels so far, i.e.,
τ_q(a″) = ( Σ_{b ∈ L : {a″, b} ∈ E} τ_q(b) ) / ( 1 + Σ_{b ∈ L} τ_q(b) ),
where L is the set of the already labeled atoms. This guarantees that these labels are between 0 and 1 and decrease for increasing iteration steps. Therewith, the triggering value of an atom depends on both the triggering values of the associated (earlier triggered) atoms and their quantity.</p>
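      <p>The iterative labeling can be realized as a breadth-first traversal of N(Δ). The following sketch is our reading of the procedure (graph, atom names, and the chain example are invented for illustration): each new layer of neighbors is labeled from the labels of the previous layers before the labels are committed.</p>

```python
from collections import defaultdict

def trigger_values(edges, sigma, query_atoms):
    # Iteratively label atoms: query atoms get tau = 1; each next layer of
    # neighbors gets (sum of labels of its already labeled neighbors) divided
    # by (1 + sum of all labels so far); unreachable atoms get tau = 0.
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    tau = {a: 1.0 for a in query_atoms}
    frontier = set(query_atoms)
    while frontier:
        nxt = {b for a in frontier for b in neighbors[a]} - set(tau)
        total = sum(tau.values())
        layer = {}
        for a in nxt:
            labeled = sum(tau[b] for b in neighbors[a] if b in tau)
            layer[a] = labeled / (1.0 + total)
        tau.update(layer)
        frontier = nxt
    for a in sigma:
        tau.setdefault(a, 0.0)   # atoms not reachable from the query
    return tau

# Chain a - b - c with an isolated atom d; the query mentions only a.
tau = trigger_values([("a", "b"), ("b", "c")], {"a", "b", "c", "d"}, {"a"})
# tau["a"] = 1.0, tau["b"] = 1/(1+1) = 0.5, tau["c"] = 0.5/(1+1.5) = 0.2,
# tau["d"] = 0.0
```

      <p>As required, the labels stay within [0, 1] and shrink with each layer of the traversal.</p>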
    </sec>
    <sec id="sec-4">
      <title>Forgetting and Remembering</title>
      <p>In ACT-R, the base-level activation of a chunk is not constant but decreases over time and increases when the chunk is retrieved. This integrates the concepts of forgetting and remembering into ACT-R. In order to capture this dynamic view on the base-level activation, we propose to extend the base-level activation such that B_Δ(r) is multiplied with a forgetting factor after each inference request. For a fixed λ ∈ [0, 1), the focus Φ(Δ) = Φ(Δ, A, θ), and r ∈ Δ, the forgetting factor λ_{Φ(Δ)}(r) is given by 1 + λ if r ∈ Φ(Δ) and by 1 − λ otherwise. By doing so, the base-level activation of a conditional is decreased when the conditional is not selected for answering the query, and it is increased otherwise. When applying this update of the base-level activation for every inference request, the usage history of the conditionals is implemented into B_Δ(r).</p>
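      <p>The multiplicative update can be sketched in a few lines. The values and the parameter name <italic>lam</italic> are our own illustration; only the update rule (multiply by 1 + λ inside the focus, by 1 − λ outside) comes from the text.</p>

```python
def update_base_level(base_level, in_focus, lam=0.1):
    # Multiply B(r) by (1 + lam) for conditionals inside the focus and by
    # (1 - lam) for the rest; lam in [0, 1) is the forgetting parameter.
    return {r: b * ((1 + lam) if r in in_focus else (1 - lam))
            for r, b in base_level.items()}

b = {"r1": 0.5, "r2": 0.5}
b = update_base_level(b, in_focus={"r1"}, lam=0.1)
# r1 is strengthened towards 0.55, r2 fades towards 0.45
```

      <p>Applying this update after every inference request encodes the usage history of each conditional in its base-level activation.</p>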
    </sec>
    <sec id="sec-5">
      <title>Conclusions and Future Work</title>
      <p>
        We applied conditional reasoning to ACT-R [
        <xref ref-type="bibr" rid="ref1 ref2">2, 1</xref>
        ] and developed a prototypical model for activation-based conditional inference. For this, we reformulated the activation function from ACT-R for conditionals and selected the conditionals with the highest degree of activation for drawing inferences. With our approach it is possible to implement several aspects of human reasoning, such as focusing, forgetting, and remembering, into modern expert systems. The main challenge for future work is to find, for a given query q = (B|A), a proper subset Δ′ of a belief base Δ such that q is answered the same wrt. Δ′ and Δ, i.e., [[q]]_{Δ′}^I = [[q]]_Δ^I.
Acknowledgments. This work is supported by DFG Grant KE 1413/10-1 awarded to Gabriele Kern-Isberner and DFG Grant BE 1700/9-1 awarded to Christoph Beierle as part of the priority program “Intentional Forgetting in Organizations” (SPP 1921).
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          :
          <article-title>How can the human mind occur in the physical universe?</article-title>
          Oxford University Press (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lebiere</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The atomic components of thought</article-title>
          . Psychology Press (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Kern-Isberner</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beierle</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brewka</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Syntax splitting = relevance + independence: New postulates for nonmonotonic reasoning from conditional belief bases</article-title>
          .
          <source>In: Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning</source>
          . pp.
          <fpage>560</fpage>
          -
          <lpage>571</lpage>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Pearl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>System Z: A natural ordering of defaults with tractable applications to nonmonotonic reasoning</article-title>
          .
          <source>In: Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge</source>
          . pp.
          <fpage>121</fpage>
          -
          <lpage>135</lpage>
          . Morgan Kaufmann (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Spohn</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>The Laws of Belief: Ranking Theory and Its Philosophical Applications</article-title>
          . Oxford University Press (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Wilhelm</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kern-Isberner</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Focused inference and System P</article-title>
          .
          <source>In: Thirty-Fifth AAAI Conference on Artificial Intelligence</source>
          . pp.
          <fpage>6522</fpage>
          -
          <lpage>6529</lpage>
          . AAAI Press (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>