=Paper=
{{Paper
|id=Vol-2961/paper_1
|storemode=property
|title=A Brief Introduction Into Activation-Based Conditional Inference
|pdfUrl=https://ceur-ws.org/Vol-2961/paper_1.pdf
|volume=Vol-2961
|authors=Marco Wilhelm,Diana Howey,Gabriele Kern-Isberner,Kai Sauerwald,Christoph Beierle
|dblpUrl=https://dblp.org/rec/conf/ki/WilhelmHKSB21
}}
==A Brief Introduction Into Activation-Based Conditional Inference==
Proceedings of the 7th Workshop on Formal and Cognitive Reasoning
Marco Wilhelm¹[0000-0003-0266-2334], Diana Howey¹[0000-0002-7203-4862], Gabriele Kern-Isberner¹[0000-0001-8689-5391], Kai Sauerwald²[0000-0002-1551-7016], and Christoph Beierle²[0000-0002-0736-8516]

¹ Dept. of Computer Science, TU Dortmund University, Dortmund, Germany
{marco.wilhelm, diana.howey, gabriele.kern-isberner}@cs.tu-dortmund.de
² Dept. of Computer Science, FernUniversität in Hagen, Hagen, Germany
{kai.sauerwald, christoph.beierle}@fernuni-hagen.de
Abstract. Activation-based conditional inference integrates several aspects of human reasoning, such as focusing, forgetting, and remembering, into formal conditional reasoning by combining conditional reasoning and the cognitive architecture ACT-R. The idea is to select a reasonable subset of a conditional belief base before drawing inferences. The selection is based on an activation function which assigns to the conditionals in the belief base a degree of activation based on the conditional's relevance for the current query and its usage history.
1 Introduction
Activation-based conditional inference combines ACT-R [2, 1] and conditional reasoning. ACT-R (Adaptive Control of Thought-Rational) is a well-founded cognitive architecture developed to formalize human reasoning in which a selection of cognitive entities (chunks as declarative memory and production rules as procedural memory) is performed before these entities are used to solve a reasoning task. From a cognitive point of view, there are basically two processes which affect the selection: the long-term process of forgetting and remembering and the short-term process of activating certain beliefs depending on the current context.
In this paper, we adapt the concept of (de)activation of cognitive entities from ACT-R and combine it with the task of drawing conditional inferences. More precisely, we define an activation function for conditionals of the form (B|A) with the meaning "if A holds, then usually B holds, too." The conditionals with the highest activation are selected for the inference task. Therewith, we generalize the concept of focused inference [6] and give it a profound cognitive meaning, and we also equip ACT-R with a modern, high-quality inference formalism.
2 Logical Foundations
We consider a propositional language L over a finite set of atoms Σ which we extend to the conditional language (L|L) = {(B|A) | A, B ∈ L}, where conditionals (B|A) ∈ (L|L) have the intuitive meaning "If A holds, then usually B holds, too." A formal semantics of conditionals is given by ranking functions over possible worlds [5]. Here, possible worlds are the propositional interpretations I ∈ I represented as complete conjunctions of literals (atoms or their negations). The set of all possible worlds is denoted by Ω. A ranking function κ : Ω → N₀ ∪ {∞} with κ⁻¹(0) ≠ ∅ maps possible worlds to a degree of plausibility. Lower ranks indicate higher plausibility, so that κ⁻¹(0) is the set of the most plausible worlds. Ranking functions are extended to formulas by κ(A) = min{κ(ω) | ω ⊨ A}. κ accepts a conditional (B|A) iff κ(AB) < κ(A¬B) or κ(A) = ∞, and κ is a (ranking) model of a belief base Δ (a finite set of conditionals) iff κ accepts all conditionals in Δ.

Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
An inference operator [3] is a mapping I which assigns to each belief base Δ an inference relation |∼_I ⊆ L × L such that

– if (B|A) ∈ Δ, then A |∼_I B, (Direct Inference)
– if Δ = ∅, then A |∼_I B only if A |= B. (Trivial Vacuity)

Δ_I = {(B|A) | A |∼_I B} denotes the set of inductive inferences from Δ wrt. I. Inference operators yield a three-valued inference response to a query (B|A):

  [[(B|A)]]_I^Δ = accepted, if (B|A) ∈ Δ_I; rejected, if (¬B|A) ∈ Δ_I; unknown, otherwise.
Drawing inferences from the whole belief base can be computationally expensive and does not fit human reasoning. Thus, focused inference [6] defines a (query-dependent) subset Γ(Δ) ⊆ Δ as a focus and draws inferences wrt. Γ(Δ) instead of Δ: A conditional (B|A) follows from Δ wrt. the inference operator I in the focus Γ(Δ) iff (B|A) ∈ Γ(Δ)_I. Finding an appropriate focus is challenging but, apart from the computational benefits of small foci, appropriate foci may unveil the part of Δ which is relevant for answering the query.
3 ACT-R Architecture
ACT-R [2, 1] is a cognitive architecture which formalizes human reasoning and distinguishes between declarative and procedural memory. In the declarative memory, categorical knowledge about individuals or objects is stored in the form of chunks, while the procedural memory consists of production rules and describes how chunks are processed. Reasoning in ACT-R starts with an initial priming of chunks. The chunk with the highest activation is processed by production rules in order to compute a solution to the reasoning task. If this fails, the activation passes into an iterative process. The retrieval of chunks basically depends on an activation function which is calculated anew for each specific request and is given by the base-level activation B(c_i) and the spreading activation S(c_i), which is a sum of degrees of association between chunks S(c_i, c_j) weighted by W(c_j):

  A(c_i) = B(c_i) + S(c_i) = B(c_i) + Σ_j W(c_j) · S(c_i, c_j).   (1)
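The computation in (1) can be sketched in a few lines. This is an illustrative rendering only: the dictionaries B, W, and S are hypothetical stand-ins for the base-level activations, weighting factors, and degrees of association, which ACT-R itself leaves partly unformalized.

```python
def actr_activation(i, chunks, B, W, S):
    """Compute A(c_i) = B(c_i) + sum_j W(c_j) * S(c_i, c_j) as in (1).

    B, W: dicts mapping chunk names to base-level activation / weighting factor;
    S: dict mapping ordered chunk pairs to their degree of association.
    """
    spreading = sum(W[j] * S[(i, j)] for j in chunks)
    return B[i] + spreading
```

For instance, with B = {"c1": 0.5, "c2": 0.2}, a priming that weights only c2 (W = {"c1": 0.0, "c2": 1.0}), and S("c1", "c2") = 0.5, chunk c1 receives activation 0.5 + 1.0 · 0.5 = 1.0.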
The base-level activation B(c_i) of a chunk reflects the entrenchment of c_i in the reasoner's memory and depends on the recency and frequency of its use. Typically, B(c_i) decreases over time (fading out) and increases when the chunk is active. The spreading activation S(c_i) of a chunk formalizes the impact of the priming. While the degree of association S(c_i, c_j) reflects how strongly related the two chunks c_i and c_j are in principle (i.e., it reflects whether c_i and c_j deal with the same issue or not), the weighting factor W(c_j) indicates whether this connection is triggered by the actual priming.
4 Activation-Based Conditional Inference
The production-system-based logical basis of ACT-R does not keep pace with modern KRR formalisms. Therefore, we propose a cognitively inspired model of conditional reasoning by interpreting the concepts of ACT-R in terms of logic, conditionals, and inference. In particular, we replace chunks by conditionals in a belief base Δ and derive a focus Γ(Δ) based on the activation of the conditionals in order to draw focused inferences. Atoms in L play the role of cognitive units, and the production rules are replaced by an inference operator I. From the conditional logical perspective, the main value of this approach is the cognitive justification of the focus and the option to integrate further cognitive concepts such as forgetting and remembering. More formally, we calculate an activation value A(r) > 0 for all r ∈ Δ. If A(r) is not less than a certain threshold θ ≥ 0, then the conditional r is within the focus Γ(Δ), i.e.,

  Γ(Δ) = Γ(Δ, A, θ) = {r ∈ Δ | A(r) ≥ θ}.

Note that Γ(Δ) implicitly depends on a query q = (B|A), too, since queries will serve as the priming and A depends on that priming. Eventually, we say that the query q can be inferred from Δ wrt. I, A, and θ iff q ∈ Γ(Δ, A, θ)_I. When answering the query fails, i.e., if [[q]]_I^{Γ(Δ,A,θ)} = unknown, the inference process can be iterated by lowering the threshold θ. In the limit, when θ = 0, one has Γ(Δ, A, 0) = Δ, thus [[q]]_I^{Γ(Δ,A,0)} = [[q]]_I^Δ.
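The threshold-based focus selection and the iterative lowering of θ can be sketched as follows. This is a minimal sketch: the function names, the verdict strings, and the concrete threshold schedule are illustrative assumptions, not part of the paper.

```python
def focus(delta, activation, theta):
    """Gamma(delta, A, theta): conditionals whose activation reaches theta."""
    return {r for r in delta if activation(r) >= theta}

def answer_with_lowering(delta, activation, infer, query, thetas):
    """Query with decreasing thresholds; theta = 0 recovers the full belief base."""
    for theta in thetas:                      # e.g. [0.8, 0.4, 0.0], decreasing
        verdict = infer(focus(delta, activation, theta), query)
        if verdict != "unknown":              # answered within the current focus
            return verdict
    return "unknown"
```

With a stub inference operator that only answers once the focus is large enough, a query left unknown at θ = 0.8 may still be answered after lowering θ to 0.4.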
In the ACT-R framework, the functionality of the activation function (1) is extensively discussed, but its single components are not formalized mathematically. Here, we give a concrete instantiation of (1) in the conditional setting which can be seen as a blueprint for further investigations and empirical analyses. Let Δ be a belief base, r_i ∈ Δ, and q a further conditional (the query resp. priming); then the activation of r_i wrt. Δ and q is

  A_q(r_i) = B_Δ(r_i) + Σ_{r_j ∈ Δ} W_q(r_j) · S(r_i, r_j),   (2)

where the first summand is the base-level activation and the sum is the spreading activation S_q(r_i).
In (2), the base-level activation B_Δ(r) reflects the entrenchment of r in the reasoner's memory. Since epistemic entrenchment and ranking semantics are dual ratings, the normality of a conditional is a good estimator, and we define

  B_Δ(r) = 1 / (1 + Z_Δ(r)),   r ∈ Δ,

where Z_Δ(r) is the Z-rank of r from System Z, which is a valuable measure of normality according to [4].
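Assuming the Z-ranks have already been computed by System Z, the base-level activation is a one-liner (a sketch; the function name is ours):

```python
def base_level(z_rank):
    """B_Delta(r) = 1 / (1 + Z_Delta(r)); lower Z-rank = more normal = higher activation."""
    return 1.0 / (1.0 + z_rank)
```

A maximally normal conditional (Z-rank 0) thus receives the maximal base-level activation 1, and the activation decays hyperbolically with increasing Z-rank.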
The degree of association S(r_i, r_j) is a measure of connectedness between the conditionals in Δ and is defined by

  S(r_i, r_j) = |Σ(r_i) ∩ Σ(r_j)| / |Σ(r_i) ∪ Σ(r_j)|,   r_i, r_j ∈ Δ,

where Σ(r) is the set of atoms mentioned in r. That is, S(r_i, r_j) is the number of shared atoms relative to all atoms in r_i or r_j. This syntactically driven definition of S(r_i, r_j) is motivated by and extends the principle of relevance from nonmonotonic reasoning, which states that if the belief base Δ splits into two sub-belief bases Δ₁ and Δ₂ with Σ(Δ₁) ∩ Σ(Δ₂) = ∅ and the query is defined over only one of the signatures Σ(Δ_i), say Σ(Δ₁), then only the conditionals in Δ₁ should be relevant for answering this query (cf. [3]).
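This degree of association is exactly the Jaccard index of the two atom sets; a minimal sketch (the function name is ours, atom sets are passed directly):

```python
def association(atoms_i, atoms_j):
    """S(r_i, r_j): shared atoms relative to all atoms of r_i and r_j (Jaccard index)."""
    return len(atoms_i & atoms_j) / len(atoms_i | atoms_j)
```

For hypothetical conditionals with Σ(r₁) = {b, f, p} and Σ(r₂) = {b, f}, the association is 2/3, while conditionals over disjoint signatures get association 0, matching the relevance principle above.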
The weighting factor W_q(r) indicates how much the priming q triggers the conditional r. We formalize the influence of the priming according to the spreading activation theory [1] by a labeling of the vertices in an undirected graph N(Δ) with vertices V = Σ and edges

  E = {{a, b} | ∃ r ∈ Δ : {a, b} ⊆ Σ(r)}.

The labels are the triggering values τ_q(a) ∈ [0, 1] for a ∈ Σ which indicate how much a is triggered by q. Once the triggering values are determined, we follow the idea that a conditional r is triggered not more than the atoms in Σ(r) and define the respective weighting factor by

  W_q(r) = min{τ_q(a) | a ∈ Σ(r)}.

The actual labeling of the vertices in N(Δ) is an iterative process and starts with labeling the atoms a ∈ Σ which are mentioned in the query q, i.e. a ∈ Σ(q), with τ_q(a) = 1. In the subsequent steps, the neighboring atoms are labeled, and so on. The remaining atoms a′ which are not reachable from the initially labeled atoms in Σ(q) are labeled with τ_q(a′) = 0. The labels of the atoms a″ in between are the sum of the labels of the already labeled neighbors weighted by the sum of all labels so far plus 1, i.e.,

  τ_q(a″) = ( Σ_{b ∈ L : {a″,b} ∈ E} τ_q(b) ) / ( 1 + Σ_{b ∈ L} τ_q(b) ),

where L is the set of the already labeled atoms. This guarantees that these labels are between 0 and 1 and decrease for increasing iteration steps. Therewith, the triggering value of an atom depends on both the triggering values of the associated (earlier triggered) atoms and their quantity.
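The labeling procedure can be sketched as a frontier-by-frontier propagation. This is a sketch under our reading that each step uses only the labels fixed in earlier steps; the function names are ours.

```python
def triggering_values(atoms, edges, query_atoms):
    """Label the atoms of N(Delta) with tau_q in [0, 1], spreading from the query."""
    nbrs = {a: set() for a in atoms}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    tau = {a: 1.0 for a in query_atoms}       # priming: atoms of the query get 1
    frontier = set(query_atoms)
    while frontier:
        fresh = {b for a in frontier for b in nbrs[a]} - tau.keys()
        total = sum(tau.values())             # sum of all labels so far
        tau.update({b: sum(tau[c] for c in nbrs[b] if c in tau) / (1 + total)
                    for b in fresh})
        frontier = fresh
    for a in atoms:                           # atoms unreachable from Sigma(q)
        tau.setdefault(a, 0.0)
    return tau

def weight(cond_atoms, tau):
    """W_q(r) = min of the triggering values of the atoms mentioned in r."""
    return min(tau[a] for a in cond_atoms)
```

On a chain p – b – f with an isolated atom w and query atoms {p}, the labels decrease along the chain (1, then 0.5, then 0.2) and w stays at 0, illustrating how triggering fades with distance from the priming.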
5 Forgetting and Remembering
In ACT-R, the base-level activation of a chunk is not constant but decreases over time and increases when the chunk is retrieved. This integrates the concepts of forgetting and remembering into ACT-R. In order to capture this dynamic view on the base-level activation, we propose to extend the base-level activation such that B_Δ(r) is multiplied with a forgetting factor after each inference request. For a fixed ε ∈ [0, 1), the focus Γ(Δ) = Γ(Δ, A, θ), and r ∈ Δ, the forgetting factor ρ_{ε,Γ(Δ)}(r) is given by 1 + ε if r ∈ Γ(Δ) and by 1 − ε otherwise. By doing so, the base-level activation of a conditional is decreased when the conditional is not selected for answering the query, and it is increased otherwise. When applying this update of the base-level activation for every inference request, the usage history of the conditionals is implemented into B_Δ(r).
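The update rule can be sketched as follows (a sketch; eps names the fixed parameter in [0, 1) and the function name is ours):

```python
def apply_forgetting(base_levels, in_focus, eps):
    """Multiply B(r) by 1 + eps if r was in the focus, by 1 - eps otherwise."""
    return {r: b * ((1 + eps) if r in in_focus else (1 - eps))
            for r, b in base_levels.items()}
```

Applied after every inference request, conditionals that are never selected fade out geometrically, while frequently used conditionals become more entrenched.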
6 Conclusions and Future Work
We applied conditional reasoning to ACT-R [2, 1] and developed a prototypical model for activation-based conditional inference. For this, we reformulated the activation function from ACT-R for conditionals and selected the conditionals with the highest degree of activation for drawing inferences. With our approach it is possible to implement several aspects of human reasoning, such as focusing, forgetting, and remembering, into modern expert systems. The main challenge for future work is to find for a given query q = (B|A) a proper subset Δ′ of a belief base Δ such that q is answered the same wrt. Δ′ and Δ, i.e., [[q]]_I^{Δ′} = [[q]]_I^Δ.

Acknowledgments. This work is supported by DFG Grant KE 1413/10-1 awarded to Gabriele Kern-Isberner and DFG Grant BE 1700/9-1 awarded to Christoph Beierle as part of the priority program "Intentional Forgetting in Organizations" (SPP 1921).
References
1. Anderson, J.R.: How can the human mind occur in the physical universe? Oxford University Press (2007)
2. Anderson, J.R., Lebiere, C.: The atomic components of thought. Psychology Press (1998)
3. Kern-Isberner, G., Beierle, C., Brewka, G.: Syntax splitting = relevance + independence: New postulates for nonmonotonic reasoning from conditional belief bases. In: Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning. pp. 560–571 (2020)
4. Pearl, J.: System Z: A natural ordering of defaults with tractable applications to nonmonotonic reasoning. In: Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge. pp. 121–135. Morgan Kaufmann (1990)
5. Spohn, W.: The Laws of Belief: Ranking Theory and Its Philosophical Applications. Oxford University Press (2012)
6. Wilhelm, M., Kern-Isberner, G.: Focused inference and System P. In: Thirty-Fifth AAAI Conference on Artificial Intelligence. pp. 6522–6529. AAAI Press (2021)