=Paper=
{{Paper
|id=Vol-136/paper-2
|storemode=property
|title=Contextual Reasoning in Concept Spaces
|pdfUrl=https://ceur-ws.org/Vol-136/200.pdf
|volume=Vol-136
}}
==Contextual Reasoning in Concept Spaces==
Stijn De Saeger and Atsushi Shimojima
Japan Advanced Institute of Science and Technology (JAIST)
{stijn,ashimoji}@jaist.ac.jp
Abstract. This paper presents ongoing work on a modular theory for
contextual reasoning and the formalization of context. We introduce a
two-tiered knowledge representation formalism that grounds an agent's
pre-symbolic understanding of the world in a state space, and go on
to develop a notion of concept spaces as its logical abstraction. After
discussing some advantages of this model, we show how it accounts quite
naturally for certain high-level forms of context-dependent inference.
1 Introduction
The program behind our research was originally put forward by Barwise and
Seligman in "Information Flow" ([1]). Arguing for the relevance of state spaces
as models for human reasoning, they state:
Within the recent cognitive science literature, logic is often seen as irre-
vocably wed to what is perceived to be an outdated symbol-processing
model of cognition. . . . Perhaps the use of state spaces might allow a
marriage of logic with continuous methods like those used in dynami-
cal systems and so provide a toehold for those who envision a distinctly
different model of human reasoning.
Our point of departure is the representation of some agent's empirically ac-
quired knowledge in a real-valued state space, where states represent feasible
combinations of values for the various components in a distributed system. Con-
necting this non-symbolic layer of knowledge representation up with ideas from
Formal Concept Analysis [3] and, more recently, Boolean Concept Logic [2] al-
lows us to speak about state spaces and their inhabitants at a higher level of
abstraction: a propositional language of concepts. This two-tiered architecture
reflects a cognitive agent's attunement to law-like regularities in some part of the
world perceived, there and then, as a system of interacting components: for all
practical purposes, a context. We discuss some basic properties of this logic and
go on to show that it can be used to model forms of inference that are traditionally
dealt with in the field of pragmatics. More concretely, we look at phenomena
like context-dependent inference and restricted forms of presupposition accom-
modation. The theory and all examples in this paper were implemented in the
functional programming language Haskell, and source code can be made avail-
able to interested readers upon request.
2 State Spaces
State spaces offer a means to view any number of arbitrary world entities as
parameters in some complex, distributed system. Mapping those entities and
their interrelations onto an n-dimensional mathematical space therefore gives a
sub-symbolic representation of the state the agent perceives that part of the
world to be in.
Dimensions and States Let a dimension be a pair (i, Rᵢ) where i is a natural
number taken from a set of indices I = {1, ..., n} and Rᵢ ⊆ ℝ, the reals, is a
range. We will not assume that Rᵢ is finite, only bounded. An n-dimensional
state space Σ on a sequence of dimensions Dim = {(1, R₁), ..., (n, Rₙ)} then is
a subset of the product space R₁ × ... × Rₙ.
Alternatively, one can think of Dim as a mapping from the domain I to ranges
over ℝ such that Dim = {1 ↦ R₁, ..., n ↦ Rₙ, (n+1) ↦ ∅, ...}. Since for named
dimensions both the dimension name and the index i serve to uniquely deter-
mine what dimension we are referring to, we will often abbreviate a dimension
d = (i, Rᵢ) to (dᵢ, Rᵢ) when no confusion is likely, simply replacing the
dimension's index with the indexed dimension name.
For any dimension dᵢ with 1 ≤ i ≤ n, the i-th coordinate rᵢ of an n-tuple
σ = (r₁, ..., rₙ) with r₁, ..., rₙ ∈ ℝ is called the dimᵢ value of σ and written as
dimᵢ(σ). Given an n-dimensional state space Σ over a set of dimensions Dim,
σ is called a state in Σ (written σ ∈ Σ) if dimᵢ(σ) ∈ Rᵢ for every dimension
(i, Rᵢ) ∈ Dim.
An Example In good tradition, our state space example will be the light
circuit from [1] and [4]. Let the four-dimensional space Σcirc represent an agent
A's understanding of an integrated light circuit inside her house, consisting of two
switches, a slider and a bulb. The bulb is on whenever just one of the switches is,
but the slider can be used to adjust the bulb's brightness from 25 to 100 percent
when it is on. Σcirc is defined on the following dimensions:
sw1  = (1, R₁) where R₁ = {0, 1}
sw2  = (2, R₂) where R₂ = {0, 1}
slid = (3, R₃) where R₃ = {0 ... 1}
bulb = (4, R₄) where R₄ = {0 ... 1}
Dimension bulb is an output observable: its value is uniquely determined by
the values of sw1, sw2 and slid. The equation
dim_bulb(σ) = |dim_sw1(σ) − dim_sw2(σ)| · (3 · dim_slid(σ) + 1) / 4
models A's intuitive understanding of the light circuit, where the intensity of
the light is a linear function of the two switches and the slider. Note that this
effectively limits the range of R₄ to {0, 0.25 ... 1} instead of {0 ... 1}. This is
why we define state spaces in terms of subsets of the original product space.
The range of the first two dimensions of Σcirc consists of just two values,
0 or 1, representing the corresponding component being off or on. The latter
two dimensions are actually closed intervals of real values between 0 and 1,
showing the position of the slider and the brightness of the bulb respectively.
We assume that for our present objectives the notation "rₖ ... rₙ" represents the
closed interval of reals from rₖ to rₙ.
A state σ = (1, 0, 0.3, 0.475), for example, would be a state in Σcirc.
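The state space and its defining law can be made executable. The paper's implementation is in Haskell; the sketch below is our own Python approximation, and the 0.1-step sampling of the slider dimension is purely our assumption so that the space can be enumerated:

```python
# Sketch of the circuit state space (our discretization, not the paper's code).
def bulb(sw1, sw2, slid):
    """A's law for the bulb: lit iff exactly one switch is up, with
    brightness scaled from 0.25 to 1.0 by the slider."""
    # round() suppresses float noise so derived values compare cleanly
    return round(abs(sw1 - sw2) * (3 * slid + 1) / 4, 9)

SLIDER = [k / 10 for k in range(11)]            # 0.0, 0.1, ..., 1.0 (sampled)

# Only value combinations consistent with the bulb law count as states.
STATES = [(s1, s2, sl, bulb(s1, s2, sl))
          for s1 in (0, 1) for s2 in (0, 1) for sl in SLIDER]

def is_state(sigma):
    """Membership in the (discretized) space."""
    return sigma in STATES
```

On this sketch the example state σ = (1, 0, 0.3, 0.475) is in the space, since |1 − 0| · (3 · 0.3 + 1)/4 = 0.475, whereas a tuple like (1, 1, 0.5, 1.0) is not: with both switches up, the law forces the bulb value to 0.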
3 Concept Spaces
A point in a given dimension space is a sub-symbolic representation of the
state an agent perceives some part of the world to be in. The range of values
of the various dimensions of the space and their interdependence reflects the
agent's systematic understanding of her surroundings.
The way this allows for a subtle and fine-grained representation of the agent's
reality is typical of non-symbolic knowledge representation formalisms like con-
nectionist models. At the same time, state spaces cannot escape the standard
criticism of non-symbolic approaches either: they lack the principled abstraction
that symbolic approaches excel at. If logical abstraction is a cognitive reality,
then it has to be accounted for.¹
Constraint Types We define a notion of constraint types as the way in which
an agent distinguishes semantically meaningful patterns in collections of states.
Formally, constraint types bear much resemblance to dimensions. Given a di-
mension d = (i, Rᵢ) taken from a set Dim, a constraint type c is again a pair
(dᵢ, R_c), where 1 ≤ i ≤ |Dim| is a unique dimension index and R_c is a subset of
ℝ. Note that we do not stipulate the stronger requirement that R_c be a subset
of Rᵢ: in essence c merely partitions a set of states into those that fall into R_c,
and those that do not. What it means for a state σ to satisfy a constraint type
c = (dᵢ, R_c), written σ ⊨ c, is:
σ ⊨ (dᵢ, R_c) ⟺ dimᵢ(σ) ∈ R_c
where dimᵢ(σ) is the respective value of σ for dimension dᵢ.
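Constraint-type satisfaction is a pointwise check on one coordinate. A minimal sketch (ours, not the paper's Haskell) that represents R_c as a membership predicate, so finite sets and closed intervals are handled uniformly:

```python
# A constraint type (d_i, R_c) as a pair: a 0-based dimension index i
# plus a membership test standing in for the range R_c.

def interval(lo, hi):
    """R_c given as the closed interval of reals from lo to hi."""
    return lambda v: lo <= v <= hi

def members(*values):
    """R_c given as a finite set of values."""
    return lambda v: v in values

def satisfies(sigma, ctype):
    """sigma |= (d_i, R_c)  iff  dim_i(sigma) is in R_c."""
    i, r_c = ctype
    return r_c(sigma[i])
```

For the earlier state σ = (1, 0, 0.3, 0.475): σ fails (slid₃, {0 ... 0.1}) but satisfies (bulb₄, {0.33 ... 0.66}), since 0.475 lies in that interval.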
Concept Spaces Several contemporary theories² handle knowledge represen-
tation by treating classification objects as first-class citizens of the theory.
Called formal contexts in formal concept analysis ([3]), classifications in channel
theory ([1]) or Chu spaces in category theory, the central idea is that of a binary
incidence relation defined on two sets: objects (or tokens) and attributes (or
types). For good introductions and representative examples of such theories, we
refer to [1] and [3].
¹ Cognitive models that bring the best of both worlds together form an interesting
research area. A very notable recent attempt is the S³ inference engine in [4].
² Think for instance of channel theory, formal concept analysis and some other infor-
mation theories based on category theory.
In the same spirit, we conceive of a formal context C = (Σ_C, cons_C, ⊨_C) on
Dim, where Σ_C represents the associated state space, cons_C is a set of constraint
types over Dim, and ⊨_C ⊆ Σ_C × cons_C, i.e. for any σ ∈ Σ_C, c ∈ cons_C:
(σ, c) ∈ ⊨_C iff σ ⊨ c.
For the sake of our light circuit example, let Ccirc be the formal context
(Σcirc, cons_circ, ⊨_circ). The constraint types in cons_circ are straightforward:
Constraint Type             Intuitive Meaning
(sw1₁, {1})                 Switch 1 is up.
(sw1₁, {0})                 Switch 1 is down.
(sw2₂, {1})                 Switch 2 is up.
(sw2₂, {0})                 Switch 2 is down.
(slid₃, {0 ... 0.1})        Slider is down.
(slid₃, {0.9 ... 1})        Slider is up.
(bulb₄, {0 ... 0.33})       Light is dim.
(bulb₄, {0.33 ... 0.66})    Light is medium.
(bulb₄, {0.66 ... 1})       Light is bright.
(bulb₄, {0.1 ... 1})        Bulb is on.
(bulb₄, {0})                Bulb is off.
A state σ = (0, 1, 0.9, 0.925) is thus classified as follows: σ ⊨_circ {(sw1₁, {0}),
(sw2₂, {1}), (slid₃, {0.9 ... 1}), (bulb₄, {0.66 ... 1}), (bulb₄, {0.1 ... 1})}.
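That classification can be reproduced mechanically. In the sketch below (our own encoding, not the paper's), each constraint type becomes a named row with an index into the state tuple and a closed range, and classifying a state means collecting every row it falls under:

```python
# cons_circ encoded as (name, index, lo, hi) rows; finite ranges such as
# {0} and {1} become degenerate intervals.
CONS_CIRC = [
    ("sw1 up", 0, 1, 1),          ("sw1 down", 0, 0, 0),
    ("sw2 up", 1, 1, 1),          ("sw2 down", 1, 0, 0),
    ("slider down", 2, 0.0, 0.1), ("slider up", 2, 0.9, 1.0),
    ("dim", 3, 0.0, 0.33),        ("medium", 3, 0.33, 0.66),
    ("bright", 3, 0.66, 1.0),     ("on", 3, 0.1, 1.0),
    ("off", 3, 0.0, 0.0),
]

def classify(sigma):
    """All constraint types sigma falls under, i.e. its row in |=_circ."""
    return {name for name, i, lo, hi in CONS_CIRC if lo <= sigma[i] <= hi}
```

Calling `classify((0, 1, 0.9, 0.925))` yields {'sw1 down', 'sw2 up', 'slider up', 'bright', 'on'}, matching the classification given in the text.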
We assume basic familiarity with the ideas of Formal Concept Analysis [3]
and the like. To recall, for a formal context (O, A, ⊨), a formal concept (X, Y)
consists of an extent X ⊆ O and an intent Y ⊆ A, and furthermore:
X = Y′ = {x ∈ O | ∀y ∈ Y : x ⊨ y}
Y = X′ = {y ∈ A | ∀x ∈ X : x ⊨ y}
Note that if (X, Y) is a concept in C, then (X, Y) = (X″, X′) = (Y′, Y″). A semi-
concept, on the other hand, is a pair (X, Y) with X ⊆ O and Y ⊆ A that
satisfies either X′ = Y or Y′ = X. The notion of semiconcept, as developed in
[2], is a generalization of formal concepts, and so the set of semiconcepts of C is
a superset of the set of formal concepts of C, which are derived by computing
the full concept closure (X″, X′) from a semiconcept (X, X′) (or dually, (Y′, Y″)
from (Y′, Y)).
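The derivation operators (′) and the closure from semiconcept to formal concept are directly computable over any finite context. A generic sketch, using a small toy incidence relation of our own rather than the circuit:

```python
# A tiny formal context (O, A, |=): objects, attributes, incidence pairs.
O = {"s1", "s2", "s3"}
A = {"p", "q", "r"}
I = {("s1", "p"), ("s1", "q"), ("s2", "q"), ("s2", "r"), ("s3", "q")}

def ext_prime(Y):
    """Y' = all objects satisfying every attribute in Y."""
    return {o for o in O if all((o, y) in I for y in Y)}

def int_prime(X):
    """X' = all attributes satisfied by every object in X."""
    return {a for a in A if all((x, a) in I for x in X)}

def close(X):
    """The formal concept (X'', X') generated by the semiconcept (X, X')."""
    return (ext_prime(int_prime(X)), int_prime(X))
```

Here `close({"s2", "s3"})` illustrates the closure step: the common intent of s2 and s3 is {q}, whose extent is all three objects, so the semiconcept ({s2, s3}, {q}) closes to the formal concept ({s1, s2, s3}, {q}).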
We call the semiconcept lattice associated with a formal context a concept
space, and write S(C) to denote the concept space obtained from a formal context
C = (Σ_C, cons_C, ⊨_C). Being a lattice structure, there is a natural order relation
associated with S(C): for each c₁ = (e₁, i₁), c₂ = (e₂, i₂) ∈ S(C), c₁ ⊑
c₂ ⟺ e₁ ⊆ e₂ and i₁ ⊇ i₂.
4 The Logic L(C)
Concept spaces give us the desired logical abstraction layer over the dynamical
system modeled by a state space. Semantically, then, we treat the concepts in the
concept space S(C) as propositions. Taken as the atoms of some agent's propositional
language, concepts consist of a relevant set of states for which a given proposition
holds, and the set of meaningful properties (the constraint types) in virtue of
which this is so. Intuitively, a concept c is a proposition in the sense that it
`predicates' something of a given situation (state of the system) or some relevant
part of it. Put differently, it asserts that the state σ currently under discussion
is partly described by the constraint types of c, or conversely, that σ belongs to
their extent. As a proposition, a concept c = (X, Y) represents a statement about
a state σ, so we write `σ ▷ c' to denote that `σ belongs to c', or `c holds of σ'.
Formally, σ ▷ c whenever σ ∈ X. Note that this implies that ∀y ∈ Y : σ ⊨ y.³
Also, we assume two auxiliary functions Ext and Int to access the respective
extent and intent of a concept c.
We could define the logic L(C) in terms of a concept space S(C) and a set of
operations over S(C):
L(C) = (S(C), {⊓, ⊔, ¬, ⌐, ⊥, ⊤})
These operations are defined in [2] as:
Meet:       (X₁, Y₁) ⊓ (X₂, Y₂) = (X₁ ∩ X₂, (X₁ ∩ X₂)′)
Join:       (X₁, Y₁) ⊔ (X₂, Y₂) = ((Y₁ ∩ Y₂)′, Y₁ ∩ Y₂)
Negation:   ¬(X, Y) = (O \ X, (O \ X)′)
Opposition: ⌐(X, Y) = ((A \ Y)′, A \ Y)
Bottom:     ⊥ = (∅, A)
Top:        ⊤ = (O, ∅)
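Over a finite context these operations are all computable, with extents and intents as plain sets. A sketch of ours (the paper's engine is Haskell) over a small hypothetical context, using frozensets so the pairs stay comparable:

```python
# Wille's semiconcept operations over a finite context (O, A, I).
O = frozenset({"s1", "s2", "s3"})
A = frozenset({"p", "q", "r"})
I = {("s1", "p"), ("s1", "q"), ("s2", "q"), ("s2", "r"), ("s3", "q")}

def ext_prime(Y):
    return frozenset(o for o in O if all((o, y) in I for y in Y))

def int_prime(X):
    return frozenset(a for a in A if all((x, a) in I for x in X))

def meet(c1, c2):                     # (X1 n X2, (X1 n X2)')
    x = c1[0] & c2[0]
    return (x, int_prime(x))

def join(c1, c2):                     # ((Y1 n Y2)', Y1 n Y2)
    y = c1[1] & c2[1]
    return (ext_prime(y), y)

def neg(c):                           # (O \ X, (O \ X)')
    x = O - c[0]
    return (x, int_prime(x))

def opp(c):                           # ((A \ Y)', A \ Y)
    y = A - c[1]
    return (ext_prime(y), y)

TOP = (O, frozenset())                # (O, empty)
BOT = (frozenset(), A)                # (empty, A)
```

A quick sanity check of the definitions: negating ⊤ empties the extent, whose derived intent is then all of A, so ¬⊤ = ⊥; dually, ⌐⊥ = ⊤.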
A semiconcept c satisfies c ⊓ c = c or c ⊔ c = c, depending on which derivation (′)
operation was used to derive the concept. Accordingly, semiconcepts are called ⊓-
semiconcepts or ⊔-semiconcepts, and their intersection gives exactly the formal
concepts of C. From the definitions above, it should be clear that the set of
⊓-semiconcepts S⊓(C) is closed under the operations ⊓, ¬ and ⊤, and similarly
S⊔(C) is closed under the operations ⊔, ⌐ and ⊥.⁴ Because all regularity (knowledge)
in the system is contained in Σcirc, we wish to define logical operations on
concepts strictly in terms of their extents. The constraint types in cons_C provide
a way to discriminate between chunks of knowledge, but not the knowledge itself.
For example, there is nothing in the constraint types that says that one switch
being up and one being down implies that the light is on: that is not their role.
Thus, operating strictly within S⊓(C), we need to redefine disjunctive be-
haviour in our logic so as to have the result of a join operation return a ⊓-
semiconcept as well. Using an alternative join c₁ ⊔* c₂ = ¬(¬c₁ ⊓ ¬c₂),
we will restrict our logic to L(C) = (S(C), {⊓, ⊔*, ¬, ⊥, ⊤}). This new definition
allows us to talk about the conjunction and disjunction of concepts nicely within
the set of ⊓-semiconcepts of S(C).⁵
³ For convenience we overload this symbol so that Σ ▷ c iff σ ▷ c for every σ ∈ Σ.
⁴ Whenever we use the unspecified term `concepts' in what follows, we take it to mean
concepts in their most general guise, that is, semiconcepts drawn from S(C) that may
or may not be formal concepts. We will call them concepts when discussing S(C),
and propositions when talking about L(C).
Back To The Example We obtain the desired logic L(Ccirc) from the concept
space S(Ccirc) for Ccirc = (Σcirc, cons_circ, ⊨_circ), together with the above opera-
tions. S(Ccirc) is a dense, information-packed space of interrelated concepts, the
number of which can be up to 2^(|Σcirc| + |cons_circ|). Within this huge set, we name
a few meaningful concepts as propositions.
Name    Concept                                                     Intuitive Meaning
up1     ({(sw1₁, {1})}′, {(sw1₁, {1})}″)                            Switch 1 is up.
dn1     ({(sw1₁, {0})}′, {(sw1₁, {0})}″)                            Switch 1 is down.
up2     ({(sw2₂, {1})}′, {(sw2₂, {1})}″)                            Switch 2 is up.
dn2     ({(sw2₂, {0})}′, {(sw2₂, {0})}″)                            Switch 2 is down.
slidUp  ({(slid₃, {0.9 ... 1})}′, {(slid₃, {0.9 ... 1})}″)          The slider is up.
slidDn  ({(slid₃, {0 ... 0.1})}′, {(slid₃, {0 ... 0.1})}″)          The slider is down.
dark    ({(bulb₄, {0 ... 0.33})}′, {(bulb₄, {0 ... 0.33})}″)        The light is dark.
medium  ({(bulb₄, {0.33 ... 0.66})}′, {(bulb₄, {0.33 ... 0.66})}″)  The light is medium.
bright  ({(bulb₄, {0.66 ... 1})}′, {(bulb₄, {0.66 ... 1})}″)        The light is bright.
off     ({(bulb₄, {0})}′, {(bulb₄, {0})}″)                          The light is off.
on      ¬off                                                        The light is on.
Note that we could have chosen to express medium as ¬(bright ⊔* dark), to
mean anything that is neither dark nor bright. The point is that propositions, however
complex, are always expressible as single concepts in S(Ccirc).
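The remark about medium can be checked computationally. In a discretized version of Σcirc (the 0.1-step slider sampling and all names below are our assumptions, not the paper's code), the alternative join acts on extents as union and negation as complement; on this grid no bulb value lands exactly on the boundaries 0.33 or 0.66, so the two extents coincide:

```python
# Discretized circuit extents (0.1-step slider sampling is our assumption).
def bulb(s1, s2, sl):
    return round(abs(s1 - s2) * (3 * sl + 1) / 4, 9)

STATES = frozenset((s1, s2, sl, bulb(s1, s2, sl))
                   for s1 in (0, 1) for s2 in (0, 1)
                   for sl in (k / 10 for k in range(11)))

def ext(i, lo, hi):
    """Extent of the proposition generated by (d_i, {lo ... hi})."""
    return frozenset(s for s in STATES if lo <= s[i] <= hi)

dark   = ext(3, 0.0, 0.33)
medium = ext(3, 0.33, 0.66)
bright = ext(3, 0.66, 1.0)

# On extents: the alternative join is union, negation is complement,
# so the extent of not(bright join* dark) is the set difference below.
not_bright_or_dark = STATES - (bright | dark)
```

Comparing `medium` with `not_bright_or_dark` confirms that the two formulations pick out the same set of states here.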
Sequent Calculus in L(C) We already mentioned the natural subsumption
relation between semiconcepts in S(C). For our purposes, this subsumption hi-
erarchy gives us a semantic implication relation:
∀x, y ∈ S(C) : x ⊢_L(C) y ⟺ x ⊑_S(C) y
Hence, if x ⊑_S(C) y then σ ▷ x implies σ ▷ y. With this notion of entailment in
place, we can define a sequent calculus for L(C).
A sequent (Γ, Δ) is defined in the usual way as a pair of sets of propositions,
and we say that (Γ, Δ) is valid in L(C) iff there is no conceivable state σ such
that both ∀γ ∈ Γ : σ ▷ γ and ∄δ ∈ Δ : σ ▷ δ. In other words, the set Γ is
interpreted conjunctively and Δ disjunctively, as usual. Moreover, conjunction
and disjunction are easily expressed in S(C) using meet and join operations. If
by doing so we are able to reduce Γ and Δ to single concepts in S(C), we can
check the entailment relation ⊢_L(C) between the unique resulting concepts. So,
for any pair (Γ, Δ) of sets of concepts in S(C):⁶
Γ ⊢_L(C) Δ ⟺ ⊓Γ ⊑_S(C) ⊔Δ
⁵ It is shown in [2] that (S(C), ⊓, ⊔*, ¬, ⊤) and its dual (S(C), ⊓*, ⊔, ⌐, ⊥) form a
double Boolean algebra.
⁶ See the appendix for a proof.
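The validity condition, and its reduction to a subsumption check between extents, is easy to test by brute force over a finite space. A Python sketch under our usual discretization assumption (all names are ours):

```python
# Sequent validity over a discretized circuit space: Gamma |- Delta iff every
# state satisfying all premises satisfies some conclusion; by the theorem this
# is the extent inclusion  intersection(Ext Gamma) <= union(Ext Delta).
def bulb(s1, s2, sl):
    return round(abs(s1 - s2) * (3 * sl + 1) / 4, 9)

STATES = [(s1, s2, sl, bulb(s1, s2, sl))
          for s1 in (0, 1) for s2 in (0, 1)
          for sl in (k / 10 for k in range(11))]

def ext(i, lo, hi):
    return {s for s in STATES if lo <= s[i] <= hi}

def valid(gammas, deltas):
    """Check Gamma |- Delta by extent inclusion (meet vs. join)."""
    prem = set(STATES)
    for g in gammas:          # premise set, read conjunctively
        prem &= g
    concl = set()
    for d in deltas:          # conclusion set, read disjunctively
        concl |= d
    return prem <= concl

up1, up2 = ext(0, 1, 1), ext(1, 1, 1)
slid_up  = ext(2, 0.9, 1.0)
bright   = ext(3, 0.66, 1.0)
```

For instance, `valid([up1, slid_up], [bright, up2])` holds on this grid (either the second switch is also up, or the bulb is at 92.5% brightness or more), while `valid([up1], [bright])` does not.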
5 Context-based Reasoning
Context-based reasoning has become a bit of a household term in any discipline
that deals with the non-trivial representation of meaning and the complex inter-
actions it must support. The formalization of context is a point where research
from such fields as artificial intelligence, cognitive science, logic and the phi-
losophy of language converges. Nonetheless, an elusive concept such as context
means different things to different people, and there seems to be no one defi-
nition of what constitutes contextual reasoning that researchers from all these
disciplines can agree upon, let alone a unified theory.
In what follows we silently subscribe to the relevance-theoretic view on con-
text and its role in supporting inferential communication (Sperber & Wilson,
1986 [5]), but will nevertheless use the term quite generally to refer to any
form of reasoning in which some notion of context plays a dynamic role in the
inference process. In any case, our work attempts to formalize the processes un-
derlying such behaviour and how it follows straightforwardly from the cognitive
architecture we described, rather than contributing to the literature on these
subjects in terms of analysis. Specifically, we single out two related phenomena:
context-dependent inference and accommodation.
5.1 Context As Background
Let's say that a set of background assumptions represents a body of (partial)
knowledge about a given context that was established prior to the actual infer-
ence. In general we want to know whether some sequent holds for every con-
ceivable state σ in the space. Often, though, we already have partial information
about σ in the form of a set of propositions P, so determining the validity of
(Γ, Δ) against P now comes down to checking (Γ, Δ) in the relevant subspace of
S(C), where the meet of P becomes the new top concept ⊤. This new subspace
we obtain through context transformations.
We call a context transformation any operation that turns a formal context C
into a new context C′ such that there exists an infomorphism between C and C′.
To clarify, infomorphisms are constructs from channel theory (see [1]) denoting
a mapping between two formal contexts that preserves certain properties of
their information structure. In short, for C = (O, A, ⊨_C) and C′ = (O′, A′, ⊨_C′),
an infomorphism I from C to C′ is a pair of contravariant functions (f∨ : O′ →
O, f∧ : A → A′) such that ∀o ∈ O′, a ∈ A : f∨(o) ⊨_C a ⟺ o ⊨_C′ f∧(a).
Arguably, there is a fair number of such transformations for which this prop-
erty could hold. However, keeping the definition sufficiently general at this point
will allow for a variety of more specialized context transformations to do the
heavy lifting when dealing with other forms of context-shifting behaviour later
on. For the purpose of this paper, however, we focus on one particularly simple
transformation that restricts the state space to the subspace of relevant states
that satisfy some set of background conditions. Formally, given a formal context
C and a proposition p ∈ S(C), a context transformation T on p and C is:
T(C, p) = (Ext(p), cons_C, ⊨_C)
Let C′ be such a T(C, p) for some p. Then (f∧, f∨) taken as a pair of identity
functions gives a straightforward infomorphism I from C′ to C. Think of this T
operation as restricting the focus of attention to a relevant subspace of possible
states that all satisfy the background assumption p. Furthermore, since arbitrar-
ily complex propositional formulas in L(C) are reducible to a unique concept in
S(C) through a series of meet operations, any set of background assumptions P
can be mapped to a single concept in S(C) as well.
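The transformation itself just shrinks the object set to Ext(p) while keeping the constraint types; the identity maps then satisfy the infomorphism condition trivially. A minimal sketch under our discretization assumption (constraint names and encoding are ours):

```python
# T(C, p): restrict the states to Ext(p); cons_C is left unchanged.
def bulb(s1, s2, sl):
    return round(abs(s1 - s2) * (3 * sl + 1) / 4, 9)

STATES = frozenset((s1, s2, sl, bulb(s1, s2, sl))
                   for s1 in (0, 1) for s2 in (0, 1)
                   for sl in (k / 10 for k in range(11)))

CONS = {"sw2 up": (1, 1, 1), "on": (3, 0.1, 1.0)}   # (index, lo, hi)

def ext(states, name):
    """Extent of a named constraint type within a given state set."""
    i, lo, hi = CONS[name]
    return frozenset(s for s in states if lo <= s[i] <= hi)

def transform(states, p_extent):
    """T(C, p): keep only the states in Ext(p)."""
    return states & p_extent

restricted = transform(STATES, ext(STATES, "sw2 up"))
```

After the transformation every remaining state has switch 2 up, and the restricted space is a proper subspace of the original.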
Let's look at some example sequents. We will freely simplify expressions like
"{a, b} ⊢_L(Ccirc) {c, d}" to "a, b ⊢ c, d" whenever we mean the inference to take
place in L(Ccirc), and explicitly mention the logic otherwise.
1. up1, slidUp ⊢ bright, up2 ?
This sequent asks whether it is true that in case both switch 1 and the slider
are up, either the light bulb is bright or switch 2 is up as well (which would
turn off the light). As our understanding of the system suggests, this sequent is
judged valid, as (up1 ⊓ slidUp) ⊑_S(Ccirc) (bright ⊔* up2).
2. dn1, dark ⊢_L(T(Ccirc, up2)) on ?
In the normal concept space S(Ccirc), switch 1 being down and the light
being dim does not entail that the bulb is on. That changes, though, when
confronted with some new knowledge about the state of the light circuit.
Suppose it is brought to our agent's attention that switch 2 is up. In that
case it makes no sense to check any further sequent in the whole of S(Ccirc),
because the present context has shifted to a subspace of relevant states in
which switch 2 is up. In this new space the top concept ⊤ becomes up2. As it
turns out, this sequent is valid in L(T(Ccirc, up2)), meaning that against the
background condition that switch 2 is up, switch 1 being down does entail
that the bulb is on, though the bulb may be dimmed by the slider.
3. slidUp ⊢_L(T(Ccirc, dark)) (dn1 ⊓ dn2), (up1 ⊓ up2) ?
This sequent represents a rather complex inference. It says: "Given the back-
ground knowledge that the light is dark, does the slider being all the way
up imply that both switches are either down or up?" This sequent again is
valid in L(T(Ccirc, dark)), as can be confirmed by doing the step-by-step
calculation.
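Sequent 2 can be checked mechanically on the discretized space: restrict the states to the background's extent, then test extent inclusion as before. The encoding below is our own sketch, not the paper's implementation:

```python
def bulb(s1, s2, sl):
    return round(abs(s1 - s2) * (3 * sl + 1) / 4, 9)

STATES = {(s1, s2, sl, bulb(s1, s2, sl))
          for s1 in (0, 1) for s2 in (0, 1)
          for sl in (k / 10 for k in range(11))}

def ext(states, i, lo, hi):
    return {s for s in states if lo <= s[i] <= hi}

def valid(states, gammas, deltas):
    """Gamma |- Delta within the given (possibly restricted) state set."""
    prem = set(states)
    for g in gammas:
        prem &= ext(states, *g)
    concl = set()
    for d in deltas:
        concl |= ext(states, *d)
    return prem <= concl

DN1, UP2 = (0, 0, 0), (1, 1, 1)
DARK, ON = (3, 0.0, 0.33), (3, 0.1, 1.0)

# Unrestricted: dn1, dark |- on fails (both switches may be down, bulb off).
base = valid(STATES, [DN1, DARK], [ON])
# After T(Ccirc, up2): restrict to the states where switch 2 is up.
shifted = valid(ext(STATES, *UP2), [DN1, DARK], [ON])
```

On the grid, `base` is false but `shifted` is true: with switch 2 up and switch 1 down the bulb is at 25% brightness or more, hence on.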
5.2 Accommodation
Accommodation builds upon this notion of reasoning against background con-
ditions. We need to qualify what we mean by accommodation, though. In the
literature on natural language semantics, accommodation was introduced by
Lewis (1979) as a repair strategy used to salvage some necessary truth condi-
tions associated with a certain class of lexical items (so-called presupposition
triggers). As our theory is not designed to handle semantic presupposition, we
will have little to say about those particular forms of accommodation.
Instead, we follow Thomason (1990) and the neo-Griceans by treating ac-
commodation more broadly as a catering strategy for pragmatic presuppositions:
those presuppositions a cognitive agent is invited to supply so that some new piece
of information (typically an utterance) can be interpreted as making a mean-
ingful contribution to the exchange. Hence, we think of accommodation quite
generally as the process of supplying a minimal set of background conditions
in S(C) that make a sequent (Γ, Δ) true while remaining consistent with the
existing assumptions. Since S(C) is a space of semantically interconnected
propositions, this becomes a feasible operation.
This kind of inferential behaviour is arguably most prominent in communica-
tion. Imagine two agents A and B exchanging information about some situation
under discussion, say, the light circuit in A's house. Whereas full knowledge about
the current state of the system would be represented by a single unique state
σ, partial knowledge p about the circuit state takes the form of a set of possible
states, represented by some subspace T(Ccirc, p).⁷
Imagine at some point in the exchange that A is inquiring about the value
dimᵢ(σ) of the current state σ for a given dimension dᵢ. Furthermore, assume
that Γ represents the (possibly empty) set of background knowledge that was
established throughout the conversation, the common ground so to speak. In
this setting, let B's answer to A be some utterance u, taken to be just another
proposition. At this point, several scenarios are thinkable. If u provides satisfac-
tory information about dᵢ then all is well. The more interesting case is when u
does not seem to answer A's question in any obvious way. In accordance with
Gricean pragmatics (notably Sperber and Wilson's relevance theory [5]), how-
ever, A is justified in assuming that B was trying to make a relevant contribution
regardless, and that B communicated u in the belief that A's inferentially pro-
cessing it will provide the required information and potentially other meaningful
information along the way. In other words, A will need to accommodate B's ut-
terance, which means determining the minimal background conditions t such
that u ⊢_L(T(Ccirc, t)) v, for some v that provides conclusive information
regarding dᵢ.
The phrase "minimal background conditions" needs explaining. Is it possible
to compare two propositions p and q in terms of informativity? Recall that the
concept space S(C) forms a lattice ordered by the ⊑ relation, so it seems so.
When p ⊑ q (or, Ext(p) ⊆ Ext(q) and Int(p) ⊇ Int(q)), we say that p is a
more specific concept than q, and (S(C), ⊑) forms an informational scale. The
most general concept ⊤ has the entire state space Σ for its extent and ∅ as
intent; it contains no information. Put differently, for any formal context C,
L(C) = L(T(C, ⊤)).
Hence, if there is no relevant v such that u ⊢_L(T(C, ⊤)) v, accommodating
u means supplying the background assumption t such that we find a relevant
concept v so that u ⊢_L(T(C, t)) v, and there is no t′ with u ⊢_L(T(C, t′)) v and
t ⊏ t′. This gives us a very straightforward algorithm to accommodate u: look
for a relevant proposition v entailed by u in the concept space S(C). If found, then
return v; else calculate the next minimal subspace T(C, t), where t is a concept
one level down in the concept space ordered by ⊑_S(C), and repeat.
⁷ This is where having transformations on formal contexts pays off. Realistically, an
agent is rarely if ever interested in full knowledge about some complex system;
moreover, unnecessarily representing it in full may be costly and inefficient.
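The accommodation loop can be sketched as a most-general-first search: walk candidate background concepts t in decreasing order of extent size (more general first) and return the first one under which the utterance yields the needed conclusion. In the sketch below the candidate pool is a small hand-picked set and the discretization is our assumption; a real implementation would enumerate the concept space itself:

```python
def bulb(s1, s2, sl):
    return round(abs(s1 - s2) * (3 * sl + 1) / 4, 9)

STATES = frozenset((s1, s2, sl, bulb(s1, s2, sl))
                   for s1 in (0, 1) for s2 in (0, 1)
                   for sl in (k / 10 for k in range(11)))

def ext(i, lo, hi):
    return frozenset(s for s in STATES if lo <= s[i] <= hi)

up1, dn1 = ext(0, 1, 1), ext(0, 0, 0)
up2, dn2 = ext(1, 1, 1), ext(1, 0, 0)
slid_dn  = ext(2, 0.0, 0.1)
dark, on = ext(3, 0.0, 0.33), ext(3, 0.1, 1.0)

def entails(background, premise, conclusion):
    """premise |- conclusion inside T(Ccirc, background), on extents."""
    return (background & premise) <= conclusion

def accommodate(background, premise, conclusion, candidates):
    """Most-general-first search for a minimal extra assumption t."""
    for t in sorted(candidates, key=len, reverse=True):
        if entails(background & t, premise, conclusion):
            return t
    return None

# Hand-picked candidate pool (ours), including the switch configurations.
xor_switches = (up1 & dn2) | (dn1 & up2)
candidates = [STATES, xor_switches, up1 & dn2, dn1 & up2]

found = accommodate(dark, slid_dn, on, candidates)
```

Against the background dark, the utterance slidDn does not entail on by itself, and the search returns the disjunction of the two crossed switch configurations as the most general candidate that does the job, mirroring the dialogue example that follows.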
Example A small example of said behaviour might be illustrative at this point.
Admittedly, a dialogue example about light circuits is a bit artificial, but if
nothing else it may generate some basic intuitions about the phenomenon we
are trying to capture. Imagine the following conversation taking place between
A and B.
1. A : A bit dim here, no? Could you check whether the light is on?
...
2. B : It seems the slider was down.
In the framework of our model, A's utterance seems to be doing two things:
by asserting that the place is dim it fixes a common background {dark}, and fur-
thermore it asks for information regarding dimension bulb₄. Let's say B's answer
translates to the proposition slidDn. However, neither "slidDn ⊢_L(T(Ccirc, dark)) on"
nor "slidDn ⊢_L(T(Ccirc, dark)) off" represents a valid inference. In other words,
B's reply does not provide A's requested information. Since B is a competent com-
municator, though, B's answer holds the promise of a relevant interpretation;
only, the hearer must supply some additional background in order to retrieve
it. Searching the concept space S(T(Ccirc, dark)) from the most general con-
cept ⊤ downwards, two concepts verify the sequent when added to the back-
ground: "dn1 ⊓ up2" and "up1 ⊓ dn2", not surprisingly the two switch config-
urations that turn the bulb on. Since either one of these will do, their disjunc-
tion represents the smallest commitment on A's part that accommodates the
utterance. As our inference engine confirms, "slidDn ⊢_L(T(Ccirc, p)) on", with
p = {dark, ((up1 ⊓ dn2) ⊔* (dn1 ⊓ up2))}, is a valid inference.
5.3 Discussion
This line of thinking raises a number of interesting questions as well. For one,
if B really intended A to arrive at this conclusion, why didn't she just give a
straight answer by pointing out that the light is on { after all, that would have
saved A some apparently unnecessary inferential processing. Clearly, B must have
thought the extra inferential e ort to be o set by some additional information
import as a side e ect. In this case, presumably B's utterance gives an additional
explanation of what made the room dark, besides honoring A's question.
This seems to make a lot of sense, as it also explains why for instance B's
utterance is unlikely to get accommodated with the highly similar proposition
q = f((up1 u up2) t (dn1 u dn2))g. This time, \slidDn `L(T(Ccirc ; q)) off " would
make for an interpretation to the e ect that the light is off . Realistically, q is
neither more nor less informative than the above-mentioned p. What is di erent
in this case is that it is unclear what the role of B's utterance (slidDn) would
be as premise of this sequent. Indeed, \; `L(T(Ccirc ; q)) off " would be equally
valid, and as such the added value of an utterance like \slidDn" is anything but
clear. Many researchers in the eld of pragmatics have proposed sensible answers
to these and similar questions, and future work will have to show whether the
present model, suitably adapted, can stand up to these challenging issues.
6 Conclusion
We have presented work in progress on a theory of contextual inference based
on a two-tiered model of knowledge representation. As a framework for contex-
tual reasoning, it makes some important first steps towards implementing the
dynamic program mentioned by Barwise and Seligman in [1]. With abstract,
logical reasoning radically grounded in and relative to an agent's sub-symbolic
understanding of her surroundings, we are able to generate the sort of subtle
inferential behaviour that tends to be hard to axiomatize. For the sake of
discussion, we further gave some examples of what such context-dependent
reasoning looks like in our model.
Obviously a great many interesting questions still lie ahead. Nevertheless,
the inference patterns we talked about clearly lie at the heart of contextual
reasoning, and it is our contention that a theory that `gets them right' can
have many promising applications wherever the fields of logic, language and
information meet.
References
1. Barwise, Jon & Jerry Seligman. Information Flow: The Logic of Distributed
Systems. Cambridge Tracts in Theoretical Computer Science. (1997)
2. Wille, Rudolf. Boolean Concept Logic. In: Ganter & Mineau (eds.), Conceptual
Structures: Logical, Linguistic and Computational Issues, Lecture Notes in Artificial
Intelligence, Springer. (2000)
3. Ganter, Bernhard & Rudolf Wille. Formal Concept Analysis: Mathematical
Foundations. Springer Verlag. (1999)
4. Martinez, Maricarmen. Commonsense Reasoning Via Product State Spaces.
Doctoral dissertation. (2004)
5. Sperber, Dan & Deirdre Wilson. Relevance: Communication and Cognition.
Blackwell Publishing. (1986, 1995)
6. Van Gelder, Timothy. Dynamics and Cognition. In: Haugeland, John (ed.), Mind
Design II. MIT Press. (1997)
7. Kadmon, Nirit. Formal Pragmatics. Blackwell Publishing. (2000)
A Appendix: Proof
This appendix contains a proof of our sequent calculus theorem:
Γ ⊢_L(C) Δ ⟺ ⊓Γ ⊑_S(C) ⊔Δ
We need to prove both conditionals. Throughout the proof we omit the sub-
scripts in ⊢_L(C) and ⊑_S(C) for readability.
1. Γ ⊢ Δ ⟹ ⊓Γ ⊑ ⊔Δ.
Assume Γ ⊢ Δ, that is, assume that ∀σ ∈ Σ: if σ ▷ γ for every γ ∈ Γ, then
σ ▷ δ for some δ ∈ Δ. We must show that the meet of Γ is a subconcept of
the join of Δ. Let σ be an arbitrary state in Σ such that ∀γ ∈ Γ: σ ▷ γ.
By the definition of ▷, this means that ∀γ ∈ Γ: σ ∈ Ext(γ). This
implies that σ ∈ ⋂_{γ∈Γ} Ext(γ), so σ ▷ ⊓Γ. Furthermore, we know that
∃δ ∈ Δ: σ ▷ δ. Then:
∃δ ∈ Δ: σ ▷ δ ⟺ ∃δ ∈ Δ: σ ∈ Ext(δ)
              ⟺ σ ∈ ⋃_{δ∈Δ} Ext(δ)
              ⟺ σ ∉ Σ \ ⋃_{δ∈Δ} Ext(δ)
              ⟺ σ ∉ ⋂_{δ∈Δ} (Σ \ Ext(δ))   (De Morgan)
              ⟺ σ ∈ Σ \ ⋂_{δ∈Δ} (Σ \ Ext(δ))
              ⟺ σ ▷ ⊔Δ
Discharging our initial assumption that Γ ⊢ Δ, we conclude that ⊓Γ ⊑ ⊔Δ
whenever Γ ⊢ Δ, which is the first half of the biconditional. The second half
proceeds in a parallel fashion.
2. ⊓Γ ⊑ ⊔Δ ⟹ Γ ⊢ Δ.
Similarly, we first assume that ⊓Γ ⊑ ⊔Δ, i.e. that the meet of Γ is a sub-
concept of the join of Δ. Based on this, we show it follows that every
state σ ∈ Σ that belongs to all γ ∈ Γ must necessarily belong to some
δ ∈ Δ. Let σ be an arbitrary state such that σ ▷ ⊓Γ. This again means that
∀γ ∈ Γ: σ ∈ Ext(γ). Also, by the assumption and the definition of the ⊑
relation, σ ▷ ⊔Δ. Then:
σ ▷ ⊔Δ ⟺ σ ∈ Σ \ ⋂_{δ∈Δ} (Σ \ Ext(δ))
       ⟺ σ ∉ ⋂_{δ∈Δ} (Σ \ Ext(δ))
       ⟺ σ ∉ Σ \ ⋃_{δ∈Δ} Ext(δ)   (De Morgan)
       ⟺ σ ∈ ⋃_{δ∈Δ} Ext(δ)
       ⟺ ∃δ ∈ Δ: σ ∈ Ext(δ)
       ⟺ ∃δ ∈ Δ: σ ▷ δ
We can again drop the original assumption and conclude that Γ ⊢ Δ when-
ever ⊓Γ ⊑ ⊔Δ. Bringing both halves together, we can therefore conclude that
Γ ⊢_L(C) Δ ⟺ ⊓Γ ⊑_S(C) ⊔Δ.