=Paper=
{{Paper
|id=Vol-2373/paper-24
|storemode=property
|title=A Toothful of Concepts: Towards a Theory of Weighted Concept Combination
|pdfUrl=https://ceur-ws.org/Vol-2373/paper-24.pdf
|volume=Vol-2373
|authors=Daniele Porello,Oliver Kutz,Guendalina Righetti,Nicolas Troquard,Pietro Galliani,Claudio Masolo
|dblpUrl=https://dblp.org/rec/conf/dlog/PorelloKRTGM19
}}
==A Toothful of Concepts: Towards a Theory of Weighted Concept Combination==
Daniele Porello¹, Oliver Kutz², Guendalina Righetti², Nicolas Troquard², Pietro Galliani², and Claudio Masolo¹
¹ Laboratory for Applied Ontology, ISTC-CNR, Trento, Italy
² KRDB Research Centre for Knowledge and Data, Free University of Bozen-Bolzano, Italy
Abstract. We introduce a family of operators to combine Description Logic concepts. They aim to characterise complex concepts that apply to instances that satisfy “enough” of the concept descriptions given. For instance, an individual might not have any tusks, but still be considered an elephant. To formalise the meaning of “enough”, the operators take a list of weighted concepts as arguments, and a certain threshold to be met. We commence a study of the formal properties of these operators, and study some variations. The intended applications concern the representation of cognitive aspects of classification tasks: the interdependencies among the attributes that define a concept, the prototype of a concept, and the typicality of the instances.
1 Introduction
We begin the project of extending description logics to model cognitively relevant features of classification. We start from familiar Description Logic formalisms, in particular ALC, an important logical language for modelling concepts and concept combinations in knowledge representation. We introduce a family
of operators which apply to sets of concept descriptions and return a composed
concept whose instances are those that satisfy “enough” of the listed concept
descriptions. To provide a meaning of “enough”, the operator takes a list of
weighted concepts as argument, as well as a threshold. The combined concept
applies to every instance whose sum of the weights of the concepts it satisfies
meets the threshold. Using a threshold, the presentation focuses on crisp categorisations. Although the framework of weighted concepts easily adapts to a
many-valued setting, we do not admit degrees of classification here.
Depending on the base description logic used, the operators introduced might
or might not extend the extensional expressivity of the concept language in the
sense of increasing the expressive power to define new, previously undefinable
concepts. However, they always allow for a more cognitively grounded modelling
of the intensional aspects of classifications, which are concerned with how the
parts of a concept definition contribute to the classification task overall. The
operators also allow for more compact representations.
The approach to weighted logics that we follow here takes inspiration from the use of sets of weighted propositions for representing utility functions in [1].
Extensions of that approach to description logics have been developed in [2]. Two
related articles are [3] and [4], where cognitive features of categorisation have
been modelled by means of sets of weighted predicative formulas. The main
difference in the present approach is that we study weighted combinations of
concepts by explicitly introducing syntactic operators on concepts extending the
basic concept languages, and investigating their logical properties.
The intended applications of this framework are inspired by the idea of providing a cognitively meaningful representation of classification tasks. Cognitive models of concepts and classification are usually grouped into the prototype view, the exemplar view, and the knowledge view, also called theory-theory (see [5, 6]); Gärdenfors’s theory of conceptual spaces [7] and Barsalou’s theory of frames [8] also fall into this category. However, these approaches hardly rely on logical representations of concepts. In this paper, we want to explore the possibility of extending logic-based representations of concepts to capture aspects of cognitive modelling.
In particular, we will see how the proposed operators allow for representing the prototype view of classification under a concept, cf. [9]. Moreover, a number of cognitively relevant phenomena can be represented in this setting: for instance, the marginal contribution of the attributes entering the definition of a complex concept, the contextual dependence of a classification task on the available information, and the typicality of an instance, cf. [10]. Our work here is largely independent of the specifics of the concept language used; we will use standard definitions and terminology from description logics [11], primarily working with the language ALC.
2 Weighted concept combination
We introduce a class of m-ary operators, denoted by the symbol ∇∇ (spoken ‘tooth’), for combining concepts. Each operator works as follows: i) it takes a list of concept descriptions, ii) it associates a vector of weights to them, and iii) it returns a complex concept that applies to those instances that satisfy a certain combination of concepts, i.e., those instances for which, by summing up the weights of the satisfied concepts, a certain threshold is met.
The new logic is denoted by ALC^∇∇_R, where weights and thresholds range over real numbers r ∈ R. In the following we will refer to the language, for brevity, just as ALC^∇∇. To define the extended language of ALC^∇∇, we add combination operators that behave syntactically just like m-ary modalities. We assume a vector of m weights w ∈ R^m and a threshold value t ∈ R. Each pair w, t specifies an operator: if C1, . . . , Cm are concepts of ALC, then ∇∇^t_w(C1, . . . , Cm) is a concept of ALC^∇∇. Note that in this basic definition, the possible nesting of the operator is excluded.³
³ In a more fine-grained definition, ALC^{i∇∇}_K, i ≥ 0, is the logic with i levels of allowed nesting and where weights and thresholds range over K; we will comment on this further below.
For C′_i ∈ ALC, the set of ALC^∇∇ concepts is then described by the grammar:

C ::= A | ¬C | C ⊓ C | C ⊔ C | ∀R.C | ∃R.C | ∇∇^t_w(C′_1, . . . , C′_m)
The semantics of the operator is obtained by extending the definition of the
semantics of ALC as follows. Let I = (∆I , ·I ) be an interpretation of ALC.
We define the value of an element d ∈ ∆^I under a ∇∇-concept C = ∇∇^t_w(C1, . . . , Cm) by setting:

v^I_C(d) = Σ_{i ∈ {1,...,m}} {w_i | d ∈ C_i^I}     (1)

The interpretation (i.e., the extension) of a ∇∇-concept in I = (∆^I, ·^I) is then:

(∇∇^t_w(C1, . . . , Cm))^I = {d ∈ ∆^I | v^I_C(d) ≥ t}     (2)
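The threshold semantics of Eqs. (1) and (2) can be sketched in a few lines of Python, modelling each concept C_i extensionally as a predicate on domain elements (the encoding and names below are our own, for illustration only):

```python
def tooth(weighted_concepts, t):
    """Membership test for a tooth concept: an element d belongs to it iff
    the weights of the concepts it satisfies sum to at least the threshold t
    (Eqs. (1) and (2)). Concepts are modelled as predicates d -> bool."""
    def member(d):
        return sum(w for concept, w in weighted_concepts if concept(d)) >= t
    return member

# With weights (1, 1) and threshold 2, the tooth behaves as a conjunction.
red = lambda d: "red" in d
round_ = lambda d: "round" in d
conj = tooth([(red, 1), (round_, 1)], t=2)
assert conj({"red", "round"}) and not conj({"red"})
```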
To better visualise the weights an operator associates to the concepts, we some-
times use the notation ∇∇t ((C1 , w1 ), . . . , (Cm , wm )) instead of ∇∇tw (C1 , . . . , Cm ).
In the following examples, we will consider the value of an object name a (aka
individual constant) wrt. a ∇∇-concept for interpretations that satisfy a certain
knowledge base K (i.e. a set of formulas).
Definition 1 (Weights relative to a knowledge base). Let a be an object name of ALC and K an ALC knowledge base. We set

v^K_C(a) := Σ_{i ∈ {1,...,m}} {w_i | K |= C_i(a)}

I.e., v^K_C(a) gives the accumulated weight of those C_i that K entails a to satisfy.
Note that for positive weights, a given name a, and a fixed interpretation I such that I |= K, we always have that v^K_C(a) ≤ v^I_C(a^I).
Example 1. Consider the set of concepts C = {Red, Round, Coloured} and the concept C defined by means of the ∇∇ operator

C = ∇∇^t((Red ⊔ Round, w1), (∃above.Coloured, w2))

The definition of C means that the relevant information to establish the categorisation under C of an object is whether (i) it is red or round, and (ii) it is above a coloured thing.
Consider the following knowledge base K = {Red(a), ∃above.Blue(a), Blue ⊑ Coloured}, i.e., an agent knows that the object a is red, that it is above a blue thing, and that blue things are coloured.
The value of a returned by v^K_C is computed as follows. Firstly, since a satisfies Red, a satisfies Red ⊔ Round, so the weight w1 is obtained. Moreover, since Blue ⊑ Coloured ∈ K and a satisfies ∃above.Blue, a satisfies ∃above.Coloured, so the weight w2 is also obtained. Thus, v^K_C(a) is w1 + w2. If w1 + w2 ≥ t, then a is classified under C.
2.1 General properties of the ∇∇ concept constructor
We discuss a few general properties of the ∇∇ operators which allow for reasoning
about combinations of concepts.
Firstly, we note that, for every possible choice of weights and thresholds,
the operator is well-defined: the ∇∇s of equivalent concepts return equivalent
concepts, i.e. equivalence is a congruence for the tooth. For every I,
C_i^I = D_i^I =⇒ (∇∇^t_w(C1, . . . , Ci, . . . , Cm))^I = (∇∇^t_w(C1, . . . , Di, . . . , Cm))^I     (3)
I I
Proof. Assume an interpretation I such that CP i = Di . Suppose that d ∈
t I
∇w (C1 , . . . , Ci , . . . , Cm )) , thus, by definition,
(∇ {wi | d ∈ CiI } ≥ t. Since
Di is equivalent to Ci , d is also in (∇∇w (C1 , . . . , Di , . . . , Cm ))I .
t
tu
Consider now the following statement, which resembles the monotonicity condition of (normal) modal operators. The statement holds true whenever the weights are non-negative numbers, i.e. for w_i ∈ R^+_0 we have:

C_i^I ⊆ D_i^I =⇒ (∇∇^t_w(C1, . . . , Ci, . . . , Cm))^I ⊆ (∇∇^t_w(C1, . . . , Di, . . . , Cm))^I     (4)
Proof. We assume all weights are non-negative and establish (4). Assume that C_i^I ⊆ D_i^I. Suppose d ∈ (∇∇^t_w(C1, . . . , Ci, . . . , Cm))^I; then, by definition, Σ{w_j | d ∈ C_j^I} ≥ t. We have two relevant cases: (i) d ∈ C_i^I; thus, by assumption, d ∈ D_i^I. Since the weight associated to D_i is the same as the weight associated to C_i, the sum does not change. (ii) Suppose d ∉ C_i^I and d ∈ D_i^I. In this case, the sum now also adds the weight w_i associated to D_i. Since the weights are non-negative, the sum can only increase, and is thus still at least t. ⊓⊔
Building on this result, only under certain conditions does the tooth operator fit in between the conjunction and the disjunction of the concepts. Namely,

t > 0 =⇒ (∇∇^t_w(C1, . . . , Cm))^I ⊆ (C1 ⊔ · · · ⊔ Cm)^I     (5)

Indeed, for any d ∉ (C1 ⊔ . . . ⊔ Cm)^I the value v^I_C(d) would be zero, and hence d ∉ (∇∇^t_w(C1, . . . , Cm))^I.
On the other hand,

t ≤ Σ_i w_i =⇒ (C1 ⊓ · · · ⊓ Cm)^I ⊆ (∇∇^t_w(C1, . . . , Cm))^I     (6)

Indeed, for any d ∈ (C1 ⊓ . . . ⊓ Cm)^I we have that v^I_C(d) = Σ_i w_i ≥ t and hence that d ∈ (∇∇^t_w(C1, . . . , Cm))^I.
Moreover, if the set of weights is restricted to non-negative numbers, w_i ∈ R^+_0, then:

(∇∇^t_w(C1, . . . , Cm))^I ⊆ (∇∇^t_{w,w_{m+1}}(C1, . . . , Cm, C_{m+1}))^I     (7)

That is, by adding positively weighted attributes to the definition of a concept, we cannot invalidate the categorisation of an instance under the concept.
Fig. 1. Two tooth concepts depicted as weighted graphs: ∇∇^2_{(1,1)}(C1, C2) (on the left) and ∇∇^1_{(1,1,−1)}(C1, C2, ⊤) (on the right).
The properties of weights affect the classification in the following way. Uniform permutations of weights and concept arguments correspond to the same concept. For every permutation σ:

(∇∇^t_w(C1, . . . , Cm))^I = (∇∇^t_{σ(w)}(σ(C1, . . . , Cm)))^I     (8)
When C1^I = C2^I (in particular when C1 = C2):

(∇∇^t_{(w1,...,wm)}(C1, . . . , Cm))^I = (∇∇^t_{(w1+w2,w3,...,wm)}(C1, C3, . . . , Cm))^I     (9)
Moreover, every positive scaling of weights and threshold returns the same set of entities. For every k > 0, we have that:

(∇∇^t_w(C1, . . . , Cm))^I = (∇∇^{k·t}_{k·w}(C1, . . . , Cm))^I     (10)
For every k, we have that:

(∇∇^t_{(w1,...,wm)}(C1, . . . , Cm))^I = (∇∇^{t+k}_{(w1,...,wm,k)}(C1, . . . , Cm, ⊤))^I     (11)
One example of Eq. 11, which is depicted in Figure 1, is the following:

(C1 ⊓ C2)^I = (∇∇^2_{(1,1)}(C1, C2))^I = (∇∇^1_{(1,1,−1)}(C1, C2, ⊤))^I
3 Expressivity and Definability
We discuss in this section the definability of concepts from the purely extensional point of view. Every (complex) concept C of ALC can be trivially represented by means of ∇∇^t_{(t)}(C). However, it is interesting to discuss whether we can provide a representation of C in terms of weights to be associated with the atomic concepts that define C. We first focus solely on the Boolean structure of complex concepts, followed by a discussion of counting and maximisation.
3.1 Boolean operations and the ∇∇
For instance, the Boolean operators can be expressed as special cases of ∇∇ applied to atomic concepts:
– (C1 ⊓ C2)^I = (∇∇^2_{(1,1)}(C1, C2))^I
– (C1 ⊔ C2)^I = (∇∇^1_{(1,1)}(C1, C2))^I
– (¬C1)^I = (∇∇^0_{(−1)}(C1))^I
More generally, some Boolean functions can be captured with ∇∇, without having recourse to complex concepts as arguments. In the case of Boolean functions over two variables (i.e. atomic concepts) C1 and C2, we obtain the following:

⊤ = ∇∇^0_{(0,0)}(C1, C2)              ⊥ = ∇∇^1_{(0,0)}(C1, C2)
C1 = ∇∇^1_{(1,0)}(C1, C2)             C2 = ∇∇^1_{(0,1)}(C1, C2)
¬C1 = ∇∇^0_{(−1,0)}(C1, C2)           ¬C2 = ∇∇^0_{(0,−1)}(C1, C2)
C1 ⊓ C2 = ∇∇^2_{(1,1)}(C1, C2)        ¬C1 ⊓ C2 = ∇∇^1_{(−1,1)}(C1, C2)
C1 ⊓ ¬C2 = ∇∇^1_{(1,−1)}(C1, C2)      ¬C1 ⊓ ¬C2 = ∇∇^0_{(−1,−1)}(C1, C2)
C1 ⊔ C2 = ∇∇^1_{(1,1)}(C1, C2)        ¬C1 ⊔ C2 = ∇∇^0_{(−1,1)}(C1, C2)
C1 ⊔ ¬C2 = ∇∇^0_{(1,−1)}(C1, C2)      ¬C1 ⊔ ¬C2 = ∇∇^{−1}_{(−1,−1)}(C1, C2)
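The table can be checked mechanically by enumerating the four valuations of (C1, C2); a small Python sketch, with our own encoding of atomic concepts as Booleans:

```python
from itertools import product

def tooth(val, weights, t):
    """∇∇^t_w over a tuple of Booleans marking which atomic concepts hold."""
    return sum(w for v, w in zip(val, weights) if v) >= t

# Spot-check some rows of the table over all four valuations of (C1, C2).
for c1, c2 in product([False, True], repeat=2):
    assert tooth((c1, c2), (1, 1), 2) == (c1 and c2)            # C1 ⊓ C2
    assert tooth((c1, c2), (1, 1), 1) == (c1 or c2)             # C1 ⊔ C2
    assert tooth((c1, c2), (-1, 1), 1) == (not c1 and c2)       # ¬C1 ⊓ C2
    assert tooth((c1, c2), (-1, -1), -1) == (not c1 or not c2)  # ¬C1 ⊔ ¬C2
```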
The operator ∇∇ is thus, by itself, a functionally complete logical connective if we allow for nesting the operator. However, it is impossible to represent (C1 ⊓ ¬C2) ⊔ (¬C1 ⊓ C2) (the symmetric difference, XOR) and (C1 ⊓ C2) ⊔ (¬C1 ⊓ ¬C2) (both or none, the negation of XOR) without recursion (that is, nesting of the ∇∇), without complex concepts as arguments, or without the Boolean combination of more than one ∇∇.
Indeed, suppose that the symmetric difference C1 XOR C2 is definable as an expression of the form ∇∇^t_w(C1, C2, ⊤, ⊥) for w = (w1, w2, w_⊤, w_⊥). Then it can be easily verified that w1 > 0, since for d ∉ C1^I ∪ C2^I we must have that v^I_C(d) = w_⊤ < t, while for d ∈ C1^I \ C2^I we have that v^I_C(d) = w_⊤ + w1 ≥ t. A similar argument shows that w2 > 0 as well; but then, for d ∈ C1^I ∩ C2^I we have that v^I_C(d) = w_⊤ + w1 + w2 > t and hence I, d |= ∇∇^t_w(C1, C2, ⊤, ⊥), contradicting the definition of XOR.
Of course, one can use the following Boolean combinations, using the previous characterisations:
– (C1 ⊓ ¬C2) ⊔ (¬C1 ⊓ C2) = ∇∇^1_{(1,−1)}(C1, C2) ⊔ ∇∇^0_{(−1,1)}(C1, C2)
– (C1 ⊓ C2) ⊔ (¬C1 ⊓ ¬C2) = ∇∇^2_{(1,1)}(C1, C2) ⊔ ∇∇^{−1}_{(−1,−1)}(C1, C2)
With complex concept arguments, we have:
– (C1 ⊓ ¬C2) ⊔ (¬C1 ⊓ C2) = ∇∇^1_{(1,1,−2)}(C1, C2, C1 ⊓ C2)
– (C1 ⊓ C2) ⊔ (¬C1 ⊓ ¬C2) = ∇∇^0_{(−1,−1,2)}(C1, C2, C1 ⊓ C2)
With nesting, we can write:
– (C1 ⊓ ¬C2) ⊔ (¬C1 ⊓ C2) = ∇∇^1_{(1,1,−2)}(C1, C2, ∇∇^2_{(1,1)}(C1, C2))
– (C1 ⊓ C2) ⊔ (¬C1 ⊓ ¬C2) = ∇∇^0_{(−1,−1,2)}(C1, C2, ∇∇^2_{(1,1)}(C1, C2))
On the other hand, given any expression ∇∇^t_w(C1, . . . , Cm), consider the set Γ = {U ⊆ {1, . . . , m} : Σ_{i∈U} w_i ≥ t} of all sets of indices that, if they were exactly the ones corresponding to concepts satisfied by an individual d in an interpretation I, would make it so that d ∈ (∇∇^t_w(C1, . . . , Cm))^I. Then we have at once that ∇∇^t_w(C1, . . . , Cm) is logically equivalent to

⊔_{U∈Γ} ( ⊓_{i∈U} C_i ⊓ ⊓_{i∈{1,...,m}\U} ¬C_i )

Notice that this translation may lead to an exponentially longer formula.
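A small Python sketch (all helper names are our own) makes the set Γ of the DNF translation concrete:

```python
from itertools import combinations

def winning_sets(weights, t):
    """The set Γ: all index sets U with sum_{i in U} w_i >= t, i.e. the
    disjuncts of the DNF translation of ∇∇^t_w(C1, ..., Cm)."""
    idx = range(len(weights))
    return [set(u) for r in range(len(weights) + 1)
            for u in combinations(idx, r)
            if sum(weights[i] for i in u) >= t]

# Three concepts of weight 1 with threshold 2 ("at least two of three"):
gamma = winning_sets([1, 1, 1], t=2)
assert len(gamma) == 4  # {0,1}, {0,2}, {1,2}, {0,1,2}
```

For m concepts there are 2^m candidate sets U, which is the source of the exponential blow-up mentioned above.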
3.2 Counting and Majority
The combination operators can be instantiated to capture, in a compact way, a large class of concept compositions. Suppose that D is a complex concept whose definition applies the concepts C1, . . . , Cm.
We can define a concept D such that d is in D^I iff the majority of the concepts among C1, . . . , Cm applies to d. In this case, we are assuming that all the m concepts have equal weight, and d is classified under the majority iff it satisfies more than m/2 of the concepts. Equivalently, we associate weight 2 to each concept C_i and we set t = m: ∇∇^m((C1, 2), . . . , (Cm, 2)). In a similar way, we can introduce operators that use a quota rule, i.e. they apply to instances that satisfy at least a number q ∈ {1, . . . , m} of the concepts in the scope of ∇∇.
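The majority instantiation can be sketched as follows (our own encoding, marking the satisfied concepts as Booleans):

```python
def majority(satisfied):
    """∇∇^m((C1,2), ..., (Cm,2)): weight 2 per concept, threshold t = m.
    `satisfied` marks which of the m concepts an element satisfies."""
    m = len(satisfied)
    return sum(2 for s in satisfied if s) >= m

assert majority([True, True, False])       # 2 of 3: weight 4 >= 3
assert not majority([True, False, False])  # 1 of 3: weight 2 < 3
```

A quota rule with quota q is obtained the same way, with weight 1 per concept and threshold q.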
Moreover, a preferential structure on the combined concepts can be rendered by means of (Borda) scores: ∇∇^t((C1, s1), . . . , (Cm, sm)), where the weights are subject to the constraint s1 ≥ · · · ≥ sm. This means that, e.g., satisfying C1 is more important than satisfying C2, and so on. By choosing the weights and the threshold in a suitable way, we can then express situations where satisfying the first l concepts is significant, or dominating, for the classification. I.e., the combined weight of the first l concepts will suffice to classify under the concept, and if the weights are set high enough, this condition can be turned into a necessary one.
Finally, it is possible to define the set of instances that at most reach a given threshold:

∇∇^{≤t}((C1, w1), . . . , (Cm, wm)) ≡ ∇∇^{−t}((C1, −w1), . . . , (Cm, −wm))     (12)

and the concept of instances that exactly score a certain threshold value t:

∇∇^{=t}((C1, w1), . . . , (Cm, wm)) ≡ ∇∇^t((C1, w1), . . . , (Cm, wm)) ⊓ ∇∇^{≤t}((C1, w1), . . . , (Cm, wm))     (13)
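Eqs. (12) and (13) can be checked with a short sketch (function names and the Boolean encoding are ours, for illustration):

```python
def score(satisfied, weights):
    """Accumulated weight of the satisfied concepts."""
    return sum(w for s, w in zip(satisfied, weights) if s)

def at_least(satisfied, weights, t):   # ∇∇^t
    return score(satisfied, weights) >= t

def at_most(satisfied, weights, t):    # ∇∇^{<=t}, via Eq. (12)
    return at_least(satisfied, [-w for w in weights], -t)

def exactly(satisfied, weights, t):    # ∇∇^{=t}, via Eq. (13)
    return at_least(satisfied, weights, t) and at_most(satisfied, weights, t)

sat, w = [True, False, True], [2, 5, 3]
assert score(sat, w) == 5
assert exactly(sat, w, 5) and not exactly(sat, w, 4)
```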
3.3 Maximisation and the Universal Modality
Finally, it is interesting to consider the set of entities that maximally satisfy a combination of concepts C1, . . . , Cm of ALC. That is, we may define an operator with the following semantics:

(∇∇^max((C1, w1), . . . , (Cm, wm)))^I = {d ∈ ∆^I | v^I_C(d) ≥ v^I_C(d′) for all d′ ∈ ∆^I}     (14)

Defining ∇∇^max in terms of ∇∇^t would require the use of a universal role, which significantly increases the expressive power of ALC [12].
First of all, let us see how the ∇∇^max operator may be defined in terms of the universal modality. Given a tuple w = (w1, . . . , wm) of weights, let S = {Σ_i a_i·w_i : a_i ∈ {0, 1}} be the (finite) set of all possible scores that can be obtained by using these weights; and then, let ε = min{|v − v′| : v, v′ ∈ S, v ≠ v′} be the smallest possible distance between different scores. Then we have that

∇∇^max_w(C1, . . . , Cm) ≡ ⊔_{t∈S} ( ∇∇^t_w(C1, . . . , Cm) ⊓ ∀U.¬∇∇^{t+ε}_w(C1, . . . , Cm) )     (15)
Proof. Indeed, suppose that I, d |= ∇∇^max_w(C1, . . . , Cm) for some individual d of our domain.⁴ Now let t = v^I_C(d): then, by definition, t ≥ v^I_C(d′) for all individuals d′ ∈ ∆. This implies that I, d |= ∇∇^t_w(C1, . . . , Cm) and that I, d′ ⊭ ∇∇^{t+ε}_w(C1, . . . , Cm) for all d′ ∈ ∆, which implies at once that I, d |= ∇∇^t_w(C1, . . . , Cm) ⊓ ∀U.¬∇∇^{t+ε}_w(C1, . . . , Cm), as required.
Conversely, suppose that I, d |= ∇∇^t_w(C1, . . . , Cm) ⊓ ∀U.¬∇∇^{t+ε}_w(C1, . . . , Cm) for some t ∈ S. Then v^I_C(d) = t, and for all individuals d′ ∈ ∆ we have that I, d′ ⊭ ∇∇^{t+ε}_w(C1, . . . , Cm), which by the definition of ε implies at once that v^I_C(d′) ≤ t. Thus I, d |= ∇∇^max_w(C1, . . . , Cm), and this concludes the proof. ⊓⊔
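Over a finite domain, the content of Eq. (14) can be sketched directly (domain, predicates, and names below are our own toy example):

```python
def tooth_max(domain, concepts, weights):
    """∇∇^max: the elements of the domain whose accumulated weight of
    satisfied concepts is maximal over the whole domain (Eq. (14))."""
    def score(d):
        return sum(w for c, w in zip(concepts, weights) if c(d))
    best = max(score(d) for d in domain)
    return {d for d in domain if score(d) == best}

is_even = lambda d: d % 2 == 0
is_big = lambda d: d >= 3
# Scores over {1, 2, 3, 4} are 0, 1, 1, 2; only 4 attains the maximum.
assert tooth_max({1, 2, 3, 4}, [is_even, is_big], [1, 1]) == {4}
```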
Conversely, given the ∇∇^max operator, it is possible to define the universal modality as

∀U.C ≡ ∇∇^max_{(−1)}(C) ⊓ C     (16)
Proof. Indeed, suppose that I, d |= ∀U.C. This means that I, d′ |= C for all individuals d′ ∈ ∆; and, therefore, v^I_{(C,−1)}(d′) = −1 for all d′ ∈ ∆. In particular, we thus have that I, d |= C; and moreover, v^I_{(C,−1)}(d) = −1 = v^I_{(C,−1)}(d′) for all d′ ∈ ∆, and so I, d |= ∇∇^max_{(−1)}(C). So I, d |= (∇∇^max_{(−1)}(C)) ⊓ C, as required.
Conversely, if I, d |= (∇∇^max_{(−1)}(C)) ⊓ C then first of all we have that I, d |= C and that therefore v^I_{(C,−1)}(d) = −1. Now take any other individual d′ ∈ ∆. We claim that I, d′ |= C as well; indeed, if instead I, d′ ⊭ C, we would have that v^I_{(C,−1)}(d′) = 0 > v^I_{(C,−1)}(d), which is impossible since by assumption I, d |= ∇∇^max_{(−1)}(C). Thus C^I = ∆, as required. ⊓⊔
We leave a detailed study of the ∇∇^max operator and similar extensions for future work.
⁴ This is a mild abuse of notation for d ∈ (∇∇^max_w(C1, . . . , Cm))^I. In what follows, we will freely write expressions like I, d |= C with the intended meaning of d ∈ C^I.
4 Modelling with the Tooth Operator
We see now how the ∇∇ operators allow for representing fine-grained dependencies among the attributes that define a concept. We may call this feature intensional expressivity. We study situations where an agent whose knowledge is represented by K performs the task of classifying an object a under the concept C = ∇∇^t_w(C1, . . . , Cm). That is, we focus on how K |= ∇∇^t_w(C1, . . . , Cm)(a), and we discuss how the pieces of knowledge in K may contribute to satisfying the attributes occurring in C.
We say that the function v^K_C is additive on the information in K = {φ1, . . . , φl} iff for every individual a, v^K_C(a) = Σ{v^{{φi}}_C(a) : φi ∈ K}. In this case, the satisfaction of each φi contributes independently of the other formulas in K. We say that the function v^K_C is super-additive on K iff for every a, v^K_C(a) ≥ Σ{v^{{φi}}_C(a) : φi ∈ K}, and sub-additive iff v^K_C(a) ≤ Σ{v^{{φi}}_C(a) : φi ∈ K}. Super-additivity represents positive synergies between the pieces of information provided by K, whereas sub-additivity expresses negative synergies. We illustrate this by means of the following examples.
Example 2. Suppose that the concept elephant E is defined by four attributes:

E = ∇∇^t((Large, w1), (Heavy, w2), (hasTrunk, w3), (Grey, w4))

This definition entails that each of the attributes in E contributes independently of the others to the classification of an object as an elephant. Consider a knowledge base K = {Large(a), Heavy(a), hasTrunk(a), Grey(a)}. Then, v^K_E(a) = v^{{Large(a)}}_E(a) + v^{{Heavy(a)}}_E(a) + v^{{hasTrunk(a)}}_E(a) + v^{{Grey(a)}}_E(a); that is, v^K_E is additive wrt. the formulas in K.
By exploiting complex concept descriptions in the scope of ∇∇, we can enable
positive and negative synergies among the attributes.
Example 3. Suppose now we redefine the concept elephant as follows, where we suppose that w5 > w1 + w3, w6 > w2 + w3, and w7 > w3 + w4, and each wi is positive.

E′ = ∇∇^t((Large, w1), (Heavy, w2), (hasTrunk, w3), (Grey, w4), (Large ⊓ hasTrunk, w5), (Heavy ⊓ hasTrunk, w6), (Grey ⊓ hasTrunk, w7))

In this case, the relevance of the pieces of information in K = {Large(a), Heavy(a), hasTrunk(a), Grey(a)} outweighs the sum of the values of each attribute taken in isolation. That is, v^K_{E′}(a) ≥ v^{{Large(a)}}_{E′}(a) + v^{{Heavy(a)}}_{E′}(a) + v^{{hasTrunk(a)}}_{E′}(a) + v^{{Grey(a)}}_{E′}(a).
The meaning of this representation is that adding “having a trunk” significantly increases the importance of the combination of attributes for classifying an elephant. Accordingly, the function v^K_{E′} is in this case super-additive on K.
In the case of sub-additive functions, the combination of two or more attributes
may lower the salience for the classification. The combination of the use of con-
junctions of concepts and negative weights allows for modelling how certain
attributes may decrease the salience of the combination of other attributes.
Example 4. Suppose that we want to classify an individual according to the disease from which she may suffer. For instance, the concept of flu may be represented as follows:

FLU = ∇∇^t((Fever ⊓ Nausea, w1), (Fever ⊓ Spots, −w2), (Nausea ⊓ Spots, −w3), (Fever, w4), (Nausea, w5))

In this case, the combination of fever and nausea is highly significant for the diagnosis, whereas adding the symptom ‘spots’ significantly decreases the reliability of the classification under FLU, because it is a strong indication of chickenpox. So, w1 > w4 + w5, and w2 and w3 are both greater than w1 + w4 + w5. That is, the function v^K_{FLU} is sub-additive on K = {Fever(a), Nausea(a), Spots(a)}.
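Sub-additivity can be made concrete with assumed numeric weights satisfying the constraints above, say w1 = 10, w4 = w5 = 2, and w2 = w3 = 15 (all numbers are our own illustration, not from the example). In this propositional sketch, entailment of a conjunction of atomic symptoms reduces to set inclusion of its atoms in the facts:

```python
# Weighted concepts of FLU, with assumed weights respecting w1 > w4 + w5
# and w2, w3 > w1 + w4 + w5.
WEIGHTED = [({"Fever", "Nausea"}, 10), ({"Fever", "Spots"}, -15),
            ({"Nausea", "Spots"}, -15), ({"Fever"}, 2), ({"Nausea"}, 2)]

def v(facts):
    """Accumulated weight of the weighted concepts entailed by `facts`."""
    return sum(w for atoms, w in WEIGHTED if atoms <= facts)

K = {"Fever", "Nausea", "Spots"}
singleton_sum = sum(v({phi}) for phi in K)
assert v(K) == -16 and singleton_sum == 4
assert v(K) <= singleton_sum  # v_FLU^K is sub-additive on K
```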
Complex concepts occurring as arguments of ∇∇ provide a way to express many types of dependencies among the weights of the attributes. A comprehensive study of this type of expressivity is left for future work and requires rephrasing the results in [1] for the case of DLs.
5 Applications: prototypes, typicality, and similarity
We illustrate how the ∇∇ operators can represent the cognitive approach to concepts based on prototypes. In particular, we follow [13], where a “prototype is a prestored representation of the usual properties associated with the concept’s instances” [13, p. 487]. A prototype is represented in terms of a set of attributes (e.g., colour or weight) and a set of values for each attribute (e.g., red and blue for the colour attribute). The relevance of an attribute for classifying an object, e.g., the relevance of colour for classifying apples, is represented by its diagnosticity (a numerical value), while the salience of an attribute-value represents its typicality (also a numerical value), e.g., how frequent it is for apples to have a red colour.
Let us define a prototype πC for a concept C as follows (where Q^j_i is the i-th value of the j-th attribute, s^j_i is the salience of Q^j_i wrt. C, and d_j is the diagnosticity of the j-th attribute wrt. C):

πC = {(Q^1_1, s^1_1·d_1), . . . , (Q^1_r, s^1_r·d_1), . . . , (Q^n_1, s^n_1·d_n), . . . , (Q^n_m, s^n_m·d_n)}.
Note that here we weight the salience of each attribute-value with the diagnosticity of the attribute, but more elaborate strategies exist. Furthermore, the values within a single attribute are assumed to be mutually exclusive, e.g., if something is red, then it is not of any other colour; formally, Q^j_i ⊓ Q^j_k ≡ ⊥ for i ≠ k. Assume now that we know some attribute-values of a given object a. The classification of a under the concept C is usually done by leveraging a (usually metric) function that establishes, on the basis of the matching of features, how similar the object a and the prototype of C are.
In our setting, we can introduce a concept C by using a ∇∇ operator that directly considers the attribute-values, the saliences, and the diagnosticities in the prototype πC (where we assume Q^j_i ⊓ Q^j_k ≡ ⊥ for i ≠ k):

C = ∇∇^t((Q^1_1, s^1_1·d_1), . . . , (Q^1_r, s^1_r·d_1), . . . , (Q^n_1, s^n_1·d_n), . . . , (Q^n_m, s^n_m·d_n))
The classification under C then applies to the objects that have enough features in common with the prototype to exceed the threshold t, i.e., objects that are close enough to the prototype. We can also individuate the prototypical instances of C as the objects (if they exist) that satisfy all the Q^j_i in C.
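Prototype-based classification with a tooth operator can be sketched as follows, with each attribute-value carrying weight s·d (the toy prototype, its saliences, and its diagnosticities are all assumptions of ours, not the paper’s):

```python
# Toy "apple" prototype: value -> (salience, diagnosticity of its attribute).
PROTO_APPLE = {
    "red":   (0.7, 0.9),
    "green": (0.3, 0.9),
    "round": (0.9, 0.6),
    "sweet": (0.8, 0.5),
}

def classify(features, proto, t):
    """Classify under C iff the summed weights s*d of the matching
    attribute-values meet the threshold t."""
    return sum(s * d for q, (s, d) in proto.items() if q in features) >= t

assert classify({"red", "round"}, PROTO_APPLE, t=1.0)        # 0.63 + 0.54
assert not classify({"green", "sweet"}, PROTO_APPLE, t=1.0)  # 0.27 + 0.40
```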
Note that the object-prototype similarity is here simply rendered by summing
up all the weights of the matching Qji . The discussion of more sophisticated
choices to measure the distance to the prototype is left for future work. Moreover,
in our setting, we can enable synergies between attributes in a super-additive
or in a sub-additive fashion. However, to define prototypes in this rich setting, we will use the ∇∇^max operator, or approximate it by carefully selecting the threshold.
This setting allows for defining a typicality operator, as in [10]. The most
typical instances of a concept can be defined as those instances that maximise
the sum of weights in w. Notice that the instances can be ordered in terms
of typicality with respect to a concept C, by means of the values vCI (cf. [10]).
Thus, in principle, we can enable comparisons between instances concerning their
typicality wrt. a concept and introduce various degrees of typicality.
Finally, the ∇∇^{=t} operator may be used to define similarity of instances with respect to a (number of) concept(s), that is, to individuate those instances that are not distinguishable in terms of the complex concept.
6 Conclusion and future work
We introduced a class of operators for defining complex concepts that weigh the
role of the defining attributes. We presented a few general properties of these
operators, and we started investigating their expressivity. Pinpointing the exact
expressivity of various combinations of DLs and ∇∇-operators as well as studying
succinctness effects is part of future work.
We further illustrated how the operators may be applied to render cognitively meaningful mechanisms for classification. Future work will be dedicated to properly investigating the logical properties of the operators and their natural extensions, and to applying them to describe salient cognitive features of concepts such as concept combinations and blending [14]. Another line of research will deepen the comparison with the formal studies on typicality (e.g. [10]), work on threshold concepts [15], relaxed query answering [16], and the relation and combination with similarity frameworks based on a notion of distance (e.g. [17–19]).
References
1. J. Uckelman, Y. Chevaleyre, U. Endriss, and J. Lang. Representing utility functions via weighted goals. Mathematical Logic Quarterly, 55(4):341–361, 2009.
2. A. Ragone, T. D. Noia, F. M. Donini, E. D. Sciascio, and M. P. Wellman. Weighted
description logics preference formulas for multiattribute negotiation. In Scalable
Uncertainty Management, Third International Conference, SUM 2009, Washing-
ton, DC, USA, September 28-30, 2009. Proceedings, pages 193–205, 2009.
3. C. Masolo and D. Porello. Representing concepts by weighted formulas. In Formal
Ontology in Information Systems - Proceedings of the 10th International Confer-
ence, FOIS 2018, Cape Town, South Africa, 19-21 September 2018, pages 55–68,
2018.
4. C. Masolo and D. Porello. Understanding predication in conceptual spaces. In
R. Ferrario and W. Kuhn, editors, Formal Ontology in Information Systems: Pro-
ceedings of the 9th International Conference (FOIS 2016), pages 139–152. IOS
Press, 2016.
5. E. Margolis and S. Laurence, editors. Concepts: Core Readings. MIT Press, 1999.
6. G. L. Murphy. The Big Book of Concepts. MIT press, 2002.
7. P. Gärdenfors. Conceptual spaces - The geometry of thought. MIT Press, 2000.
8. L. W. Barsalou. Frames, concepts, and conceptual fields. In A. Lehrer and
E. F. Kittay, editors, Frames, Fields, and Contrasts - New Essays in Semantic and
Lexical Organization, chapter 1, pages 21–74. Lawrence Erlbaum Associates, Inc,
1992.
9. E. Rosch. Principles of categorization. In E. Margolis and S. Laurence, editors,
Concepts: Core Readings, volume 189, chapter 8, pages 189–206. MIT press, 1999.
10. A. Lieto and G. L. Pozzato. A description logic of typicality for conceptual com-
bination. In Foundations of Intelligent Systems - 24th International Symposium,
ISMIS 2018, Limassol, Cyprus, October 29-31, 2018, Proceedings, pages 189–199,
2018.
11. F. Baader, I. Horrocks, C. Lutz, and U. Sattler. An Introduction to Description
Logic. Cambridge University Press, 2017.
12. E. Hemaspaandra. The price of universality. Notre Dame Journal of Formal Logic,
37:174–203, 1996.
13. E. E. Smith, D. N. Osherson, L. J. Rips, and M. Keane. Combining prototypes: A
selective modification model. Cognitive Science, 12(4):485–527, 1988.
14. M. Eppe, E. Maclean, R. Confalonieri, O. Kutz, M. Schorlemmer, E. Plaza, and
K.-U. Kühnberger. A computational framework for conceptual blending. Artificial
Intelligence, 256:105–129, 2018.
15. F. Baader, G. Brewka, and O. F. Gil. Adding threshold concepts to the description
logic EL. In C. Lutz and S. Ranise, editors, Frontiers of Combining Systems, pages
33–48, Cham, 2015. Springer International Publishing.
16. A. Ecke, R. Peñaloza, and A.-Y. Turhan. Similarity-based relaxed instance queries.
Journal of Applied Logic, 13(4, Part 1):480–508, 2015.
17. R. Confalonieri, O. Kutz, P. Galliani, R. Peñaloza, D. Porello, M. Schorlemmer,
and N. Troquard. Coherence, similarity, and concept generalisation. In Proc. of
DL, volume 1879. CEUR-WS, 2017.
18. M. Sheremet, D. Tishkovsky, F. Wolter, and M. Zakharyaschev. A Logic for Con-
cepts and Similarity. J. of Logic and Computation, 17(3):415–452, 2007.
19. J. Hois and O. Kutz. Counterparts in Language and Space—Similarity and S-Con-
nection. In C. Eschenbach and M. Grüninger, editors, Formal Ontology in Infor-
mation Systems (FOIS 2008), pages 266–279. IOS Press, 2008.