<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Concept Combination in Weighted Logic</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Guendalina Righetti</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudio Masolo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicolas Troquard</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oliver Kutz</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Porello</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ISTC-CNR Laboratory for Applied Ontology</institution>
          ,
          <addr-line>Via alla cascata 56/C, 38123, Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>KRDB Research Centre for Knowledge and Data, Free University of Bozen-Bolzano</institution>
          ,
          <addr-line>piazza Domenicani 3, 39100, Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Genoa</institution>
          ,
          <addr-line>Via Balbi 2-4-6, 16126, Genova</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>We present an algorithm for concept combination inspired and informed by research in cognitive and experimental psychology. From a symbolic AI perspective, dealing with concept combination requires coping with two competing needs: the need for compositionality and the need to account for typicality effects. Building on our previous work on weighted logic, the proposed algorithm can be seen as a step towards the management of both needs. More precisely, following a proposal of Hampton [1], it combines two weighted Description Logic formulas, each defining a concept, using the following general strategy. First, it selects all the features needed for the combination, based on the logical distinction between necessary and impossible features. Second, it determines the threshold and assigns new weights to the features of the combined concept, trying to preserve the relevance and the necessity of the features. We illustrate how the algorithm works by exploiting some paradigmatic examples discussed in the cognitive literature.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
Dealing with concept combination requires, from an AI point of view that blends logical and
cognitive perspectives, coping with two competing needs: the need for compositionality and the
need to account for typicality effects. Compositionality would require (the representation of) a
combined concept to be a function of (the representations of) the combining concepts. It is often
advocated as one of the main explanations for the human ability to create and understand new
meaningful concepts [
        <xref ref-type="bibr" rid="ref2">2</xref>
]. Typicality effects refer to a number of phenomena—mainly observed
in cognitive psychology [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]—related to the categorization task: some instances of a concept are
more representative, and thus easier to categorize, than others.
      </p>
      <p>
Logically, concepts are often reduced to sets of necessary and sufficient conditions, precluding
the possibility of dealing with typicality effects. Conversely, cognitive theories of concepts focus
on typicality effects by sacrificing compositionality: this is the case of, e.g., the Prototype Theory.
According to the Prototype Theory [
        <xref ref-type="bibr" rid="ref1 ref4">4, 1</xref>
        ], concepts are represented by means of prototypes, i.e.,
sets of features associated with weights representing their relevance for the concept.1 Typicality
can be evaluated by summing up the weights of the features (in the prototype) matched by a
given individual: the most typical members, the best exemplars, are the individuals with the
highest score. However, the Prototype Theory seems inadequate to capture compositionality, as
paradigmatically illustrated by the Pet Fish example. A goldfish is a very typical example of Pet
Fish, but it is a quite atypical example of Fish and a quite poor example of Pet. This is known as the
conjunction effect: when an individual is well described by a concept combination, it is usually
more typical of the combined concept than of the two components [
        <xref ref-type="bibr" rid="ref5">5</xref>
]. From a different perspective,
the prototypical instances of Pet are furry, the ones of Fish are grey, but the prototypical
instances of Pet Fish are neither furry nor (likely) grey. So, it is argued, the typicality of the
combined concept is not predictable from that of the component concepts [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Consequently,
prototypes are unable to account for concept combination and for the productivity of human
concepts, i.e., concepts cannot be represented by prototypes [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Some approaches, e.g., [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], tried to overcome this impasse by ‘importing’ prototypes
into a formal setting. Here we pursue this general idea by deploying the logical framework we
proposed in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] which extends Description Logic languages by means of a family of operators
(denoted by the symbol ∇∇, pronounced ‘tooth’) allowing one to define concepts in terms of weighted
features. Similarly to prototypes, each operator takes a list of concept descriptions and assigns
a weight to each of them. Furthermore, once a threshold is determined, the operator returns a
complex concept which applies to those instances that satisfy certain combinations of features,
the ones that reach the chosen threshold by summing up the weights of the satisfied features.
      </p>
      <p>
        Given the ∇∇-definitions of two concepts (and a given knowledge base), we introduce an
algorithm that returns the ∇∇-definition of the combined concept. The general rules governing
the algorithm are grounded in those analyzed in cognitive science studies. In particular,
we refer to the work done by Hampton in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. We then illustrate how the algorithm works by
analysing the Pet Fish example and several paradigmatic examples discussed in the literature.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Hampton’s Model of Attribute Inheritance</title>
      <p>
Different models have been proposed in the context of the Prototype Theory in order to deal with the
kind of conjunction effects described above. A quite famous one is the selective modification model
proposed by [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The authors propose an elaboration of the Prototype Theory, interpreting
concepts in terms of schema structures. Distinguishing between dimensions (e.g., color) and
features (e.g., red) of a concept, the proposed model is able to account for the conjunction effect
in the case of adjective-noun combinations (e.g., red apple). Unfortunately, it has little to say in
the context of noun-noun combinations.
      </p>
      <p>
A different kind of analysis is proposed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Still within the Prototype Theory, Hampton
proposes an attribute inheritance model2, which analyses the case of conjunctive noun-noun
combinations (e.g., a Sport which is a Game). According to this proposal, the features of the
combined concept are initially collected from the ones of the constituent concepts following the
1The Prototype Theory usually distinguishes attributes or dimensions (e.g., color) from values or features (e.g.,
crimson). In the following we do not consider this distinction and we focus on features (values).
      </p>
      <p>2Following Hampton, in this section the terms ‘attribute’ and ‘feature’ are used interchangeably.
standard rule for conjunction. The weight of each feature of the combined concept depends on
the weight(s) this feature has in the prototypes of the combining concepts. In particular, when
a feature appears in the prototypes of both combining concepts, Hampton considers the
average value, a sort of trade-off.</p>
      <p>An original aspect of the proposal concerns the constraints posed on the inheritance of the
features. Hampton introduces two main constraints: (i) the features that are necessary for
either constituent are also necessary for the combination; (ii) the features that are impossible
for either constituent are impossible also for the combination. The notions of necessity and
impossibility are characterised in a logical way: an attribute is necessary when it holds for all
instances of the concept while it is impossible when it is necessarily false for all the instances of
the concept. Hampton analyzes the example of the Pet Fish: a Pet is necessarily owned, and for a
Fish it is impossible to be cuddly. Then Pet Fish must inherit the first attribute but cannot inherit
the second one. He also suggests that the idea of averaging the weights of the features shared
by the combining concepts may not work in the case of impossible and necessary features.
In case of necessity (impossibility) a maximum (minimum) rule is applied, i.e., the weight of
the necessary (impossible) features of the combined concept is inherited from the combining
concept where this feature is more (less) relevant.</p>
      <p>The model proposed by Hampton is able to explain several phenomena concerning
concept combination as observed in the context of experimental psychology. Two of them are of
particular interest in the analysis of Pet Fish, namely, inheritance failure and attribute emergence.</p>
<p>Inheritance failure occurs when a feature which is important for a constituent (i) becomes
irrelevant for the conjunction or (ii) is not inherited at all. Case (i) can be explained as
an effect of the averaging procedure: when the constituents share a feature, its weight can be
high in the prototype of one constituent but very low in the prototype of the other constituent.
Case (ii) can be explained by exploiting the notion of impossibility discussed above: if a
feature is impossible for one of the constituent concepts, it is not inherited at all.</p>
      <p>Attribute emergence is the inverse of inheritance failure, namely the increased weight of
a feature in the prototype of the conjunction w.r.t. its weights in the prototypes of the
constituent concepts, or the emergence of a new feature of the conjunction. Hampton explains this
phenomenon in terms of extensional feedback: there is a feedback from past experiences and
background knowledge into the combined concept. This means that the prototype of the
combined concept can be adapted taking into account the experienced exemplars and the available
knowledge about the environment. In the case of Pet Fish, one can for instance introduce some
necessary features like ‘small’ and ‘lives in aquarium’.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Tooth Operators and Preliminary Hypotheses</title>
      <p>
        In this section, we delineate the formal framework necessary to introduce the ∇∇-definitions
of concepts serving as input for the algorithm for concept combination presented in Sect. 4.
Following the work done in [
        <xref ref-type="bibr" rid="ref11 ref12 ref9">9, 11, 12</xref>
        ], we extend standard DL languages [
        <xref ref-type="bibr" rid="ref13">13</xref>
] with a class of
n-ary operators denoted by the symbol ∇∇ (pronounced ‘tooth’). Each operator works as follows:
(i) it takes a list of concepts, (ii) it associates a weight (i.e., a number) to each of them, and (iii)
it returns a complex concept that applies to those instances that satisfy a certain combination of
the concepts, i.e., those instances for which, by summing up the weights of the satisfied concepts,
a certain threshold is met. More precisely, we assume a vector of n weights w⃗ ∈ Rⁿ and a
threshold value t ∈ R. If C1, . . . , Cn are concepts of ℒ, then ∇∇ᵗw⃗(C1, . . . , Cn) is a concept
of ℒ∇∇. For A a concept name, R a role name, and C′1, . . . , C′n ∈ ℒ, the set of ℒ∇∇ concepts is described by the grammar:
C ::= A | ¬C | C ⊓ C | C ⊔ C | ∀R.C | ∃R.C | ∇∇ᵗw⃗(C′1, . . . , C′n)
      </p>
<p>To better visualise the weights an operator associates to the concepts, we often use the
notation ∇∇ᵗ((C1, w1), . . . , (Cn, wn)) instead of ∇∇ᵗw⃗(C1, . . . , Cn). A knowledge base KB is
a finite set of concept inclusions of the form C ⊑ D, where C and D are concept expressions.
We write C ≡ D to signify that C ⊑ D and D ⊑ C.</p>
<p>Given finite, disjoint sets NC and NR of concept and role names, respectively, an
interpretation ℐ consists of a non-empty set ∆ℐ and a mapping ·ℐ that maps every concept name A ∈ NC
to a subset Aℐ ⊆ ∆ℐ and every role name r ∈ NR to a binary relation rℐ ⊆ ∆ℐ × ∆ℐ. The
semantics of the operator is obtained by extending the definition of the semantics of ℒ as
follows. Let ℐ = (∆ℐ, ·ℐ) be an interpretation of ℒ. The interpretation of a ∇∇-concept
D = ∇∇ᵗ((C1, w1), . . . , (Cn, wn)) is:</p>
<p>Dℐ = {d ∈ ∆ℐ | vDℐ(d) ≥ t} (1)</p>
<p>where vDℐ(d) is the value of d ∈ ∆ℐ under the concept D, defined as:</p>
<p>vDℐ(d) = ∑ { wi | i ∈ {1, . . . , n} and d ∈ Ciℐ } (2)</p>
<p>For instance, consider D = ∇∇1.8((C1, 1.2), (C2, 1), (C3, 0.4), (C4, 0.1)). If an individual d ∈
C1ℐ ∩ C3ℐ but d ∉ C2ℐ, then d ∉ Dℐ because even when d ∈ C4ℐ we have that vDℐ(d) =
1.2 + 0 + 0.4 + 0.1 = 1.7 &lt; 1.8.</p>
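<p>The membership test in (1) and (2) can be sketched in a few lines of Python (our illustration, not part of the formal framework; concept names and values follow the running example):</p>

```python
# Sketch of the tooth-operator semantics: an individual belongs to the
# weighted concept when the weights of the features it satisfies sum up
# to at least the threshold t.

def tooth_value(weighted_features, satisfied):
    # Value of an individual: sum of the weights of the satisfied features.
    return sum(w for c, w in weighted_features if c in satisfied)

def tooth_member(weighted_features, t, satisfied):
    # Membership in the tooth concept: the value meets the threshold t.
    return tooth_value(weighted_features, satisfied) >= t

# The example above: D with threshold 1.8 over C1..C4.
D = [("C1", 1.2), ("C2", 1.0), ("C3", 0.4), ("C4", 0.1)]
# An individual in C1, C3, C4 but not in C2 scores 1.2 + 0.4 + 0.1 = 1.7.
print(round(tooth_value(D, {"C1", "C3", "C4"}), 2))   # 1.7
print(tooth_member(D, 1.8, {"C1", "C3", "C4"}))       # False
```

<p>The individual reaches only 1.7, so it falls outside the concept for the threshold 1.8.</p>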
      <p>
The interpretation ℐ is a model of the knowledge base KB if for every concept inclusion
C ⊑ D in KB, it is the case that Cℐ ⊆ Dℐ. A concept inclusion C ⊑ D is entailed by the
knowledge base KB (noted KB |= C ⊑ D) when Cℐ ⊆ Dℐ holds for every model ℐ of KB. A
concept C is satisfiable in KB when Cℐ ≠ ∅ for some model ℐ of KB. Adding tooth-expressions
to the language of ℒ is thus done without modifying the standard notion of interpretation in
DL. As observed in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], tooth-operators do not increase the expressive power of any language
that contains the standard Boolean operators. It was shown in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] that adding tooth-operators
to such DL languages does not increase the complexity of the corresponding inference problem.
Tooth-operators behave like perceptrons [
        <xref ref-type="bibr" rid="ref14">15, 14</xref>
]: a (non-nested) tooth expression is a linear
classification model, which makes it possible to learn weights and thresholds from real data (in particular,
from sets of assertions about individuals) by exploiting standard linear classification algorithms.
Multilayer perceptrons can then be represented via nested tooth expressions.
      </p>
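<p>Since a (non-nested) tooth expression is a linear classifier, its weights and threshold can be fitted with standard methods. A minimal sketch using the classic perceptron update (our illustration; the learning setup of the cited works may differ):</p>

```python
# One perceptron update for a tooth expression seen as a linear classifier.
# x: 0/1 indicators of the features an individual satisfies; label: 1 if the
# individual is asserted to be in the concept, 0 otherwise.

def perceptron_step(weights, t, x, label, lr=0.1):
    score = sum(w * xi for w, xi in zip(weights, x))
    pred = 1 if score >= t else 0
    err = label - pred
    # Nudge the weights of the satisfied features and, dually, the threshold.
    new_weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    new_t = t - lr * err
    return new_weights, new_t

w, t = perceptron_step([0.0, 0.0, 0.0], 0.5, [1, 1, 0], 1)
print(w, t)  # [0.1, 0.1, 0.0] 0.4
```

<p>Misclassified positives raise the weights of their satisfied features and lower the threshold; nesting such classifiers yields the multilayer-perceptron reading of nested tooth expressions.</p>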
<p>The design of the tooth operator is inspired by the Prototype Theory: the concepts in the
∇∇-definition of C may be seen as the features of C and their weights may be intended to
represent the relevance of such features (for C). This allows us to express typicality effects
in the context of a logical representation: in our setting, the most typical instances, the best
exemplars, of C are the individuals with the highest score, i.e., by exploiting the value vCℐ(d),
individuals can be ordered in terms of typicality.</p>
<p>Given C = ∇∇ᵗ((C1, w1), . . . , (Cn, wn)), a knowledge base KB, and a set S of concepts, we
introduce the following sets:
– ft(C) = {C1, . . . , Cn};
– snc(C) = {Ci ∈ ft(C) | ∑j≠i wj &lt; t};
– nc(KB, C, S) = {A ∈ S | KB ⊨ C ⊑ A};
– im(KB, C, S) = {A ∈ S | KB ⊨ C ⊑ ¬A}.
ft(C) is the set of the features of C while snc(C) is the set of the strongly necessary features
of C: individuals lacking a feature in snc(C) cannot reach the threshold. Note that snc(C)
is defined in a purely syntactic way; logical inference is not deployed here. By relying on nc
and im (that are grounded on logical inference), the sets of necessary and impossible features
of C w.r.t. KB can be defined as nc(KB, C, ft(C)) and im(KB, C, ft(C)), respectively. Note
that snc(C) ⊆ nc(∅, C, ft(C)); indeed, when ⊨ Ci ⊑ Cj (with i ≠ j) we have that snc(C) ⊂
nc(∅, C, ft(C)). In the previous example, snc(D) = {C1, C2} and, assuming that KB contains
only C2 ⊑ C3, nc(KB, D, ft(D)) = {C1, C2, C3}.</p>
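<p>The purely syntactic snc check can be sketched as follows (our illustration, reusing the running example):</p>

```python
# snc(C): a feature Ci is strongly necessary when the remaining weights
# alone cannot reach the threshold t (a purely syntactic check, no inference).

def snc(weighted_features, t):
    total = sum(w for _, w in weighted_features)
    return {c for c, w in weighted_features if not (total - w >= t)}

# The running example with threshold 1.8 over (C1, 1.2), (C2, 1), (C3, 0.4), (C4, 0.1):
print(sorted(snc([("C1", 1.2), ("C2", 1.0), ("C3", 0.4), ("C4", 0.1)], 1.8)))
# ['C1', 'C2']
```

<p>Dropping C1 leaves at most 1.5 and dropping C2 at most 1.7, both below 1.8, while C3 and C4 can be missed without falling under the threshold.</p>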
<p>
  In the following, we always consider a single knowledge base KB. To simplify the notation, we
then write nc(C, S) and im(C, S) rather than nc(KB, C, S) and im(KB, C, S), respectively.
Furthermore, we assume that all the ∇∇-concepts C are satisfiable in KB and that they are not
redundant, i.e., im(C, ft(C)) = ∅ (none of the features included in the tooth contradicts any
of the necessary features of C). Finally, given a set S of concepts, we write ⌈S⌉ to indicate the
conjunction of all the concepts in S.
</p>
    </sec>
    <sec id="sec-3-1">
      <title>4. An Algorithm to Combine ∇∇-Concepts</title>
      <p>
        We present an algorithm for concept combination inspired by the work of Hampton [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] discussed
in section 2. Following [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], we mainly focus on the case of conjunctive concept combination, i.e.,
combinations that closely relate to a conjunction of the constituent concepts. The algorithm
takes as input the ∇∇-definitions of two concepts, one (H) playing the role of head and
one (M) playing the role of modifier, and it outputs the ∇∇-definition of the combined concept,
noted M ∘ H. Without loss of generality [
        <xref ref-type="bibr" rid="ref9">9, 15</xref>
        ], we assume that: (i) H and M have the same
positive threshold and (ii) all the features have positive weights.
      </p>
      <p>
The head and modifier roles are based on a linguistic distinction on noun-noun compounds
[16]. Looking at noun-noun combinations in English, two parts can be distinguished, the Head
and the Modifier, depending on the syntactic construction of the compound. Considering for
instance a “Tool Weapon”, the noun “weapon” would play the role of the Head, whereas “tool”
would be the Modifier. As the names suggest, the Head provides the base category of the
combined concept, whilst the Modifier alters the attributes of the Head. The result is that a
“Tool Weapon” may in principle be quite different from a “Weapon Tool”. However, the role of
the Head concept may here also be compliant with the notion of a dominant concept (discussed
below), as introduced in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], in line with the work in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
<p>The algorithm consists of three phases: phase 1 collects the features of M ∘ H by assuming
that the head dominates the modifier; phase 2 selects the weights of the features of M ∘ H; and
phase 3 determines a range of possible thresholds for (the ∇∇-definition of) M ∘ H. As we will
see, in phase 1 the logical nature of ∇∇-operators allows one to use the inference power of logic
to determine incompatibilities between the features of H and M. Vice versa, phases 2 and 3
use only the information made available by the (intensional) ∇∇-definitions of the concepts.</p>
<p>Phase 1: features. The set of the features of M ∘ H is built in two steps:</p>
<p>(Step 1) f¯t(M ∘ H) = nc(H, ft(H)) ∪ (nc(M, ft(M)) ∖ im(H, ft(M)))</p>
<p>(Step 2) ft(M ∘ H) = (ft(H) ∖ im(H ⊓ ⌈f¯t(M ∘ H)⌉, ft(H))) ∪ (ft(M) ∖ im(M ⊓ ⌈f¯t(M ∘ H)⌉, ft(M)))</p>
<p>Step 1 collects all the necessary features of H together with all the necessary features of M
which are not impossible for H. This shows in which sense H dominates M: H is the base of
M ∘ H; in case of incompatibilities we discard necessary features of M, not of H. It follows that
ft(M ∘ H) and ft(H ∘ M) can differ.</p>
<p>Step 2 builds on the previous step, examining all the non-necessary features of both H and
M. Specifically, it aims at excluding all the features of H (resp. M) which are impossible for
H (resp. M) itself, once all the necessary features of M (resp. H) in f¯t(M ∘ H) are added.</p>
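<p>Phase 1 can be sketched as follows, with nc and im supplied as oracles: in the framework they are grounded on DL inference over KB, while in this illustration they are plain callables, and the toy Pet Fish instance uses our own simplified names and axioms:</p>

```python
# Phase 1: collect the features of M ∘ H (the head H dominates the modifier M).
# ft(C): features of C; nc(C, S): members of S necessary for C;
# im(conjuncts, S): members of S impossible for the conjunction of `conjuncts`.

def phase1_features(H, M, ft, nc, im):
    # Step 1: necessary features of H, plus necessary features of M that are
    # not impossible for H.
    bar_ft = nc(H, ft(H)) | (nc(M, ft(M)) - im([H], ft(M)))
    # Step 2: drop the features of H (resp. M) that become impossible once
    # H (resp. M) is conjoined with all the collected necessary features.
    keep_h = ft(H) - im([H] + sorted(bar_ft), ft(H))
    keep_m = ft(M) - im([M] + sorted(bar_ft), ft(M))
    return keep_h | keep_m

# Toy Pet Fish instance (simplified names, toy reasoner instead of a DL one).
FT = {"Fish": {"Water", "NotCuddly", "Grey"}, "Pet": {"Owner", "Furry"}}
NC = {"Fish": {"Water", "NotCuddly"}, "Pet": {"Owner"}}

def ft(c):
    return FT[c]

def nc(c, s):
    return NC[c] & s

def im(conjuncts, s):
    # Facts entailed by the conjunction: the conjuncts plus their necessary features.
    facts = set(conjuncts).union(*(NC.get(c, set()) for c in conjuncts))
    out = set()
    if "NotCuddly" in facts:
        out |= {"Cuddly", "Furry"}   # toy axiom: Furry entails Cuddly
    if "Fish" in facts and "Owner" in facts:
        out |= {"Grey"}              # toy axiom: owned fish are not grey
    return out & s

print(sorted(phase1_features("Fish", "Pet", ft, nc, im)))
# ['NotCuddly', 'Owner', 'Water']
```

<p>The toy run mirrors inheritance failure: Furry and Grey are filtered out, while the necessary features of both concepts survive.</p>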
      <p>
        The selection of features based on the distinction between necessary and impossible features
aims at mimicking the model proposed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and discussed in section 2.
      </p>
<p>Phase 2: weights. Once the set ft(M ∘ H) containing all the features of M ∘ H is built, weights
are assigned to them in the following way:
(1) the weight of each feature in ft(H) (resp. ft(M)) is divided by the maximal sum of the
weights of the consistent (in KB) subsets of ft(H) (resp. ft(M));
(2) for all the features in s¯ft(M ∘ H) = f¯t(M ∘ H) ∩ (snc(M) ∪ snc(H)) we consider the
weight calculated in (1), except for the ones in s¯ft(M ∘ H) ∩ ft(H) ∩ ft(M) for which we
consider the maximal weight (of the weights calculated in (1));
(3) for all the features in ft(M ∘ H) ∖ s¯ft(M ∘ H) we consider the weight calculated in (1),
except for the ones in (ft(M ∘ H) ∖ s¯ft(M ∘ H)) ∩ ft(H) ∩ ft(M) for which we consider
the average weight (of the weights calculated in (1)).</p>
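<p>Phase 2 can be sketched as follows (our naming; the usage values echo the Sport Game example of Sect. 5, up to rounding, with no shared strongly necessary features):</p>

```python
# Phase 2: normalize each weight by the maximal sum of weights of a
# KB-consistent subset of features, then resolve features shared by head and
# modifier: max for strongly necessary ones, average for the others.

def phase2_weights(w_h, w_m, norm_h, norm_m, shared_snc, features):
    n_h = {c: w / norm_h for c, w in w_h.items()}
    n_m = {c: w / norm_m for c, w in w_m.items()}
    out = {}
    for c in features:
        if c in n_h and c in n_m:
            out[c] = max(n_h[c], n_m[c]) if c in shared_snc else (n_h[c] + n_m[c]) / 2
        else:
            out[c] = n_h.get(c, n_m.get(c))
    return out

# Toy weights shaped like the Sport Game example (names are ours).
w_sport = {"PhysEx": 3, "Comp": 0.6, "Enjoy": 0.7, "Health": 0.7, "Strength": 0.8}
w_game = {"Comp": 2, "Enjoy": 1, "Rules": 1, "NoEffort": 1, "Luck": 1}
out = phase2_weights(w_sport, w_game, 5.8, 6.0, set(),
                     set(w_sport) | set(w_game))
print(round(out["PhysEx"], 2), round(out["Comp"], 2), round(out["Enjoy"], 2))
# 0.52 0.22 0.14
```

<p>Shared non-necessary features such as Comp and Enjoy get the average of their normalized weights, features private to one concept keep their normalized weight.</p>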
      <p>
In Sect. 3 we observed that the weight of a feature can be seen as an indicator of the relevance
of that feature for the concept. The idea in (1) is to normalize this indicator with respect to the
value of the possible best exemplars. The numbers of features of H and M may substantially
differ, preventing absolute weights from being accurate relevance indicators. (2) and (3) attribute to the
features of M ∘ H the weights calculated in (1), except when a feature belongs to ft(H) ∩ ft(M)
and has different normalized weights (in H and M). In these cases, following Hampton [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], we
consider the maximal weight for the features in s¯ft(M ∘ H)—as we will see in the discussion
of phase 3, s¯ft(M ∘ H) corresponds to snc(M ∘ H)—and the average weight for the other
features of M ∘ H.
      </p>
<p>Phase 3: threshold. We fix the threshold for M ∘ H to assure that s¯ft(M ∘ H) = snc(M ∘ H),
i.e., the strongly necessary features of H together with the strongly necessary features of M
(that are compatible with the necessary features of H) are also strongly necessary features of
M ∘ H. To do that, the threshold must belong to the open interval (S − w+, S − w−), where S
is the sum of the weights of the features in ft(M ∘ H), w+ is the minimal weight of the features
in s¯ft(M ∘ H), and w− is the maximal weight of the features in ft(M ∘ H) ∖ s¯ft(M ∘ H). By
increasing the threshold we exclude some combinations of non-necessary features. Furthermore,
assume that ⌈s¯ft(M ∘ H)⌉ implies some features in ft(M ∘ H) ∖ s¯ft(M ∘ H). These implied
features are necessary even though the threshold can be reached without counting their weights,
i.e., they are not strongly necessary. It is also possible that some features in f¯t(M ∘ H) ∖
s¯ft(M ∘ H) are not necessary for M ∘ H, i.e., the algorithm preserves the strong necessity but not
the necessity. E.g., consider the case where ft(M) = {C1, C2}, C1 ∈ snc(M), C2 ∉ snc(M),
KB ⊨ C1 ⊑ C2, and C1 (but not C2) is incompatible with the necessary features of H. In this
case C2 ∈ f¯t(M ∘ H) ∖ s¯ft(M ∘ H), but C2 is no longer necessary for M ∘ H because C1, which
grounds the necessity of C2, has been discarded.</p>
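<p>The phase 3 interval can be sketched and checked against the Pet Fish numbers of Sect. 5 (generic feature names, ours):</p>

```python
# Phase 3: admissible thresholds form the open interval (S - w_plus, S - w_minus),
# where S is the total weight, w_plus the minimal weight of a strongly
# necessary feature, and w_minus the maximal weight of the remaining features.

def threshold_interval(weights, snc_features):
    s = sum(weights.values())
    w_plus = min(w for c, w in weights.items() if c in snc_features)
    w_minus = max(w for c, w in weights.items() if c not in snc_features)
    return s - w_plus, s - w_minus

# Nine features as in the Pet Fish combination of Sect. 5: six strongly
# necessary ones at 0.25, plus 0.07, 0.08, 0.08.
weights = {"f1": 0.25, "f2": 0.25, "f3": 0.25, "f4": 0.25, "f5": 0.25,
           "f6": 0.25, "g1": 0.07, "g2": 0.08, "g3": 0.08}
lo, hi = threshold_interval(weights, {"f1", "f2", "f3", "f4", "f5", "f6"})
print(round(lo, 2), round(hi, 2))  # 1.48 1.65
```

<p>Any threshold in this interval keeps exactly the six heavy features strongly necessary.</p>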
    </sec>
    <sec id="sec-4">
      <title>5. Examples</title>
      <p>
        We illustrate how the proposed algorithm works by means of several paradigmatic examples of
noun-noun and adjective-noun combinations. Without presenting a direct empirical validation,
we analyze how the algorithm accounts for the phenomena and rules identified by Hampton in
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] to build the prototypes of conjunctive combinations of concepts (in particular, inheritance
failure, dominance effect, overextension, and emergence of features). Even though the
∇∇-definitions and KBs we consider seem plausible, we cannot commit to their empirical foundation.
Their main intent is to show how the effectiveness of the rules proposed by Hampton critically
depends not only on the weights of the features and on the thresholds in the ∇∇-definitions
but also on the assumed background knowledge. The embedding of prototypes into a logical
framework allows the algorithm to explicitly and formally take into account both these aspects.
      </p>
      <sec id="sec-4-1">
        <title>5.1. Noun-Noun Combinations</title>
<p>5.1.1. Pet Fish
We start by considering the case of Pet Fish, which has been advocated to show the inadequacy of
the Prototype Theory to capture concept combinations. Consider the following ∇∇-definitions
and assume Fish is the head and Pet the modifier:
Fish = ∇∇10((∀livesIn.Water, 3), (¬Cuddly, 3), (∀breathesThrough.Gills, 3),
(Grey, 0.9), (Scaly, 0.9), (∃hasPart.Fins, 1))
Pet = ∇∇10((HasOwner, 3), (Tame, 3), (∀livesIn.House, 3), (Furry, 0.9),
(Cuddly, 0.9), (∃hasPart.Paws, 1))
Furthermore, assume the following KB:</p>
<p>∀livesIn.Water ⊓ ∀livesIn.House ⊑ ∀livesIn.Aquarium (3a)
Furry ⊑ Cuddly (3b)
HasOwner ⊓ ∀breathesThrough.Gills ⊑ ¬Grey (3c)</p>
<p>Phase 1 collects the features of Pet ∘ Fish. More precisely, (Step 1) defines the set of
necessary features of Pet ∘ Fish. It collects the necessary features of the Head concept Fish
(nc(Fish, ft(Fish)) = {∀livesIn.Water, ¬Cuddly, ∀breathesThrough.Gills}) and
the necessary features of the Modifier Pet which are not impossible for Fish. In this case there
are no inconsistencies between the necessary features of the two concepts, therefore
f¯t(Pet ∘ Fish) = {∀livesIn.Water, ¬Cuddly, ∀breathesThrough.Gills, HasOwner,
Tame, ∀livesIn.House}.
(Step 2) examines the non-necessary features, aiming at excluding all the features of Fish
(resp. Pet) which are impossible for the concept Fish (resp. Pet) itself, once one adds all the
necessary features of the concept Pet (resp. Fish) in f¯t(Pet ∘ Fish). In our example,
ft(Pet ∘ Fish) = {∀livesIn.Water, ¬Cuddly, ∀breathesThrough.Gills, HasOwner,
Tame, ∀livesIn.House, Scaly, ∃hasPart.Fins, ∃hasPart.Paws}.
Indeed:
• Furry ∉ ft(Pet ∘ Fish) because of (3b) in the KB and
¬Cuddly ∈ nc(Fish, ft(Fish)), and then ¬Cuddly ∈ f¯t(Pet ∘ Fish);
• Cuddly ∉ ft(Pet ∘ Fish), again because ¬Cuddly ∈ nc(Fish, ft(Fish));
• Grey ∉ ft(Pet ∘ Fish) because of (3c) in the KB and HasOwner, ∀breathesThrough.Gills ∈
f¯t(Pet ∘ Fish).</p>
<p>Phase 2 assigns weights to the features in ft(Pet ∘ Fish). First, note that f¯t(Pet ∘ Fish) =
s¯ft(Pet ∘ Fish). Second, ft(Fish) ∩ ft(Pet) = ∅, therefore there are no cases in (2) and (3) of
phase 2 where we need to maximise or average the weights. For each feature of each component
concept, the weight associated to that feature is divided by the score of the best exemplars of
that concept (11.8 for Fish; Fish has no incompatible features, therefore the best exemplars
have all the features in ft(Fish), and similarly for Pet). We obtain:
Pet ∘ Fish = ∇∇((∀livesIn.Water, 0.25), (¬Cuddly, 0.25),
(∀breathesThrough.Gills, 0.25), (HasOwner, 0.25), (Tame, 0.25),
(∀livesIn.House, 0.25), (Scaly, 0.07), (∃hasPart.Fins, 0.08),
(∃hasPart.Paws, 0.08))</p>
<p>Following phase 3, the threshold t ∈ (1.48, 1.65). The lower bound is the sum of
the weights of the features in ft(Pet ∘ Fish) minus the smallest weight of the features in
s¯ft(Pet ∘ Fish), i.e., 1.73 − 0.25 = 1.48. The upper bound is the sum of the weights of
the features in ft(Pet ∘ Fish) minus the biggest weight of the features in ft(Pet ∘ Fish) ∖
s¯ft(Pet ∘ Fish), i.e., 1.73 − 0.08 = 1.65. When t ∈ (1.48, 1.65), we have snc(Pet ∘ Fish) =
s¯ft(Pet ∘ Fish) and none of the non-necessary features of Fish and Pet becomes strongly
necessary for Pet ∘ Fish.</p>
<p>A note on inheritance failure. One of the main points of the Pet Fish counter-example
against the prototype theory is the inheritance failure of, e.g., the feature Grey: prototype
representations are not compositional, it is argued, since there is no rule able to explain why,
e.g., fishes, but not pet fishes, are usually grey. In our example Grey ∈ ft(Fish) but Grey ∉
ft(Pet ∘ Fish). This is due to the fact that (i) Grey is a non-necessary feature of Fish; (ii)
HasOwner is a necessary feature of Pet compatible with all the necessary features of Fish; and (iii)
KB contains (3c). Even when dropping one of these assumptions, alternative strategies may be
exploited to at least partially model inheritance failure in our framework.</p>
<p>Assume, for instance, that HasOwner is a necessary feature of Pet, but KB does not contain (3c).
In this case we would have that HasOwner, Grey ∈ ft(Pet ∘ Fish). The relevance of Grey w.r.t.
Pet ∘ Fish is lower than that of Grey w.r.t. Fish, but this holds in general when the number
of features in Pet ∘ Fish is higher than the number of features in Fish. We may however find
some mechanisms to enforce Grey to have an increasingly marginal relevance w.r.t. Pet ∘ Fish.
Inspired by the idea of extensional feedback [17], we could decrease (or increase) the weight of
a feature according to the number of instances, within KB, satisfying that feature in the context
of the new concept. For instance, we may observe that, extensionally, when adding the feature
HasOwner to the concept Fish, the number of grey fish (proportionally) decreases, and reduce the
weight of the feature Grey accordingly. This may be thought of as an additional step within our
algorithm that nicely integrates the prototype, knowledge, and exemplar views on concepts.</p>
<p>Conversely, if HasOwner is a non-necessary feature of Pet and KB contains (3c), still we have
that HasOwner, Grey ∈ ft(Pet ∘ Fish). This would not cause any problem in terms of consistency
because both HasOwner and Grey are non-necessary features of Pet ∘ Fish. However, one could
modify the algorithm to discard some non-necessary features inconsistent with other
non-necessary features in order to produce a ∇∇-definition for the combined concept that maximizes
the typicality of the best exemplars.</p>
<p>A note on emergent features. Assuming KB contains (3a), we have that KB ⊨ Pet ∘ Fish ⊑
∀livesIn.Aquarium, but neither KB ⊨ Fish ⊑ ∀livesIn.Aquarium nor KB ⊨ Pet ⊑
∀livesIn.Aquarium hold. ∀livesIn.Aquarium can then be seen as an emergent property of
Pet ∘ Fish which follows only from the conjunction of the necessary features of Fish and Pet.</p>
        <sec id="sec-4-1-1">
          <title>5.1.2. Sport Game</title>
<p>The Sport Game concept is among the combinations considered by Hampton in his experiments
and it is an example of combinations becoming quite close to conjunctions. Assume that KB is
empty and that Sport and Game are defined as follows:3
Sport = ∇∇4((PhysicalExercise, 3), (∃involves.Competition, 0.6), (Enjoyable, 0.7),
(∃improves.Health, 0.7), (∃requires.Strength, 0.8))
Game = ∇∇4((∃involves.Competition, 2), (Enjoyable, 1), (HasRules, 1),</p>
<p>(¬(∃has.Effort), 1), (∃requires.Luck, 1))
In this case, Game has no necessary features4, i.e., nc(Game, ft(Game)) = ∅. Moreover, the
two concepts do not include clashing information, namely im(Sport, ft(Game)) = ∅ and
im(Game, ft(Sport)) = ∅. We would then obtain f¯t(Sport ∘ Game) = {PhysicalExercise}.</p>
          <p>
As a result, at the end of phase 1, all the features of Sport and Game are collected, i.e.,
3A similar example was used in [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ], and imitates the features collected in [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ].
          </p>
<p>4Following the well-known argument proposed in [18].
ft(Sport ∘ Game) = {PhysicalExercise, ∃involves.Competition, Enjoyable,
∃improves.Health, ∃requires.Strength, HasRules,
¬(∃has.Effort), ∃requires.Luck}</p>
<p>The weight assignment proceeds as usual, but note that Enjoyable and ∃involves.Competition
are two non-necessary features of Sport ∘ Game which belong to ft(Sport) ∩ ft(Game).
According to (3) of phase 2, for these features we then need to take the average of the weights
they have in Sport and Game. At the end of phase 2, we obtain
Sport ∘ Game = ∇∇((PhysicalExercise, 0.5), (∃involves.Competition, 0.2),
(Enjoyable, 0.14), (∃improves.Health, 0.12), (∃requires.Strength, 0.13),
(HasRules, 0.16), (¬(∃has.Effort), 0.16),
(∃requires.Luck, 0.16))</p>
<p>Phase 3 establishes that t ∈ (1.07, 1.37).</p>
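<p>This interval can be checked directly from the weights listed above (our arithmetic):</p>

```python
# Sport ∘ Game: S = 1.57; the only strongly necessary feature weighs 0.5,
# and the heaviest non-(strongly-)necessary feature weighs 0.2.
weights = [0.5, 0.2, 0.14, 0.12, 0.13, 0.16, 0.16, 0.16]
s = round(sum(weights), 2)
lo, hi = round(s - 0.5, 2), round(s - 0.2, 2)
print(s, lo, hi)  # 1.57 1.07 1.37
```
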
          <p>A note on dominance effect. In the example, ft(Sport ∘ Game) = ft(Game ∘ Sport).
Actually, the algorithm outputs the same ∇∇-definition for Sport ∘ Game and Game ∘ Sport. If
 ℎ ⊑ ∃ℎ.
(4)
is included in KB, the situation does not change; the only difference is that (Step 2) of phase 1
rules out ¬∃ℎ. from ft(Sport ∘ Game) = ft(Game ∘ Sport).5</p>
          <p>
            In these cases, the roles played by Sport and Game are not relevant, and the syntactic distinction
between Head and Modifier given by the word order has no impact on the construction of the
combined concept. However, in his experiments Hampton observed another phenomenon, called
dominance effect: when one of the component concepts has a greater number of important
features, the resulting combination is more similar to that concept, regardless of the word
order in which the combination is carried out (see [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ], p.57). First, Sport has an essential feature while
nc(Game, ft(Game)) = ∅. Second, the best exemplars of Sport ∘ Game (and Game ∘ Sport)6
would obtain a higher score w.r.t. Sport than w.r.t. Game. These
two remarks suggest (i) that Sport has more important features than Game; and (ii) that our
algorithm is more sensitive to the dominance effect than to the Head/Modifier distinction.
A note on overextension. As briefly discussed above, the choice of the threshold plays a
central role in determining the extension of the combination. The flexibility of the threshold
makes it possible to deal with another phenomenon observed by Hampton [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ], namely overextension.
Hampton observed that, when classifying items under the combined concept, people usually do
not follow a rule corresponding to the intersection: the extension of the combination is very
often over-extended to include items that are very good examples of one of the two
concepts but do not belong to the extension of the other. In our setting, we may account
for overextension by appropriately lowering the threshold below the maximal possible value.
          </p>
          <p>For instance, assume that the individual (constant) boxing is characterized by the axioms
 ℎ(boxing), ∃ . (boxing), (boxing),
∃.ℎ(boxing), ∃ .ℎ(boxing).</p>
          <p>5The situation is different when considering ¬∃ℎ. as a necessary feature of Game.</p>
          <p>6In the example there are no incompatibilities between the features of Sport ∘ Game (Game ∘ Sport), thus the
best exemplars have all the features.</p>
          <p>According to the previous definitions of Sport and Game, boxing is an instance of Sport but
not of Game. However, it is easy to choose a value for the threshold for Sport ∘ Game in the
(1.07, 1.37) range (determined by the algorithm in phase 3) that includes boxing among the
instances of Sport ∘ Game. It is enough to set the threshold in the interval (1.07, 1.09].</p>
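The boxing computation can be checked directly. In this sketch (feature names are hypothetical stand-ins; the weights are the phase-2 values from the text), boxing satisfies five of the eight weighted features, so its score is 0.5 + 0.2 + 0.14 + 0.12 + 0.13 = 1.09.

```python
# Overextension, numerically: boxing's score under the ∇∇-definition of
# Sport ∘ Game. Feature names are hypothetical stand-ins for the originals.
SPORT_GAME = {
    "f1": 0.5, "f2": 0.2, "f3": 0.14, "f4": 0.12,
    "f5": 0.13, "f6": 0.16, "f7": 0.16, "f8": 0.16,
}
BOXING = {"f1", "f2", "f3", "f4", "f5"}  # the five features asserted for boxing

# Score = sum of the weights of the features boxing satisfies.
score = sum(w for f, w in SPORT_GAME.items() if f in BOXING)

def classified_as_sport_game(threshold):
    return score >= threshold
```

Any threshold in (1.07, 1.09] lets boxing in; anything above 1.09 keeps it out, matching the overextension analysis above.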
        </sec>
        <sec id="sec-4-1-2">
          <title>5.1.3. Fish Vehicle</title>
          <p>The Fish Vehicle example is another case analysed by Hampton [19]. While Sport Game is close
to a plain conjunction, Fish Vehicle is an impossible combination due to a number of clashes
between the features of the component concepts. Sport Game and Fish Vehicle can be seen as
two extremes in the spectrum of concept combinations. Assume that:
Fish = ∇∇10((∀. , 3), (, 3), (∀ℎ ℎℎ., 3), (, 0.9),
(, 0.9), (∃ℎ . , 1))
Vehicle = ∇∇10(( , 3), (∃ℎ.  , 3), (, 3), (, 1),
(∃ℎ., 1), (∃ℎ., 0.9))
and consider the following KB:</p>
          <p>⊑ ¬  (5a)
∀ℎ ℎℎ. ⊑ ¬ (5b)
⊑ ∀.⊥ (5c)
  ⊑ ∀ℎ ℎℎ.⊥ (5d)</p>
          <p>In contrast to the previous example, the Head and Modifier roles here have a strong impact on
the combination. When Vehicle is the Head, all the necessary features of Fish are discarded
from f¯t(Fish ∘ Vehicle) because they belong to im(Vehicle, ft(Fish)), i.e., they are all
impossible for Vehicle. The combined concept would then be essentially a Vehicle, with a few
marginal characteristics of the Fish (e.g., being  and ). Conversely, taking the
Fish as the Head would exclude many of the necessary features of Vehicle (i.e.,   and
), leading to the opposite result.</p>
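The asymmetry of phase 1 described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the KB is reduced to a set of incompatible feature pairs, and all names are illustrative stand-ins for the lost originals.

```python
# Phase-1 sketch for an impossible combination (Fish Vehicle): a feature of
# the modifier is discarded when it is incompatible, given the KB, with a
# necessary feature of the head. All names are illustrative stand-ins.
INCOMPATIBLE = {
    frozenset({"animal", "artifact"}),
    frozenset({"breathes_with_gills", "artifact"}),
}

def impossible(feature, nec_head):
    """Does this feature clash with some necessary feature of the head?"""
    return any(frozenset({feature, n}) in INCOMPATIBLE for n in nec_head)

def phase1_features(head, modifier, nec_head):
    """All head features, plus the modifier features not impossible for the head."""
    return set(head) | {f for f in modifier if not impossible(f, nec_head)}

fish = {"animal", "breathes_with_gills", "grey"}
vehicle = {"artifact", "has_engine", "fast"}

# Vehicle as head: Fish's necessary features clash and are dropped, leaving an
# essentially-Vehicle concept with only marginal Fish features.
combined = phase1_features(vehicle, fish, nec_head={"artifact"})
```

Swapping the roles (Fish as head) symmetrically drops the clashing necessary features of Vehicle, which is the opposite result described above.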
          <p>To obtain a more hybrid combination, a different mechanism is needed: see [20] for a
computational treatment of the same example in the context of formal ontologies, through the procedure
of axiom weakening. Similar combinations are also analysed in the context of Computational
Conceptual Blending (CCB) (see, e.g., [21, 22]). CCB, however, originates from a quite different
conceptual framework [23] and exploits different technical strategies (i.e., the identification
of shared structures between the two input spaces through a cross-space mapping). A thorough
comparison of the two approaches is beyond the scope of this paper.</p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>5.2. Adjective-Noun Combinations</title>
        <p>Our proposal for combining concepts builds strongly on the work of Hampton, which mainly focuses
on noun-noun combinations. However, the most common cases of concept combination
discussed in the literature are adjective-noun combinations, e.g., red apple or pink elephant. In
the following, we analyze some of the examples of this kind of combination discussed in [24].</p>
        <p>First observe that the concepts representing adjectives like red, smooth, salty, etc., usually do
not have a ∇∇-definition. Since our algorithm applies only to ∇∇-defined concepts,
we introduce here ∇∇-definitions logically equivalent to these ‘adjective-concepts’: for any
‘adjective-concept’ A we add the concept ∇∇A defined as ∇∇A = ∇∇w((A, w)), i.e., a single
feature A whose weight w coincides with the threshold. In this way we
ensure that ∇∇A ≡ A, i.e., ∇∇A is logically indistinguishable from A.</p>
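The wrapping construction is easy to verify from the threshold semantics of ∇∇: a single feature (A, w) with threshold w holds of exactly the individuals satisfying A. A minimal sketch (names illustrative):

```python
# A single-feature tooth concept in miniature: with one feature (A, w) and
# threshold w, an individual qualifies iff it satisfies A, so the wrapped
# concept is logically indistinguishable from A itself.
def tooth(features, threshold):
    """Return a membership test for the ∇∇-concept with the given features."""
    def holds(satisfied):
        return sum(w for f, w in features.items() if f in satisfied) >= threshold
    return holds

w = 1.0
nabla_red = tooth({"red": w}, threshold=w)
member = nabla_red({"red", "round"})   # a red thing qualifies
non_member = nabla_red({"yellow"})     # a non-red thing does not
```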
        <p>Red Book. Assume that the ∇∇-definition of Book does not contain any color-feature. Red
is a necessary feature of ∇∇Red; therefore, assuming Red is compatible with all the features of
Book, the tooth of ∇∇Red ∘ Book contains the union of the features of Book and ∇∇Red. We also
have that ∇∇Red ∘ Book ≡ Book ∘ ∇∇Red; more specifically, they have the same ∇∇-definition.
Red Apple and Red Brick. Assume Red ⊔ Yellow (or, more generally, a color-feature C
such that Red ⊑ C) is a feature of Apple. Red is a strongly necessary feature of ∇∇Red;
therefore, assuming it is compatible with all the features of Apple, it belongs to s¯ft(∇∇Red ∘ Apple).
It follows that Red ⊔ Yellow is a (non-strongly) necessary feature of ∇∇Red ∘ Apple (because
Red ⊑ Red ⊔ Yellow). By assuming that KB contains Yellow ⊑ ¬Red, the instances of
∇∇Red ∘ Apple cannot be yellow because they would lack the necessary feature Red. Again
Apple ∘ ∇∇Red ≡ ∇∇Red ∘ Apple.</p>
        <p>The case of Red Brick is similar to that of Red Apple, but now we have that BrickRed ⊑ Red,
where BrickRed is the color feature of Brick, i.e., the color of bricks is a specialization of Red. If BrickRed
is a necessary feature of Brick, then all the instances of ∇∇Red ∘ Brick have the color BrickRed and
∇∇Red ∘ Brick ≡ Brick, i.e., ∇∇Red has no impact on Brick. If BrickRed is a non-necessary feature
of Brick, then Red is (strongly) necessary for ∇∇Red ∘ Brick but it is still possible to have red
bricks that do not have the color BrickRed. In both cases Brick ∘ ∇∇Red ≡ ∇∇Red ∘ Brick.
Red Fish. According to the previous tooth definition, Grey is a non-necessary feature of
Fish. Assume now that KB contains the axiom Red ⊑ ¬Grey. ∇∇Red ∘ Fish would then have
the strongly necessary feature Red, which overrules the original feature Grey. Also in this case
Fish ∘ ∇∇Red ≡ ∇∇Red ∘ Fish.</p>
        <p>Pink Elephant. This case is similar to that of Red Fish: Grey is a feature of Elephant
and KB contains Pink ⊑ ¬Grey. However, Grey is now a necessary feature of Elephant,
incompatible with Pink. In Elephant ∘ ∇∇Pink, Grey would be overruled by Pink, while
∇∇Pink ∘ Elephant ≡ Elephant, i.e., by being the Modifier, ∇∇Pink has no impact on Elephant.</p>
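The contrast between Red Fish and Pink Elephant can be sketched as a clash-resolution rule: a necessary feature of the modifier survives unless it clashes with a necessary feature of the head, in which case the head wins. This is a simplified illustration of the behaviour just described, with illustrative names, not the full algorithm.

```python
# Clash resolution, in miniature. A necessary modifier feature is kept unless
# it clashes with a *necessary* head feature; a clashing *non-necessary* head
# feature is overruled. Concept and feature names are illustrative.
CLASH = {frozenset({"red", "grey"}), frozenset({"pink", "grey"})}

def combine(head, nec_head, nec_mod):
    kept_mod = {m for m in nec_mod
                if not any(frozenset({m, n}) in CLASH for n in nec_head)}
    overruled = {h for h in head - nec_head
                 if any(frozenset({h, m}) in CLASH for m in kept_mod)}
    return (head - overruled) | kept_mod

# Red Fish: "grey" is non-necessary in Fish, so "red" wins:
red_fish = combine({"animal", "grey"}, nec_head={"animal"}, nec_mod={"red"})
# Pink Elephant: "grey" is necessary in Elephant, so "pink" is dropped:
pink_elephant = combine({"animal", "grey"}, nec_head={"animal", "grey"},
                        nec_mod={"pink"})
```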
        <p>These examples show that some expected characteristics of adjective-noun combinations
are obtained in our framework. For instance, when ∇∇Red plays the modifier role, Red is a
(strongly) necessary feature of ∇∇Red ∘ Book, ∇∇Red ∘ Apple, and ∇∇Red ∘ Fish. The case of
the Pink Elephant is less satisfactory because Pink overrules Grey only when ∇∇Pink is the
head concept (and not the modifier, as expected). Probably, the weakest aspect concerns the
fact that ∇∇Red ∘ Book ≡ Book ∘ ∇∇Red, ∇∇Red ∘ Apple ≡ Apple ∘ ∇∇Red, and ∇∇Red ∘ Fish ≡
Fish ∘ ∇∇Red. These equivalences show that our algorithm does not take into account the
strong asymmetry in the behaviour of adjectives vs. nouns, i.e., it is more tuned to noun-noun
combinations, where this asymmetry is less pronounced.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>6. Discussion and Related Work</title>
      <p>We proposed an algorithm for concept combination able to deal, within a logical framework,
with some of the phenomena observed in cognitive and experimental psychology. The algorithm
consists of three phases. First, it selects all the features needed for the combination, based on the
logical distinction between necessary and impossible features. Second, it assigns new weights
to the features of the combined concept, trying to preserve the relevance of features. Finally,
it determines the threshold so as to ensure that the consistent and strongly necessary features are
preserved in the combination.</p>
      <p>Different assignments of weights and threshold can, however, be considered. A first alternative
consists in strengthening the relevance of the necessary features. In order to do that, we can
maintain the original weights of the necessary features, collect the original weights of the
non-necessary features, and then normalise the latter so that their sum remains strictly lower than the
weight of any of the necessary features. To preserve the necessary features, it suffices to set the
threshold in the interval between the sum of the weights of the necessary features and the sum
of the weights of all the features minus the highest weight of the non-necessary features.</p>
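The rule just stated is easy to implement and to check against the numbers that follow (one necessary feature of weight 3; non-necessary weights 0.5, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4), which indeed yield the interval (3, 5.1). A minimal sketch:

```python
# The alternative threshold rule: any threshold strictly between the sum of
# the necessary weights and the sum of all weights minus the largest
# non-necessary weight preserves exactly the necessary features.
def threshold_interval(necessary, non_necessary):
    lower = sum(necessary)
    upper = sum(necessary) + sum(non_necessary) - max(non_necessary)
    return lower, upper

nec = [3.0]                                    # the necessary feature of Sport
non_nec = [0.5, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4]  # renormalised non-necessary weights
lo, hi = threshold_interval(nec, non_nec)      # approximately (3.0, 5.1)
```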
      <p>
        Considering again the example about Sport and Game, we obtain (where the threshold lies in (3, 5.1)):
Sport ∘ Game = ∇∇(( ℎ, 3), (∃ . , 0.5), (, 0.3),
(∃ .ℎ, 0.3), (∃.ℎ, 0.3),
(, 0.4), (¬∃ℎ., 0.4),
(∃., 0.4))
When the threshold takes the minimal value 3, the necessary features become also sufficient for the
classification under Sport ∘ Game. Being inspired by the work in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], our model accounts mostly for the
kind of combinations analysed there, namely conjunctive, noun-noun combinations such as a
Sport which is a Game or a Tool which is a Weapon. This does not exhaust the whole spectrum of
combinations. According to [25], there exist at least two other types of noun-noun combinations.
Consider a robin snake [25, p. 168]: it may be interpreted simply as a snake with a red
under-belly, namely in terms of a property interpretation. In these cases, only a single (maybe
very salient) property of the Modifier applies to the Head. Our algorithm can be modified to
deal with this kind of combination by exploiting the flexibility of the weight assignment and
by strengthening the role of the Head. Still, our procedure would return essentially a ‘mash-up’
of the concepts being combined. Besides, a robin snake may also be interpreted as a snake that
eats robins, i.e., according to a relation-linking interpretation, where the eat-relationship holding
between the snake and the robin is crucial. To account for this kind of interpretation, our
algorithm would require additional contextual information, or possibly reference to specific
background ontologies.7
      </p>
      <p>
        In a formal context, noun-noun combinations have been analysed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] proposes
a model for concept combination based on conceptual spaces (as firstly introduced in [ 24]).
      </p>
      <p>7The example of Brick Red, where we assume that Brick Red (but not Red Brick) represents a color (the color
of bricks), seems to require a relation-linking interpretation.</p>
      <p>
        Exploiting the idea of a hierarchy of conceptual spaces, the approach proposed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] is able
to merge different spaces (corresponding to different concepts) to account for some of the
phenomena observed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. As acknowledged by the authors themselves, however, many of
the analyzed combinations strongly rely on the choice of the probability distribution associated
with the membership function of the concepts being combined, which can in some cases lead to quite
unexpected results (e.g., a tree being classified as a Pet Fish with a probability of 0.2). Also, since
their model lacks any appeal to a logical inference mechanism, the analysis of the notions
of impossibility and necessity is based exclusively on the role of negation: the necessity of a
dimension (a feature, in our setting) d corresponds to the impossibility of ¬d. To account for
this notion, a direct negation of the features involved is needed, and it is unclear how this idea
may be exploited in more complex scenarios, e.g., the Pet Fish example analyzed in Sect. 5.1.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] introduces a nonmonotonic Description Logic of typicality. Two kinds of properties can
be associated to a concept C: (i) rigid properties define C (e.g., C ⊑ D); while (ii) typical
properties of the form p :: T(C) ⊑ D represent the degree of belief p of the typicality inclusion of
C into D. Distinguishing between a head concept H and a modifier concept M, the proposed
algorithm outputs the set of typical (and rigid) properties of M ∘ H. To determine the properties
of M ∘ H, the algorithm selects the most probable scenario (i.e., a selection among the union of
the typical properties of H and M) satisfying three conditions: (i) it is consistent (including
the rigid properties); (ii) it is non-trivial, i.e., it does not include all the typical properties of H;
(iii) in case of pairs of typical properties of the form p :: T(H) ⊑ D and p′ :: T(M) ⊑ ¬D,
the second one is discarded. Applied to Pet and Fish, the algorithm shows that Greyish is
a typical property of Fish but not of Pet ∘ Fish. However, first notice that condition (ii) is
established a priori, i.e., it is not the result of a general combination mechanism. Our algorithm
guarantees this property only when some non-necessary features of H are inconsistent (in
KB) with the necessary features of M (which are consistent with all the necessary features of
H). However, our algorithm can be easily modified to always discard some feature of H, even
though this sounds quite artificial to us. Second, in our framework, condition (iii) somewhat
corresponds to the rule discarding the necessary features of M inconsistent with the necessary
features of H. However, incompatible non-necessary features are not discarded by our algorithm.
Third, in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] the possibility to rule out Greyish depends not only on the degree of belief about
T(Fish) ⊑ Greyish but also on the number of typical properties of Fish with a lower degree
of belief. In our approach Greyish is discarded only when it is inconsistent with the conjunction
of the necessary features of Pet (which are consistent with all the necessary features of Fish).
      </p>
      <p>
        The approach proposed here is rooted in the Prototype Theory paradigm, both in terms
of general inspiration and in terms of the strategies adopted for the combination of concepts. At
the same time, by appealing to an external KB, most of the examples proposed here are also
indebted to the so-called Theory View on concepts, namely the idea that concepts cannot stand
in isolation, but should be represented as micro-theories expressing our knowledge about the
world. To deal with this, we mostly exploited the KB in terms of TBox axioms, expressing the
background knowledge needed to carry out the combinations. By taking into account ABox
statements, expressing particular knowledge about individuals, we may also take into account
the Exemplar View on concepts, namely the idea that categorization under a concept is
based on the exemplars stored in memory. This may be particularly useful in the context of
the weight assignment in the combination procedure: as mentioned above, we could tune the
weight of a feature according to the number of instances, within our ABox, satisfying that
feature. This may be considered a refinement of phase 2 of our algorithm, exploiting
what has been done in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. This is, however, a matter for future work.
      </p>
      <p>
Italy, September 16-20, 2020, Proceedings, volume 12387 of Lecture Notes in Computer
Science, Springer, 2020, pp. 183–193. doi:10.1007/978-3-030-61244-3_13.
[15] P. Galliani, O. Kutz, D. Porello, G. Righetti, N. Troquard, On knowledge dependence in
weighted description logic, in: D. Calvanese, L. Iocchi (Eds.), GCAI 2019. Proceedings of
the 5th Global Conference on Artificial Intelligence, Bozen/Bolzano, Italy, 17-19 September
2019, volume 65 of EPiC Series in Computing, EasyChair, 2019, pp. 68–80.
[16] R. Jackendoff, English noun-noun compounds in conceptual semantics, The semantics of
compounding (2016) 15–37.
[17] J. A. Hampton, Overextension of conjunctive concepts: Evidence for a unitary model of
concept typicality and class inclusion., Journal of Experimental Psychology: Learning,
Memory, and Cognition 14 (1988) 12–32.
[18] L. Wittgenstein, Tractatus Logico-Philosophicus, New York: Routledge, 2001 [1921].
[19] J. A. Hampton, Compositionality and concepts, in: J. A. Hampton, Y. Winter (Eds.),
Compositionality and Concepts in Linguistics and Psychology, Springer International
Publishing, Cham, 2017, pp. 95–121.
[20] G. Righetti, D. Porello, N. Troquard, O. Kutz, M. Hedblom, P. Galliani, Asymmetric hybrids:
Dialogues for computational concept combination, in: Formal Ontology in Information
Systems: Proceedings of the 12th International Conference (FOIS 2021), IOS Press, 2021.
[21] M. Eppe, E. Maclean, R. Confalonieri, O. Kutz, M. Schorlemmer, E. Plaza, K.-U. Kühnberger,
A computational framework for conceptual blending, Artificial Intelligence 256 (2018)
105–129.
[22] R. Confalonieri, M. Eppe, M. Schorlemmer, O. Kutz, R. Peñaloza, E. Plaza, Upward
refinement operators for conceptual blending in the description logic ℰℒ++, Ann.
Math. Artif. Intell. 82 (2018) 69–99. URL: https://doi.org/10.1007/s10472-016-9524-8.
doi:10.1007/s10472-016-9524-8.
[23] G. Fauconnier, M. Turner, The Way We Think: Conceptual Blending and the Mind’s Hidden
      </p>
      <p>Complexities, Basic Books, New York, 2003.
[24] P. Gärdenfors, Conceptual spaces - The geometry of thought, MIT Press, 2000.
[25] E. J. Wisniewski, When concepts combine, Psychonomic bulletin &amp; review 4 (1997)
167–183.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Hampton</surname>
          </string-name>
          ,
          <article-title>Inheritance of attributes in natural concept conjunctions</article-title>
          ,
          <source>Memory &amp; Cognition</source>
          <volume>15</volume>
          (
          <year>1987</year>
          )
          <fpage>55</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Fodor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Lepore</surname>
          </string-name>
          ,
          <article-title>The red herring and the pet fish: Why concepts still can't be prototypes</article-title>
          ,
          <source>Cognition</source>
          <volume>58</volume>
          (
          <year>1996</year>
          )
          <fpage>253</fpage>
          -
          <lpage>270</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Rosch</surname>
          </string-name>
          , Principles of categorization, in: E. Margolis, S. Laurence (Eds.), Concepts:
          <source>Core Readings</source>
          , volume
          <volume>189</volume>
          , MIT press,
          <year>1999</year>
          , pp.
          <fpage>189</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Rosch</surname>
          </string-name>
          ,
          <article-title>Cognitive representations of semantic categories</article-title>
          ,
          <source>Journal of Experimental Psychology:General</source>
          <volume>104</volume>
          (
          <year>1975</year>
          )
          <fpage>192</fpage>
          -
          <lpage>233</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Murphy</surname>
          </string-name>
          ,
          <source>The Big Book of Concepts</source>
          , MIT press,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D. N.</given-names>
            <surname>Osherson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>On the adequacy of prototype theory as a theory of concepts</article-title>
          ,
          <source>Cognition</source>
          <volume>9</volume>
          (
          <year>1981</year>
          )
          <fpage>35</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lawry</surname>
          </string-name>
          ,
          <article-title>Hierarchical conceptual spaces for concept combination</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>237</volume>
          (
          <year>2016</year>
          )
          <fpage>204</fpage>
          -
          <lpage>227</lpage>
          . doi:10.1016/j.artint.2016.04.008.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lieto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          ,
          <article-title>A description logic framework for commonsense conceptual combination integrating typicality, probabilities and cognitive heuristics</article-title>
          ,
          <source>J. Exp. Theor. Artif. Intell</source>
          .
          <volume>32</volume>
          (
          <year>2020</year>
          )
          <fpage>769</fpage>
          -
          <lpage>804</lpage>
          . doi:10.1080/0952813X.2019.1672799.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          , G. Righetti,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Masolo</surname>
          </string-name>
          ,
          <article-title>A toothful of concepts: Towards a theory of weighted concept combination</article-title>
          , in: M.
          <string-name>
            <surname>Simkus</surname>
          </string-name>
          , G. E. Weddell (Eds.),
          <source>Proceedings of the 32nd International Workshop on Description Logics</source>
          , Oslo, Norway, June 18-21,
          <year>2019</year>
          , volume
          <volume>2373</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2019</year>
          . URL: http://ceur-ws.org/Vol-2373/paper-24.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. N.</given-names>
            <surname>Osherson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Rips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Keane</surname>
          </string-name>
          ,
          <article-title>Combining prototypes: A selective modification model</article-title>
          ,
          <source>Cognitive Science 12</source>
          (
          <year>1988</year>
          )
          <fpage>485</fpage>
          -
          <lpage>527</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Masolo</surname>
          </string-name>
          ,
          <article-title>Pink panthers and toothless tigers: three problems in classification</article-title>
          .,
          <source>in: Proceedings of the 7th International Workshop on Artificial Intelligence and Cognition (AIC</source>
          <year>2019</year>
          ),
          <year>2019</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Masolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <article-title>Weighted description logic for classification problems</article-title>
          , in: D.
          <string-name>
            <surname>Calvanese</surname>
          </string-name>
          , L. Iocchi (Eds.),
          <source>GCAI 2019. Proceedings of the 5th Global Conference on Artificial Intelligence</source>
          , Bozen/Bolzano, Italy, 17-19 September
          <year>2019</year>
          , volume
          <volume>65</volume>
          of EPiC Series in Computing, EasyChair,
          <year>2019</year>
          , pp.
          <fpage>108</fpage>
          -
          <lpage>112</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          , I. Horrocks,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Sattler</surname>
          </string-name>
          , An Introduction to Description Logic, Cambridge University Press,
          <year>2017</year>
          . doi:10.1017/9781139025355.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <article-title>Perceptron connectives in knowledge representation</article-title>
          , in: C.
          <string-name>
            <surname>M. Keet</surname>
          </string-name>
          , M. Dumontier (Eds.),
          <source>Knowledge Engineering and Knowledge Management - 22nd International Conference, EKAW</source>
          <year>2020</year>
          , Bolzano,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>