<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On a Plausible Concept-wise Multipreference Semantics and its Relations with Self-organising Maps</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Laura Giordano</string-name>
          <email>laura.giordano@uniupo.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentina Gliozzi</string-name>
          <email>valentina.gliozzi@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Theseider Dupré</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Logic</institution>
          ,
          <addr-line>Language and Cognition, Dipartimento di Informatica, Università di Torino</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>DISIT - Università del Piemonte Orientale</institution>
          ,
          <addr-line>Alessandria</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we describe a concept-wise multi-preference semantics for description logics which has its roots in the preferential approach for modeling defeasible reasoning in knowledge representation. We argue that this proposal, besides satisfying some desirable properties, such as the KLM postulates, and avoiding the drowning problem, also defines a plausible notion of semantics. We motivate the plausibility of the concept-wise multi-preference semantics by developing a logical semantics of self-organising maps, which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalisation, in terms of multi-preference interpretations.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Conditional logics have their roots in philosophical logic. They were first studied by
Lewis [
        <xref ref-type="bibr" rid="ref25 ref28">25, 28</xref>
        ] to formalize hypothetical and counterfactual reasoning (if A were the case
then B) that cannot be captured by classical logic with its material implication. Since the
1980s they have been considered in computer science and artificial intelligence, where they
have provided an axiomatic foundation for non-monotonic and common sense reasoning
[
        <xref ref-type="bibr" rid="ref12 ref23">12, 23</xref>
        ]. In particular, preferential approaches [
        <xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>
        ] to common sense reasoning
have more recently been extended to description logics, to deal with inheritance with
exceptions in ontologies, allowing for non-strict forms of inclusions, called typicality or
defeasible inclusions (namely, conditionals), with different preferential semantics [
        <xref ref-type="bibr" rid="ref15 ref5">15, 5</xref>
        ]
and closure constructions [
        <xref ref-type="bibr" rid="ref18 ref30 ref6 ref7">7, 6, 18, 30</xref>
        ].
      </p>
      <p>
        In this paper we consider a “concept-aware” multipreference semantics [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] that
has been recently introduced for a lightweight description logic of the EL family,
which takes into account preferences with respect to different concepts, and integrates
them into a preferential semantics. To support the plausibility of this semantics we
show that it can be used to provide a logical semantics of self-organising maps [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
Self-organising maps (SOMs) have been proposed as possible candidates to explain the
psychological mechanisms underlying category generalisation. They are psychologically
and biologically plausible neural network models that can learn after limited exposure to
positive category examples, without any need of contrastive information.
      </p>
      <p>
        We show that the process of category generalization in self-organising maps produces,
as a result, a multipreference model in which a preference relation is associated with each
concept (each learned category), and the combination of the preferences into a global
one, following the approach in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], defines a standard KLM preferential model. The
model can be exploited to learn or validate conditional knowledge from the empirical
data used in the category generalization process, and the evaluation of conditionals can
be done by model checking, using the information recorded in the SOM.
      </p>
      <p>Based on the assumption that the abstraction process in the SOM is able to identify
the most typical exemplars for a given category, in the semantic representation of a
category, we will identify some specific exemplars (namely, the best matching units of
the category) as the typical exemplars of the category, thus defining a preference relation
among the instances of a category.</p>
      <p>The category generalization process can then be regarded as a model building process
and, in a way, as a belief revision process. Indeed, initially we have no belief about
the category of any exemplar. During training, the current state of the SOM
corresponds to a model representing the beliefs about the input exemplars considered
so far (concerning their category). Each time a new input exemplar is considered, this
model is revised by adding the exemplar to the proper category.</p>
    </sec>
    <sec id="sec-1-1">
      <title>Preliminaries: the description logic EL⊥</title>
      <p>
        We consider the description logic EL⊥ of the EL family [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Let NC be a set of concept
names, NR a set of role names and NI a set of individual names. The set of EL⊥
concepts is defined as follows: C := A | ⊤ | ⊥ | C ⊓ C | ∃r.C, where A ∈ NC
and r ∈ NR. Observe that union, complement and universal restriction are not
EL⊥ constructs. A knowledge base (KB) K is a pair (T, A), where T is a TBox and A
is an ABox. The TBox T is a set of concept inclusions (or subsumptions) of the form
C ⊑ D, where C, D are concepts. The ABox A is a set of assertions of the form C(a)
and r(a, b), where C is a concept, r ∈ NR, and a, b ∈ NI.
      </p>
      <p>An interpretation for EL⊥ is a pair I = ⟨Δ, ·^I⟩ where Δ is a non-empty domain, a
set whose elements are denoted by x, y, z, …, and ·^I is an extension function that
maps each concept name A ∈ NC to a set A^I ⊆ Δ, each role name r ∈ NR to a binary
relation r^I ⊆ Δ × Δ, and each individual name a ∈ NI to an element a^I ∈ Δ. It is
extended to complex concepts as follows: ⊤^I = Δ, ⊥^I = ∅, (C ⊓ D)^I = C^I ∩ D^I and
(∃r.C)^I = {x ∈ Δ | ∃y. (x, y) ∈ r^I and y ∈ C^I}.
The notions of satisfiability of a KB in an interpretation and of entailment are defined as
usual:
Definition 1 (Satisfiability and entailment). Given an EL⊥ interpretation I = ⟨Δ, ·^I⟩:
- I satisfies an inclusion C ⊑ D if C^I ⊆ D^I;
- I satisfies an assertion C(a) if a^I ∈ C^I, and an assertion r(a, b) if (a^I, b^I) ∈ r^I.
Given a KB K = (T, A), an interpretation I satisfies T (resp. A) if I satisfies all
inclusions in T (resp. all assertions in A); I is a model of K if I satisfies T and A.</p>
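      <p>The extension function above lends itself to a direct computation over finite interpretations. The following is a minimal sketch of that computation; the encoding of concepts as nested tuples and all the data structures are our own illustrative choices, not from the paper:</p>

```python
# A minimal sketch of computing the extension function over a finite EL⊥
# interpretation.  Concepts are encoded as nested tuples: 'top', 'bot',
# ('name', A), ('and', C, D), ('exists', r, C).  This encoding is an
# illustrative assumption, not from the paper.

def extend(concept, domain, conc_ext, role_ext):
    """Compute C^I for a concept over the interpretation (domain, conc_ext, role_ext)."""
    if concept == 'top':                      # ⊤^I = Δ
        return set(domain)
    if concept == 'bot':                      # ⊥^I = ∅
        return set()
    tag = concept[0]
    if tag == 'name':                         # A^I, the extension of a concept name
        return set(conc_ext.get(concept[1], set()))
    if tag == 'and':                          # (C ⊓ D)^I = C^I ∩ D^I
        return extend(concept[1], domain, conc_ext, role_ext).intersection(
            extend(concept[2], domain, conc_ext, role_ext))
    if tag == 'exists':                       # (∃r.C)^I
        cext = extend(concept[2], domain, conc_ext, role_ext)
        rel = role_ext.get(concept[1], set())
        return {x for x in domain if any((x, y) in rel for y in cext)}
    raise ValueError(concept)

def satisfies_inclusion(c, d, domain, conc_ext, role_ext):
    """I satisfies C ⊑ D iff C^I ⊆ D^I."""
    return extend(c, domain, conc_ext, role_ext).issubset(
        extend(d, domain, conc_ext, role_ext))
```

      <p>Checking satisfiability of a KB then amounts to running satisfies_inclusion over all TBox inclusions and testing the ABox assertions directly against the extensions.</p>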
    </sec>
    <sec id="sec-2">
      <title>A concept-wise multi-preference semantics</title>
      <p>
        In this section we describe an extension of EL⊥ with typicality inclusions, defined along
the lines of the extension of description logics with typicality [
        <xref ref-type="bibr" rid="ref15 ref17">15, 17</xref>
        ], but we exploit
a different multi-preference semantics [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. In addition to standard EL⊥ inclusions
C ⊑ D (called strict inclusions in the following), the TBox T will also contain typicality
inclusions of the form T(C) ⊑ D, where C and D are EL⊥ concepts. A typicality
inclusion T(C) ⊑ D means that “typical C’s are D’s” or “normally C’s are D’s” and
corresponds to a conditional implication C |∼ D in Kraus, Lehmann and Magidor’s (KLM)
preferential approach [
        <xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>
        ]. Such inclusions are defeasible, i.e., admit exceptions,
while strict inclusions must be satisfied by all domain elements.
      </p>
      <p>
        Let C = {C1, …, Ck} be a set of (arbitrary) EL⊥ concepts, called distinguished
concepts. For each concept Ci ∈ C, we introduce a modular preference relation &lt;Ci
which describes the preference among domain elements with respect to Ci. Each
preference relation &lt;Ci has the same properties as the preference relations in KLM-style ranked
interpretations [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]: it is a modular and well-founded strict partial order, i.e., an irreflexive
and transitive relation, where &lt;Ci is well-founded if, for all S ⊆ Δ, if S ≠ ∅, then
min&lt;Ci(S) ≠ ∅; and &lt;Ci is modular if, for all x, y, z ∈ Δ, if x &lt;Ci y then x &lt;Ci z
or z &lt;Ci y.
      </p>
      <sec id="sec-2-1">
        <title>Definition 2 (Multipreference interpretation)</title>
        <p>
          A multipreference interpretation is a tuple M = ⟨Δ, &lt;C1, …, &lt;Ck, ·^I⟩, where:
(a) Δ is a non-empty domain;
(b) each &lt;Ci is an irreflexive, transitive, well-founded and modular relation over Δ;
(c) ·^I is an interpretation function, defined as in EL⊥ interpretations (see Section 2).
Observe that, given a multipreference interpretation, the triple MCi = ⟨Δ, &lt;Ci, ·^I⟩, which
can be associated to each concept Ci, is a ranked interpretation like those considered for
EL⊥ plus typicality in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. The preference relation &lt;Ci allows the set of prototypical
Ci-elements to be defined as the set of Ci-elements which are minimal with respect to
&lt;Ci, i.e., the set min&lt;Ci(Ci^I). As a consequence, the multipreference interpretation
above is able to single out the typical Ci-elements, for all distinguished concepts Ci ∈ C.
        </p>
        <p>
          The multipreference structures above are at the basis of the semantics for ranked
EL⊥ knowledge bases [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], which is inspired by Brewka’s framework of basic
preference descriptions [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. A ranked TBox TCi is allowed for each concept Ci ∈ C, and
contains all the defeasible inclusions T(Ci) ⊑ D specifying the typical properties of
Ci-elements. Ranks (non-negative integers) are assigned to such inclusions; the ones
with higher rank are considered more important than the ones with lower rank.
        </p>
        <p>Consider, for instance, the ranked knowledge base K = ⟨Tstrict, TEmployee, TStudent,
TPhDStudent, A⟩, over the set of distinguished concepts C = {Employee, Student,
PhDStudent}, with empty ABox, and with Tstrict the set of strict inclusions:</p>
        <p>Employee ⊑ Adult;  Adult ⊑ ∃has_SSN.⊤;  PhDStudent ⊑ Student</p>
        <p>
          Young ⊓ NotYoung ⊑ ⊥;  ∃hasScholarship.⊤ ⊓ Has_no_Scholarship ⊑ ⊥,
where the ranked TBox TEmployee = {(d1, 0), (d2, 0)} contains the defeasible
inclusions:
(d1) T(Employee) ⊑ NotYoung
(d2) T(Employee) ⊑ ∃has_boss.Employee;
the ranked TBox TStudent = {(d3, 0), (d4, 1), (d5, 1)} contains the defeasible
inclusions:
(d3) T(Student) ⊑ ∃has_classes.⊤
(d4) T(Student) ⊑ Young
(d5) T(Student) ⊑ Has_no_Scholarship;
and the ranked TBox TPhDStudent = {(d6, 0), (d7, 1)} contains the inclusions:
(d6) T(PhDStudent) ⊑ ∃hasScholarship.Amount
(d7) T(PhDStudent) ⊑ Bright
Exploiting the fact that for an EL⊥ knowledge base we can restrict our attention
to finite domains [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], and considering canonical models for EL⊥ [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] which are large
enough to contain a domain element for each possible consistent concept occurring in
K (and its complement), the ranked knowledge base K above gives rise to canonical
models, where the three preference relations &lt;Employee, &lt;Student, and &lt;PhDStudent
represent the preference among the elements of the domain according to the concepts
Employee, Student, and PhDStudent, respectively.
        </p>
        <p>
          While we refer to [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] for the construction of the preference relations &lt;Ci from a
ranked knowledge base K, in the following we recall the notion of concept-wise
multi-preference interpretation, which can be obtained by combining the preference
relations &lt;Ci into a global preference relation &lt;. This is needed for reasoning about
the typicality of arbitrary EL⊥ concepts C which do not belong to the set of
distinguished concepts C. For instance, we may want to verify whether typical employed
students are young, or whether they have a boss. To answer these queries both preference
relations &lt;Employee and &lt;Student are relevant, and they might be conflicting for some
pairs of domain elements: for instance, tom may be more typical than bob as a student
(tom &lt;Student bob), but more exceptional as an employee (bob &lt;Employee tom).
        </p>
        <p>To define a global preference relation, we take into account the specificity relation
among concepts, such as, for instance, the fact that a concept like PhDStudent is more
specific than the concept Student. The idea is that, in case of conflicts, the properties of
a more specific class (such as that PhD students normally have a scholarship) should
override the properties of a less specific class (such as that students normally do not have
a scholarship).</p>
        <p>Definition 3 (Specificity). A specificity relation among concepts in C is a binary relation ≻ ⊆ C × C which is irreflexive and transitive.</p>
        <p>
          For Ch, Cj ∈ C, Ch ≻ Cj means that Ch is more specific than Cj. The simplest notion
of specificity among concepts with respect to a knowledge base K is based on the
subsumption hierarchy: Ch ≻ Cj if Tstrict ⊨EL⊥ Ch ⊑ Cj and Tstrict ⊭EL⊥ Cj ⊑
Ch. This is one of the notions of specificity considered for DL^N [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Another one is
based on the ranking of concepts in the rational closure of K.
        </p>
        <p>
          Let us recall the notion of concept-wise multipreference interpretation [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
Definition 4 (concept-wise multipreference interpretation). A concept-wise
multipreference interpretation (or cwm-interpretation) is a tuple M = ⟨Δ, &lt;C1, …, &lt;Ck, &lt;, ·^I⟩
such that:
(a) Δ is a non-empty domain;
(b) for each i = 1, …, k, &lt;Ci is an irreflexive, transitive, well-founded and modular
relation over Δ;
(c) &lt; is a (global) preference relation over Δ defined from &lt;C1, …, &lt;Ck as follows:
x &lt; y iff (i) x &lt;Ci y, for some Ci ∈ C, and
(ii) for all Cj ∈ C, x ≤Cj y or ∃Ch (Ch ≻ Cj and x &lt;Ch y);
(d) ·^I is an interpretation function, as defined for EL⊥ interpretations (see Section 2),
with the addition that, for typicality concepts, we let:
        </p>
        <p>(T(C))^I = min&lt;(C^I)</p>
        <p>Relation &lt; is defined from &lt;C1, …, &lt;Ck based on a modified Pareto condition: x &lt; y
holds if there is at least one Ci ∈ C such that x &lt;Ci y and, for all Cj ∈ C, either x ≤Cj y
holds or, in case it does not, there is some Ch more specific than Cj such that x &lt;Ch y
(preference &lt;Ch in this case overrides &lt;Cj). The idea is that, for two PhD students (who
are also students) Bob and Mary, if mary &lt;Student bob and bob &lt;PhDStudent mary,
we will have bob &lt; mary, that is, Bob is regarded as being globally more typical than
Mary, as he satisfies more properties of typical PhD students than Mary, although Mary
may satisfy additional properties of typical students with respect to Bob.</p>
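      <p>Point (c) of Definition 4 can be read operationally. The sketch below checks whether x is globally preferred to y under the modified Pareto condition; the extensional encoding of the concept-wise preferences and of the specificity relation as sets of pairs is our own illustrative choice, not from the paper:</p>

```python
# Sketch of the combination in Definition 4, point (c).  Each concept-wise
# preference and the specificity relation are given extensionally as sets of
# pairs; this encoding and the data below are illustrative, not from the paper.

def globally_preferred(x, y, prefs, more_specific):
    """prefs: concept name mapped to a set of pairs (u, v), meaning u is
    preferred to v wrt that concept.  more_specific: set of pairs (Ch, Cj)
    with Ch ≻ Cj.  We read 'x ≤Cj y' as: y is not preferred to x wrt Cj."""
    concepts = prefs.keys()
    # (i) x is preferred to y wrt at least one distinguished concept
    if not any((x, y) in prefs[c] for c in concepts):
        return False
    # (ii) every concept preferring y to x must be overridden by a more
    # specific concept preferring x to y
    for cj in concepts:
        if (y, x) not in prefs[cj]:        # x ≤Cj y holds
            continue
        if any((ch, cj) in more_specific and (x, y) in prefs[ch]
               for ch in concepts):
            continue                       # some Ch ≻ Cj prefers x to y
        return False
    return True
```

      <p>On the Bob and Mary example above, with mary preferred to bob wrt Student, bob preferred to mary wrt PhDStudent, and PhDStudent ≻ Student, the function yields bob globally preferred to mary, and not vice versa.</p>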
        <p>
          It has been proven [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] that, given a cwm-interpretation M = ⟨Δ, &lt;C1, …, &lt;Ck, &lt;, ·^I⟩,
the relation &lt; is an irreflexive, transitive and well-founded relation. Hence, the triple
M′ = ⟨Δ, &lt;, ·^I⟩ is a KLM-style preferential interpretation, as those introduced for EL⊥
with typicality [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] (and it is not necessarily a modular interpretation). A cwm-model of
a ranked EL⊥ knowledge base K is then defined as a specific preferential interpretation
which builds on the preference relations &lt;Ci, constructed from the ranked TBoxes TCi,
and satisfies all strict inclusions and assertions in K. The notion of cwm-entailment,
defined in the obvious way, satisfies the KLM postulates of a preferential consequence
relation, and does not suffer from the drowning problem, a well-known problem of the
rational closure and System Z [
          <xref ref-type="bibr" rid="ref2 ref29">29, 2</xref>
          ]: roughly speaking, the problem that, if a subclass
of C is exceptional for a given aspect, it is exceptional tout court and does not inherit
any of the typical properties of C. We refer to [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] for a discussion of the properties of this notion of
entailment through some examples. In the next section we motivate the plausibility of
this concept-wise multipreference semantics by showing that it is well suited to provide a
semantic characterization of self-organising maps [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Self-organising maps</title>
      <p>
        Self-organising maps (SOMs, introduced by Kohonen [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]) are particularly plausible
neural network models that learn in a human-like manner. In particular, SOMs learn to
organize stimuli into categories in an unsupervised way, without the need of a teacher
providing feedback; they can learn with just a few positive stimuli, without the need
for negative examples or contrastive information; they reflect basic constraints of a
plausible brain implementation in different areas of the cortex [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], and are therefore
biologically plausible models of category formation; and they have proven capable of
explaining experimental results.
      </p>
      <p>
        In this section we briefly describe the architecture of SOMs and report Gliozzi and
Plunkett’s similarity-based account of category generalization based on SOMs [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. In
brief, in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] the authors judge a new stimulus as belonging to a category by comparing
the distance of the stimulus from the category representation to the precision of the
category representation.
      </p>
      <p>
        SOMs consist of a set of neurons, or units, spatially organized in a grid [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], as in
Figure 1. Each map unit u is associated with a weight vector wu of the same
dimensionality as the input vectors. At the beginning of training, all weight vectors are initialized to
random values, outside the range of values of the input stimuli. During training, the input
elements are sequentially presented to all neurons of the map. After each presentation of
an input x, the best-matching unit (BMUx) is selected: this is the unit i whose weight
vector wi is closest to the stimulus x (i.e., i = argminj ‖x − wj‖).
      </p>
      <p>
        The weights of the best matching unit and of its surrounding units are updated in order
to maximize the chances that the same unit (or its surrounding units) will be selected
as the best matching unit for the same stimulus or for similar stimuli on subsequent
presentations. In particular, the update reduces the distance between the best matching unit’s
weights (and its surrounding neurons’ weights) and the incoming input. Furthermore, it
organizes the map topologically, so that the weights of close-by neurons are updated in a
similar direction and come to react to similar inputs. We refer to [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] for a complete
description.
      </p>
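      <p>One training step of the map can be sketched as follows. This is a standard Kohonen update; the grid size, learning rate, neighbourhood width and input dimensionality below are illustrative choices of ours, not taken from the paper:</p>

```python
import numpy as np

# Minimal sketch of one SOM training step (standard Kohonen rule).
# Grid size, learning rate and neighbourhood width are illustrative.

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3
# weight vectors initialized outside the range of the input stimuli
weights = rng.uniform(2.0, 3.0, size=(grid_h, grid_w, dim))

def train_step(x, weights, lr=0.5, sigma=1.5):
    # 1. find the best-matching unit: the unit whose weights are closest to x
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # 2. move the BMU and its neighbours towards x, with a Gaussian
    #    neighbourhood so that close-by units are updated in a similar direction
    rows, cols = np.indices((weights.shape[0], weights.shape[1]))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)   # in-place update of the map
    return bmu
```

      <p>Presenting the same stimulus repeatedly moves the BMU's weight vector, and those of its neighbours, progressively closer to it, which is what makes the same region of the map respond to similar inputs.</p>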
      <p>The learning process is incremental: after the presentation of each input, the map’s
representation of the input (and in particular the representation of its best-matching unit)
is updated in order to take into account the new incoming stimulus. At the end of the
whole process, the SOM has learned to organize the stimuli in a topologically significant
way: similar inputs (with respect to Euclidean distance) are mapped to close-by areas of
the map, whereas inputs which are far apart from each other are mapped to distant areas
of the map.</p>
      <p>
        Once the SOM has learned to categorize, to assess category generalization, Gliozzi
and Plunkett [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] define the map’s disposition to consider a new stimulus y as a member
of a known category C as a function of the distance of y from the map’s representation
of C. They take a minimalist notion of the map’s category representation: this
is the ensemble of best-matching units corresponding to the known instances of the
category. They use BMUC to refer to the map’s representation of category C and define
category generalization as depending on two elements:
– the distance of the new stimulus y from the category representation,
– compared to the maximal distance from that representation of all known instances
of the category.
This is captured by the following notion of relative distance (rd for short) [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]:
      </p>
      <p>rd(y, C) = min ‖y − BMUC‖ / maxx∈C ‖x − BMUx‖   (1)</p>
      <p>where min ‖y − BMUC‖ is the (minimal) Euclidean distance between y and C’s
category representation, and maxx∈C ‖x − BMUx‖ expresses the precision of the category
representation: it is the (maximal) Euclidean distance between any known member of
the category and the category representation.</p>
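      <p>Equation (1) can be computed directly from the stored best-matching units. The sketch below implements rd; the data structures (a list of BMU vectors, and known instances paired with their own BMUs) are illustrative assumptions of ours:</p>

```python
import numpy as np

# Sketch of Equation (1).  bmu_c is the ensemble BMU_C of best-matching units
# of the known C-instances; members_with_bmu pairs each known instance x of C
# with its own best-matching unit BMU_x.  Data structures are illustrative.

def relative_distance(y, bmu_c, members_with_bmu):
    """rd(y, C): the minimal Euclidean distance of y from BMU_C, divided by
    the maximal distance of a known member of C from its best-matching unit
    (the precision of the category representation)."""
    numerator = min(np.linalg.norm(np.asarray(y) - np.asarray(u)) for u in bmu_c)
    denominator = max(np.linalg.norm(np.asarray(x) - np.asarray(u))
                      for x, u in members_with_bmu)
    return numerator / denominator
```

      <p>For instance, a stimulus lying at twice the category's precision from BMUC gets rd = 2, while a best-matching unit itself gets rd = 0.</p>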
      <p>With this definition, a given Euclidean distance from y to C’s category representation
will give rise to a higher relative distance rd if the maximal distance between C and its
known examples is low (and the category representation is precise) than if it is high (and
the category representation is coarse). As a function of the relative distance above, Gliozzi
and Plunkett then define the map’s Generalization Degree of category C membership for
a new stimulus y.</p>
      <p>It was observed that the above notion of relative distance (Equation 1) requires there
to be a memory of some of the known instances of the category being used (this is
needed to calculate the denominator in the equation). This gives rise to a sort of hybrid
model in which category representation and some exemplars coexist. An alternative
way of formulating the same notion of relative distance would be to calculate online the
distance between the known category instance currently examined and the representation of
the category being formed.</p>
      <p>
        By judging a new stimulus as belonging to a category by comparing the distance of
the stimulus from the category representation to the precision of the category
representation, Gliozzi and Plunkett demonstrate [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] that the Numerosity and Variability effects
of category generalization, described by Griffiths and Tenenbaum [
        <xref ref-type="bibr" rid="ref32">32</xref>
          ], and usually
explained with Bayesian tools, can be accommodated within a simple and psychologically
plausible similarity-based account, contrary to what was previously maintained. In
the next section, we show that their notion of relative distance can also be used as a basis
for a logical semantics of SOMs.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Relating self-organising Maps and multi-preference models</title>
      <p>We aim at showing that, once the SOM has learned to categorize, we can regard the
result of the categorization as a multipreference interpretation. Let X be the set of input
stimuli from different categories C1, …, Ck which have been considered during the
learning process.</p>
      <p>For each category Ci, we let BMUCi be the ensemble of best-matching units
corresponding to the input stimuli of category Ci, i.e., BMUCi = {BMUx | x ∈
X and x ∈ Ci}. We regard the learned categories C1, …, Ck as the concept
names (atomic concepts) of the description logic, and we let them constitute our set of
distinguished concepts C = {C1, …, Ck}.</p>
      <p>To construct a multi-preference interpretation we proceed as follows: first, we fix
the domain Δs to be the space of all possible stimuli; then, for each category (concept)
Ci, we define a preference relation &lt;Ci, exploiting the notion of relative distance of a
stimulus y from the map’s representation of Ci. Finally, we define the interpretation of
concepts.</p>
      <p>Let Δs be the set of all the possible stimuli, including all input stimuli (X ⊆ Δs)
as well as the best matching units of input stimuli (i.e., {BMUx | x ∈ X} ⊆ Δs). For
simplicity, we will assume that the space of input stimuli is finite.</p>
      <p>Once the SOM has learned to categorize, the notion of relative distance rd(x, Ci)
of a stimulus x from a category Ci introduced above can be used to build a binary
preference relation &lt;Ci among the stimuli in Δs w.r.t. category Ci as follows: for all
x, x′ ∈ Δs,
x &lt;Ci x′ iff rd(x, Ci) &lt; rd(x′, Ci)
(2)
Each preference relation &lt;Ci is a strict partial order on Δs. The relation &lt;Ci is
also well-founded, as we have assumed Δs to be finite.</p>
      <p>We exploit this notion of preference to define a multipreference interpretation
associated with the SOM, and then a cwm-model of the SOM. In the following we restrict
the DL language to the fragment of EL⊥ (plus typicality) without roles, as in the
self-organising map we do not have a representation of role names.</p>
      <sec id="sec-4-1">
        <title>Definition 5 (Multipreference-model of a SOM)</title>
        <p>The multipreference-model of the SOM is a multipreference interpretation Ms = ⟨Δs, &lt;C1, …, &lt;Ck, ·^I⟩ such that:
(i) Δs is the set of all the possible stimuli, as introduced above;
(ii) for each Ci ∈ C, &lt;Ci is the preference relation defined by equivalence (2);
(iii) the interpretation function ·^I is defined for concept names (i.e., categories) Ci as
follows:</p>
        <p>Ci^I = {y ∈ Δs | rd(y, Ci) ≤ rdmax,Ci},
where rdmax,Ci is the maximal relative distance of an input stimulus x ∈ Ci from
category Ci, that is, rdmax,Ci = maxx∈Ci {rd(x, Ci)}. The interpretation function
·^I is extended to complex concepts according to Definition 2.</p>
        <p>Informally, we interpret as Ci-elements those stimuli whose relative distance from
category Ci is not larger than the relative distance of any input exemplar belonging to
category Ci. Given &lt;Ci, we can identify the most typical Ci-elements wrt &lt;Ci as the
Ci-elements whose relative distance from category Ci is minimal, i.e., the elements in
min&lt;Ci(Ci^I). Observe that the best matching unit BMUx of an input stimulus x ∈ Ci
is an element of Δs. Hence, for y = BMUx, the relative distance rd(y, Ci) of y from
category Ci is 0, as min ‖y − BMUCi‖ = 0. Therefore, min&lt;Ci(Ci^I) = {y ∈ Δs |
rd(y, Ci) = 0} and BMUCi ⊆ min&lt;Ci(Ci^I).</p>
      </sec>
      <sec id="sec-4-2">
        <title>Evaluation of concept inclusions by model checking</title>
        <p>We have defined a multipreference interpretation Ms where, in the domain Δs of the
possible stimuli, we are able to identify, for each category Ci, the Ci-elements as well as
the most typical Ci-elements wrt &lt;Ci. We can exploit Ms to verify which inclusions are
satisfied by the SOM by model checking, i.e., by checking the satisfiability of inclusions
over the model Ms. This can be done both for strict concept inclusions of the form Ci ⊑ Cj
and for defeasible inclusions of the form T(Ci) ⊑ Cj, where Ci and Cj are concept
names (i.e., categories).</p>
        <p>For the verification that a typicality inclusion T(Ci) ⊑ Cj is satisfied in Ms,
we have to check that the most typical Ci-elements wrt &lt;Ci are Cj-elements, that is,
min&lt;Ci(Ci^I) ⊆ Cj^I. Note that, besides the elements in BMUCi, min&lt;Ci(Ci^I) may
contain other elements of Δs having relative distance 0 from Ci. As we do not know, for
all the possible input stimuli in Δs, whether they belong to min&lt;Ci(Ci^I) or to Cj^I, as an
approximation we only check that all elements in BMUCi are Cj-elements, that is:
for all input stimuli x ∈ Ci,  rd(BMUx, Cj) ≤ rdmax,Cj
(3)
Let the relative distance of BMUCi from Cj be defined as</p>
        <p>rd(BMUCi, Cj) = maxx∈Ci {rd(BMUx, Cj)},
i.e., as the maximal relative distance of any BMUx, for x ∈ Ci, from Cj. Then we can
rewrite condition (3) simply as
rd(BMUCi, Cj) ≤ rdmax,Cj.
Observe that the relative distance rd(BMUCi, Cj) also gives a measure of plausibility
of the defeasible inclusion T(Ci) ⊑ Cj: the lower the relative distance of BMUCi
from Cj, the more plausible the defeasible inclusion T(Ci) ⊑ Cj.</p>
        <p>Verifying that a strict inclusion Ci ⊑ Cj is satisfied requires checking that Ci^I is
included in Cj^I. Exploiting the fact that the map is organized topologically, and using
the relative distance rd(BMUCi, Cj) of BMUCi from Cj, we verify that the relative
distance of BMUCi from Cj plus the maximal relative distance of a Ci-element from
Ci is not greater than the maximal relative distance of a Cj-element from Cj:
rd(BMUCi, Cj) + rdmax,Ci ≤ rdmax,Cj
(4)
where rdmax,C = maxy∈C {rd(y, C)}. That is, the Ci-element most distant from Cj is
nearer to Cj than the most distant Cj-element.</p>
        <p>
          Computing conditions (3) and (4) on the SOM may be non-trivial, depending on
the number of input stimuli that have been considered in the learning phase (the size of
the set X of input exemplars). However, from a logical point of view, this is just model
checking. Gliozzi and Plunkett have considered self-organising maps that are able to
learn from a limited number of input stimuli, although this is not generally true for all
self-organising maps [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>Combining preferences into a preferential interpretation</title>
        <p>The multipreference interpretation Ms introduced in Definition 5 allows us to determine
the set of Ci-elements for all learned categories Ci and to define the most typical
Ci-elements, exploiting the preference relation &lt;Ci. However, we are not able to define the
most typical Ci ⊓ Cj-elements just using a single preference. Starting from Ms, we
construct a concept-wise multipreference interpretation Msom that combines the
preference relations in Ms into a global preference relation &lt;, and provides an interpretation
for all typicality concepts such as, for instance, T(Ci ⊓ Cj ⊓ Ch). The interpretation Msom is
constructed from Ms according to Definition 4.</p>
        <p>The construction exploits a notion of specificity. Observe that the specificity relation
between two concepts Ci and Cj can be determined based on the single model Ms of
the SOM: Ci is more specific than Cj if Ci ⊑ Cj is satisfied in Ms and Cj ⊑ Ci is not satisfied in Ms.</p>
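<p>Once the strict inclusions satisfied in Ms are known, the specificity test is a simple lookup. A minimal sketch, assuming a hypothetical encoding of the satisfied inclusions as (subconcept, superconcept) pairs:</p>

```python
def more_specific(Ci, Cj, satisfied):
    """Specificity read off the single model Ms, as in the text:
    Ci is more specific than Cj iff the strict inclusion Ci <= Cj is
    satisfied in Ms while Cj <= Ci is not. `satisfied` is a hypothetical
    encoding: the set of (subconcept, superconcept) pairs whose strict
    inclusion holds in Ms."""
    return (Ci, Cj) in satisfied and (Cj, Ci) not in satisfied
```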
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Definition 6 (cwm-model of a SOM)</title>
      <p>The cwm-model of a SOM is a cwm-interpretation
Msom = ⟨Δs, &lt;C1, …, &lt;Ck, &lt;, ·I⟩, such that the tuple ⟨Δs, &lt;C1, …, &lt;Ck, ·I⟩ is a
multipreference model of the SOM according to Definition 5, and &lt; is the global
preference relation defined from &lt;C1, …, &lt;Ck as in Definition 4, point (c).</p>
      <p>In particular, in Msom, as in all cwm-interpretations (see Definition 4), the
interpretation of typicality concepts T(C) is defined based on the global preference relation
&lt; as (T(C))I = min&lt;(CI), for all concepts C. Here, we are considering concepts in
the fragment of the EL⊥ language without roles, which are built from the concept names
C1, …, Cn (the learned categories). The model Msom can be considered a sort of
(unique) canonical model for the SOM, representing what holds in that state of the SOM
(e.g., after the learning phase). The logical inclusions that “follow from the SOM” are
therefore the inclusions that hold in the single model Msom. The situation is similar to
the case of Horn clauses, where there is a unique minimal canonical model describing
all the (atomic) logical consequences of the knowledge base.</p>
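<p>As a minimal illustration of (T(C))I = min&lt;(CI): given the extension of a concept C and the global preference relation &lt; (here a hypothetical callback; the construction of &lt; from the &lt;Ci in Definition 4, point (c), is not reproduced in this excerpt), the typical elements are the &lt;-minimal ones:</p>

```python
def typical(extension, prefers):
    """(T(C))^I = min_<(C^I): the elements of C's extension that no other
    element of the extension is globally preferred to. `prefers(x, y)` is
    a hypothetical callback meaning x < y (x is preferred to y); the
    global relation itself is taken as given here."""
    return {x for x in extension if not any(prefers(y, x) for y in extension)}
```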
      <p>
        As Msom is a cwm-interpretation, the triple ⟨Δs, &lt;, ·I⟩ is a KLM-style preferential
interpretation [
        <xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>
        ]. It follows that the model Msom provides a well-defined logical semantics for
the SOM, as Msom determines a preferential interpretation and,
also, a preferential consequence relation satisfying all the KLM properties of preferential
consequence relations.
      </p>
      <p>
        The verification of arbitrary defeasible inclusions on Msom can, in principle, be
done by model checking, but it might require considering all the possibly many input
stimuli, i.e., all domain elements in Δs, which may be unfeasible in practice. As an
alternative, identifying the set of strict and defeasible inclusions satisfied by the
SOM over the learned categories C1, …, Ck (as done in Section 5.1) makes it possible to define an
EL⊥ knowledge base K and to reason on it symbolically, using for instance an approach
similar to the one described in Section 3 for ranked knowledge bases. In particular,
Answer Set Programming, and asprin, have been used to achieve defeasible reasoning
under the multipreference approach for the lightweight description logic EL⊥ [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
Ranked knowledge bases have been considered, where the rank of a defeasible inclusion
provides a measure of its plausibility, and multipreference
entailment has been reformulated as a problem of computing preferred answer sets.
As we have seen, a measure of plausibility can likewise be assigned to the defeasible
inclusions satisfied by the SOM.
      </p>
      <sec id="sec-5-1">
        <title>Category generalization process as iterated belief revision</title>
        <p>We have seen that one can give an interpretation of a self-organising map after the
learning phase, as a preferential model. However, the state of the SOM during the
learning phase can likewise be represented as a multipreference model (in precisely
the same way). During training, the current state of the SOM corresponds to a model
representing the beliefs about the input stimuli considered so far (beliefs concerning the
categories of the stimuli).</p>
        <p>The category generalization process can then be regarded as a model building process
and, in a way, as a belief revision process. Initially we do not know the category of the
stimuli in the domain Δs. In the initial model, call it Msom0 (over the domain Δs), the
interpretation of each concept Ci is empty. Msom0 is the model of a knowledge base K0
containing a strict inclusion Ci ⊑ ⊥, for all Ci.</p>
        <p>Each time a new input stimulus (x ∈ Ci) is considered, the model is revised by adding
the stimulus x (and its best matching unit BMUx) to the proper category (Ci). Not
only is the category interpretation revised by the addition of x and BMUx to CiI (so
that Ci ⊑ ⊥ does not hold any more), but the associated preference relation &lt;Ci is also
revised, as the addition of BMUx modifies the set of best matching units BMUCi for
category Ci, as well as the relative distance rd(y, Ci) of a stimulus y from Ci. That is, a
revision step may change the set of conditionals which are satisfied by the model.</p>
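<p>A revision step of this kind, and the training process that folds such steps over the input stream, can be sketched as follows; the dict-based state encoding and the helper names are assumptions made for illustration, not the paper's construction:</p>

```python
from functools import reduce

def revise(model, Ci, x, bmu):
    """One revision step M * Ci(x), as sketched above: add the stimulus x
    and its best-matching unit BMU_x to Ci's extension and record the new
    best-matching unit, so that Ci is no longer empty and the data that
    the preference <_Ci is computed from changes. Non-destructive: returns
    the revised model, leaving the previous one intact."""
    ext = {k: set(v) for k, v in model["ext"].items()}
    bmus = {k: set(v) for k, v in model["bmus"].items()}
    ext.setdefault(Ci, set()).update({x, bmu})
    bmus.setdefault(Ci, set()).add(bmu)
    return {"ext": ext, "bmus": bmus}

def train(model0, stream):
    """Fold the revision steps over the input stream, mirroring the
    sequence K = K0 * Ci1(xi1) * ... * Cir(xir)."""
    return reduce(lambda m, step: revise(m, *step), stream, model0)
```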
        <p>
          At the end of the training process, the final state of the SOM is captured by the
model Msom obtained by a sequence of revision steps which, starting from Msom0,
gives rise to a sequence of models Msom0, Msom1, …, Msomr (with Msom = Msomr).
At each step the knowledge base is not represented explicitly, but the model Msomj
of the knowledge base at step j is used to determine the model at step j + 1 as the
result of revision (Msomj+1 = Msomj ∗ Cij(xij)). The knowledge base K (the set of
all the strict and defeasible inclusions satisfied in Msom) can then be regarded as the
knowledge base obtained from K0 through a sequence of revision steps, i.e., K =
K0 ∗ Ci1(xi1) ∗ … ∗ Cir(xir). In fact, from any state of the SOM we can construct
a corresponding model, which determines a knowledge base: the set of (strict and
defeasible) inclusions satisfied in that model. For future work, it would be interesting
to study the properties of this notion of revision and compare them with the properties of the
notions of iterated belief revision studied in the literature [
          <xref ref-type="bibr" rid="ref14 ref21 ref8 ref9">9, 14, 21, 8</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>
        The concept-wise multipreference semantics has recently been introduced for dealing
with typicality in description logics [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], based on the idea that reasoning about
exceptions in ontologies requires taking into account preferences with respect to different
concepts and integrating them into a preferential semantics which allows a standard,
KLM style, interpretation of defeasible inclusions.
      </p>
      <p>In this paper, we have explored the relationships between the concept-wise
multipreference semantics and self-organising maps. On the one hand, we have seen that
self-organising maps can be given a logical semantics in terms of KLM-style preferential
interpretations. The model can be used, through model checking, to learn or to validate conditional knowledge
from the empirical data used in the category generalization process. The learning process in the self-organising map can be regarded as an iterated
belief revision process. On the other hand, the plausibility of the concept-wise
multipreference semantics is supported by the fact that self-organising maps are considered
psychologically and biologically plausible neural network models.</p>
      <p>
        Much work has been devoted, in recent years, to the combination of neural networks
and symbolic reasoning. Let us mention Neural-Symbolic Computing [
        <xref ref-type="bibr" rid="ref10 ref11">11, 10</xref>
        ], Logic
Tensor Networks [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], and the approaches based on computational logic and logic
programming: DeepProbLog [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], a probabilistic logic programming language which
incorporates deep learning by means of neural predicates, and NeurASP [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ], a simple
extension of answer set programs that embraces neural networks.
      </p>
      <p>The characterization of self-organising maps in terms of multipreference
interpretations, besides providing a logical interpretation of SOMs, which may be of interest from
the perspective of explainable AI, can potentially be exploited, as described above, as a basis
for an integrated use of self-organising maps and defeasible knowledge bases.</p>
      <p>Acknowledgement: This research is partially supported by INDAM-GNCS Projects
2019 and 2020.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brandt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          .
          <article-title>Pushing the EL envelope</article-title>
          . In L.P. Kaelbling and A. Saffiotti, editors,
          <source>Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI</source>
          <year>2005</year>
          ), pages
          <fpage>364</fpage>
          -
          <lpage>369</lpage>
          , Edinburgh, Scotland, UK,
          <year>August 2005</year>
          . Professional Book Center.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>S.</given-names>
            <surname>Benferhat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dubois</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Prade</surname>
          </string-name>
          .
          <article-title>Possibilistic logic: From nonmonotonicity to logic programming</article-title>
          .
          <source>In Symbolic and Quantitative Approaches to Reasoning</source>
          and Uncertainty, European Conference, ECSQARU'93,
          <string-name>
            <surname>Granada</surname>
          </string-name>
          , Spain, November 8-10,
          <year>1993</year>
          , Proceedings, pages
          <fpage>17</fpage>
          -
          <lpage>24</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Bonatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Faella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Petrova</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Sauro</surname>
          </string-name>
          .
          <article-title>A new semantics for overriding in description logics</article-title>
          .
          <source>Artif. Intell.</source>
          ,
          <volume>222</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>48</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Gerhard</given-names>
            <surname>Brewka</surname>
          </string-name>
          .
          <article-title>A rank based description language for qualitative preferences</article-title>
          .
          <source>In Proceedings of the 16th Eureopean Conference on Artificial Intelligence, ECAI'</source>
          <year>2004</year>
          , Valencia, Spain,
          <source>August 22-27</source>
          ,
          <year>2004</year>
          , pages
          <fpage>303</fpage>
          -
          <lpage>307</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>K.</given-names>
            <surname>Britz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heidema</surname>
          </string-name>
          , and T. Meyer.
          <article-title>Semantic preferential subsumption</article-title>
          . In G. Brewka and J. Lang, editors,
          <source>Principles of Knowledge Representation and Reasoning: Proceedings of the 11th International Conference (KR</source>
          <year>2008</year>
          ), pages
          <fpage>476</fpage>
          -
          <lpage>484</lpage>
          , Sidney, Australia,
          <year>September 2008</year>
          . AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          , T. Meyer,
          <string-name>
            <given-names>I. J.</given-names>
            <surname>Varzinczak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Moodley</surname>
          </string-name>
          .
          <article-title>Nonmonotonic Reasoning in Description Logics: Rational Closure for the ABox</article-title>
          .
          <source>In 26th International Workshop on Description Logics (DL</source>
          <year>2013</year>
          ), volume
          <volume>1014</volume>
          <source>of CEUR Workshop Proceedings</source>
          , pages
          <fpage>600</fpage>
          -
          <lpage>615</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          and
          <string-name>
            <given-names>U.</given-names>
            <surname>Straccia</surname>
          </string-name>
          .
          <article-title>Rational Closure for Defeasible Description Logics</article-title>
          . In T. Janhunen and I. Niemela¨, editors,
          <source>Proc. 12th European Conf. on Logics in Artificial Intelligence (JELIA</source>
          <year>2010</year>
          ), volume
          <volume>6341</volume>
          <source>of LNCS</source>
          , pages
          <fpage>77</fpage>
          -
          <lpage>90</lpage>
          , Helsinki, Finland,
          <year>September 2010</year>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>J.</given-names>
            <surname>Chandler</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Booth</surname>
          </string-name>
          .
          <article-title>Revision by conditionals: From hook to arrow</article-title>
          .
          <source>In Proc. KR</source>
          <year>2020</year>
          ,
          <article-title>17th International Conference on Principles of Knowledge Representation and Reasoning</article-title>
          . AAAI Press,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>A.</given-names>
            <surname>Darwiche</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          .
          <article-title>On the logic of iterated belief revision</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>89</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>29</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>A. S.</given-names>
            <surname>d'Avila Garcez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Lamb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Spranger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Son N.</given-names>
            <surname>Tran</surname>
          </string-name>
          .
          <article-title>Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning</article-title>
          .
          <source>FLAP</source>
          ,
          <volume>6</volume>
          (
          <issue>4</issue>
          ):
          <fpage>611</fpage>
          -
          <lpage>632</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>A. S.</given-names>
            <surname>d'Avila Garcez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Lamb</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Gabbay</surname>
          </string-name>
          .
          <source>Neural-Symbolic Cognitive Reasoning. Cognitive Technologies</source>
          . Springer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Delgrande</surname>
          </string-name>
          .
          <article-title>A first-order conditional logic for prototypical properties</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>33</volume>
          (
          <issue>1</issue>
          ):
          <fpage>105</fpage>
          -
          <lpage>130</lpage>
          ,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Theseider Dupré</surname>
          </string-name>
          .
          <article-title>An ASP approach for reasoning in a concept-aware multipreferential lightweight DL</article-title>
          .
          <source>Theory and Practice of Logic Programming, TPLP</source>
          ,
          <volume>20</volume>
          (
          <issue>5</issue>
          ):
          <fpage>751</fpage>
          -
          <lpage>766</lpage>
          ,
          <year>2020</year>
          . https://doi.org/10.1017/S1471068420000381.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          .
          <article-title>Iterated Belief Revision and Conditional Logic</article-title>
          .
          <source>Studia Logica</source>
          ,
          <volume>70</volume>
          :
          <fpage>23</fpage>
          -
          <lpage>47</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          .
          <article-title>Preferential Description Logics</article-title>
          . In Nachum Dershowitz and Andrei Voronkov, editors,
          <source>Proceedings of LPAR</source>
          <year>2007</year>
          (
          <article-title>14th Conference on Logic for Programming</article-title>
          ,
          <source>Artificial Intelligence, and Reasoning)</source>
          , volume
          <volume>4790</volume>
          <source>of LNAI</source>
          , pages
          <fpage>257</fpage>
          -
          <lpage>272</lpage>
          , Yerevan, Armenia,
          <year>October 2007</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          .
          <article-title>Reasoning about typicality in low complexity DLs: the logics EL⊥Tmin and DL-LitecTmin</article-title>
          .
          <source>In Proc. 22nd Int. Joint Conf. on Artificial Intelligence (IJCAI</source>
          <year>2011</year>
          ), pages
          <fpage>894</fpage>
          -
          <lpage>899</lpage>
          , Barcelona,
          <year>July 2011</year>
          . Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          .
          <article-title>Semantic characterization of rational closure: From propositional logic to description logics</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>226</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          .
          <article-title>Minimal Model Semantics and Rational Closure in Description Logics</article-title>
          .
          <source>In 26th International Workshop on Description Logics (DL</source>
          <year>2013</year>
          ), volume
          <volume>1014</volume>
          , pages
          <fpage>168</fpage>
          -
          <lpage>180</lpage>
          , 7
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Theseider Dupré</surname>
          </string-name>
          .
          <article-title>ASP for minimal entailment in a rational extension of SROEL</article-title>
          .
          <source>TPLP</source>
          ,
          <volume>16</volume>
          (
          <issue>5-6</issue>
          ):
          <fpage>738</fpage>
          -
          <lpage>754</lpage>
          ,
          <year>2016</year>
          . DOI: 10.1017/S1471068416000399.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Plunkett</surname>
          </string-name>
          .
          <article-title>Grounding bayesian accounts of numerosity and variability effects in a similarity-based framework: the case of self-organising maps</article-title>
          .
          <source>Journal of Cognitive Psychology</source>
          ,
          <volume>31</volume>
          (
          <issue>5-6</issue>
          ),
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21. G.
          <string-name>
            <surname>Kern-Isberner</surname>
          </string-name>
          .
          <article-title>A thorough axiomatization of a principle of conditional preservation in belief revision</article-title>
          . Ann. Math. Artif. Intell.,
          <volume>40</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>127</fpage>
          -
          <lpage>164</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22. T. Kohonen,
          <string-name>
            <given-names>M.R.</given-names>
            <surname>Schroeder</surname>
          </string-name>
          , and T.S. Huang, editors.
          <source>Self-Organizing Maps, Third Edition</source>
          . Springer Series in Information Sciences. Springer,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>S.</given-names>
            <surname>Kraus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Magidor</surname>
          </string-name>
          .
          <article-title>Nonmonotonic reasoning, preferential models and cumulative logics</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>44</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>167</fpage>
          -
          <lpage>207</lpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <given-names>D.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Magidor</surname>
          </string-name>
          .
          <article-title>What does a conditional knowledge base entail?</article-title>
          <source>Artificial Intelligence</source>
          ,
          <volume>55</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>60</lpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          .
          <source>Counterfactuals. Basil Blackwell Ltd</source>
          ,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <given-names>R.</given-names>
            <surname>Manhaeve</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dumancic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kimmig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Demeester</surname>
          </string-name>
          , and L. De Raedt.
          <article-title>DeepProbLog: neural probabilistic logic programming</article-title>
          .
          <source>In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems</source>
          <year>2018</year>
          , NeurIPS
          <year>2018</year>
          ,
          <fpage>3</fpage>
          -8
          <source>December</source>
          <year>2018</year>
          , Montre´al, Canada, pages
          <fpage>3753</fpage>
          -
          <lpage>3763</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <given-names>R.</given-names>
            <surname>Miikkulainen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bednar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Sirosh</surname>
          </string-name>
          .
          <source>Computational maps in the visual cortex</source>
          . Springer,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <given-names>D.</given-names>
            <surname>Nute</surname>
          </string-name>
          .
          <source>Topics in conditional logic</source>
          . Reidel, Dordrecht,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          .
          <article-title>System Z: A Natural Ordering of Defaults with Tractable Applications to Nonmonotonic Reasoning</article-title>
          . In R. Parikh, editor,
          <source>TARK (3rd Conference on Theoretical Aspects of Reasoning about Knowledge)</source>
          , pages
          <fpage>121</fpage>
          -
          <lpage>135</lpage>
          , Pacific Grove, CA, USA,
          <year>1990</year>
          . Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <given-names>M.</given-names>
            <surname>Pensel</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Turhan</surname>
          </string-name>
          .
          <article-title>Reasoning in the defeasible description logic EL⊥ - computing standard inferences under rational and relevant semantics</article-title>
          .
          <source>Int. J. Approx. Reasoning</source>
          ,
          <volume>103</volume>
          :
          <fpage>28</fpage>
          -
          <lpage>70</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. S.</given-names>
            <surname>d'Avila Garcez</surname>
          </string-name>
          .
          <article-title>Learning and reasoning with logic tensor networks</article-title>
          . In
          <source>AI*IA 2016: Advances in Artificial Intelligence - XVth Int. Conf. of the Italian Association for Artificial Intelligence, Genova, Italy, November 29 - December 1, 2016, Proceedings</source>
          , volume
          <volume>10037</volume>
          of LNCS, pages
          <fpage>334</fpage>
          -
          <lpage>348</lpage>
          . Springer,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Tenenbaum</surname>
          </string-name>
          and
          <string-name>
            <given-names>T. L.</given-names>
            <surname>Griffiths</surname>
          </string-name>
          .
          <article-title>Generalization, similarity, and Bayesian inference</article-title>
          .
          <source>Behavioral and Brain Sciences</source>
          ,
          <volume>24</volume>
          :
          <fpage>629</fpage>
          -
          <lpage>641</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ishay</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>NeurASP: Embracing neural networks into answer set programming</article-title>
          . In C. Bessiere, editor,
          <source>Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI</source>
          <year>2020</year>
          , pages
          <fpage>1755</fpage>
          -
          <lpage>1762</lpage>
          . ijcai.org,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>