<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a Conditional Interpretation of Self Organizing Maps</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Laura Giordano</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentina Gliozzi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Theseider Dupré</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Logic, Language and Cognition &amp; Dipartimento di Informatica, Università di Torino</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>DISIT - Università del Piemonte Orientale</institution>
          ,
          <addr-line>Alessandria</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we aim at establishing a link between the preferential semantics for conditionals and self-organising maps (SOMs). We show that a concept-wise multipreference semantics, recently proposed for defeasible description logics, which takes into account preferences with respect to different concepts, can be used to provide a logical interpretation of SOMs.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Preferential approaches [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ] to common sense reasoning, having their roots in
conditional logics [
        <xref ref-type="bibr" rid="ref17">17, 19</xref>
        ], have recently been extended to description logics to deal with
inheritance with exceptions in ontologies, allowing for non-strict forms of inclusion,
called typicality or defeasible inclusions (namely, conditionals), with different
preferential semantics [
        <xref ref-type="bibr" rid="ref10 ref3">10, 3</xref>
        ] and closure constructions [
        <xref ref-type="bibr" rid="ref12 ref4 ref5">5, 4, 12, 20</xref>
        ].
      </p>
      <p>
        In this paper we study the relationships between preferential semantics for
conditionals and self-organising maps (SOMs) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], psychologically and biologically
plausible neural network models that can learn after limited exposure to positive category
examples, without any need for contrastive information. Self-organising maps have been
proposed as possible candidates to explain the psychological mechanisms underlying
category generalisation.
      </p>
      <p>
        We show that a “concept-wise” multipreference semantics [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], recently proposed for
a lightweight description logic of the EL⊥ family, can be used to provide a logical
semantics of SOMs. The result of the process of category generalization in self-organising
maps can be regarded as a multipreference model in which different preference relations
are associated to different concepts (the learned categories). The combination of these
preferences into a global preference, following the approach in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], defines a standard
KLM preferential interpretation. Such an interpretation can be used to learn or
validate conditional knowledge from the empirical data used in the category generalization
process. The evaluation of conditionals can be done by model checking, using the
information recorded in the SOM. We believe that the proposed semantic interpretation of
SOMs can be relevant in the context of explainable AI.
      </p>
      <p>
        These results have been first presented at CILC 2020 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>A concept-wise multi-preference semantics</title>
      <p>
        In this section we briefly describe an extension of EL⊥ with typicality inclusions,
defined along the lines of the extension of description logics with typicality [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ], and
its multi-preference semantics [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        We consider the description logic EL⊥ of the EL family [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Let NC be a set of
concept names, NR a set of role names and NI a set of individual names. The set of EL⊥
concepts is defined as follows: C := A | ⊤ | ⊥ | C ⊓ C | ∃r.C, where
A ∈ NC and r ∈ NR. Observe that union, complement and universal restriction are not
EL⊥ constructs. A knowledge base (KB) K is a pair (T , A), where T is a TBox and A
is an ABox. The TBox T is a set of concept inclusions (or subsumptions) of the form
C ⊑ D, where C, D are concepts. The ABox A is a set of assertions of the form C(a)
and r(a, b) where C is a concept, r ∈ NR, and a, b ∈ NI .
      </p>
      <p>
        In addition to standard EL⊥ inclusions C ⊑ D (called strict inclusions in the
following), the TBox T will also contain typicality inclusions of the form T(C) ⊑ D,
where C and D are EL⊥ concepts. A typicality inclusion T(C) ⊑ D means that
“typical C’s are D’s” or “normally C’s are D’s” and corresponds to a conditional implication
C |∼ D in Kraus, Lehmann and Magidor’s (KLM) preferential approach [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ]. Such
inclusions are defeasible, i.e., admit exceptions, while strict inclusions must be satisfied
by all domain elements.
      </p>
      <p>
        Let C = {C1, . . . , Ck} be a set of distinguished EL⊥ concepts. For each concept
Ci ∈ C, we introduce a modular preference relation &lt;Ci which describes the preference
among domain elements with respect to Ci. Each preference relation &lt;Ci has the same
properties as preference relations in KLM-style ranked interpretations [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], i.e., it is
a modular and well-founded strict partial order (an irreflexive and transitive relation),
where: &lt;Ci is well-founded if, for all S ⊆ Δ, if S ≠ ∅, then min&lt;Ci (S) ≠ ∅; and &lt;Ci
is modular if, for all x, y, z ∈ Δ, if x &lt;Ci y then (x &lt;Ci z or z &lt;Ci y).
Definition 1 (Multipreference interpretation). A multipreference interpretation is a
tuple M = ⟨Δ, &lt;C1 , . . . , &lt;Ck , ·I ⟩, where: (a) Δ is a non-empty domain;
(b) &lt;Ci is an irreflexive, transitive, well-founded and modular relation over Δ;
(c) ·I is an interpretation function, as in an EL⊥ interpretation, that maps each concept
name C ∈ NC to a set CI ⊆ Δ, each role name r ∈ NR to a binary relation
rI ⊆ Δ × Δ, and each individual name a ∈ NI to an element aI ∈ Δ. It is
extended to complex concepts as follows: ⊤I = Δ, ⊥I = ∅, (C ⊓ D)I = CI ∩ DI
and (∃r.C)I = {x ∈ Δ | ∃y.(x, y) ∈ rI and y ∈ CI }.
      </p>
      <p>The preference relation &lt;Ci allows the set of prototypical Ci-elements to be defined
as the Ci-elements which are minimal with respect to &lt;Ci , i.e., min&lt;Ci (CiI ). As a
consequence, the multipreference interpretation above is able to single out the typical
Ci-elements, for all distinguished concepts Ci ∈ C.</p>
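On a finite domain, the &lt;Ci-minimal (most typical) Ci-elements can be computed directly. A minimal sketch in Python (not from the paper), under the assumption that the modular preference &lt;Ci is encoded by a rank function, with lower rank meaning more typical; all names are illustrative:

```python
# Illustrative sketch: computing min_<Ci(S), the set of <Ci-minimal elements
# of a finite set S, for a modular preference encoded by a rank function
# (lower rank = more typical). Names are assumptions, not the paper's code.

def minimal_elements(s, rank):
    if not s:
        return set()
    best = min(rank[e] for e in s)
    return {e for e in s if rank[e] == best}

# Toy example: two birds, one more typical than the other.
rank_bird = {"tweety": 0, "opus": 1}
assert minimal_elements({"tweety", "opus"}, rank_bird) == {"tweety"}
```

Encoding the preference by a rank function is possible precisely because each &lt;Ci is modular and well-founded, so it stratifies the domain into levels.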
      <p>
        The multipreference structures above are at the basis of the semantics for ranked
EL⊥ knowledge bases [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which have been inspired by Brewka’s framework of basic
preference descriptions [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. While we refer to [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] for the construction of the
preference relations &lt;Ci from a ranked knowledge base K, in the following we briefly
recall the notion of concept-wise multi-preference interpretation, which can be obtained
by combining the preference relations &lt;Ci into a global preference relation &lt;. This
is needed for reasoning about the typicality of arbitrary EL⊥ concepts C, which do
not belong to the set of distinguished concepts C. For instance, we may want to
verify whether typical employed students are young, or whether they have a boss, starting
from a ranked KB containing inclusions T(Stud ) ⊑ Young, T(Emp) ⊑ Has Boss ,
T(Emp) ⊑ NonYoung, and Young ⊓ NonYoung ⊑ ⊥. To answer the questions above
both preference relations &lt;Emp and &lt;Stud are relevant, and they might be conflicting
as, for instance, Tom is more typical than Bob as a student (tom &lt;Stud bob), but more
exceptional as an employee (bob &lt;Emp tom). By combining the preference relations
&lt;Ci into a single global preference relation &lt; we can exploit the global preference &lt;
for interpreting the typicality operator, which may be applied to arbitrary concepts, and
verify, for instance, whether T(Stud ⊓ Emp) ⊑ Has Boss .
      </p>
      <p>A natural definition of the global preference &lt; exploits the Pareto
combination of the relations &lt;C1 , . . . , &lt;Ck , as follows:
x &lt; y iff (i) x &lt;Ci y, for some Ci ∈ C, and (ii) x ≤Cj y, for all Cj ∈ C,</p>
      <p>
        where ≤Ci is the non-strict preference relation associated with &lt;Ci (≤Ci is a total
preorder). A slightly more sophisticated notion of preference combination, which exploits
a modified Pareto condition taking into account the specificity relation among concepts
(such as, for instance, the fact that concept PhdStudent is more specific than concept
Student), has been considered for ranked knowledge bases [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
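The Pareto combination above translates into a short check. A hedged sketch (names are assumptions), again encoding each modular preference &lt;Ci by a rank function:

```python
# Illustrative sketch of the Pareto combination of concept-wise preferences
# into the global preference <. Each <Ci is encoded by a rank function
# (lower = more preferred); all names are assumptions, not the paper's code.

def globally_preferred(x, y, ranks):
    """x < y iff (i) x <Ci y for some Ci, and (ii) x <=Cj y for all Cj."""
    return (any(r[x] < r[y] for r in ranks)
            and all(r[x] <= r[y] for r in ranks))

# Tom is more typical than Bob as a student, but more exceptional as an
# employee, so the two are incomparable in the global preference:
rank_stud = {"tom": 0, "bob": 1}
rank_emp = {"tom": 1, "bob": 0}
assert not globally_preferred("tom", "bob", [rank_stud, rank_emp])
assert not globally_preferred("bob", "tom", [rank_stud, rank_emp])
```

The example mirrors the Tom/Bob situation in the text: conflicting concept-wise preferences leave the two elements incomparable under &lt;.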
      <p>The addition of the global preference relation allows for defining a notion of
concept-wise multipreference interpretation M = ⟨Δ, &lt;C1 , . . . , &lt;Ck , &lt;, ·I ⟩, where the typicality
concept T(C) is interpreted as the set of the &lt;-minimal C-elements, i.e., (T(C))I =
min&lt;(CI ), where min&lt;(S) = {u : u ∈ S and ∄z ∈ S s.t. z &lt; u}.</p>
      <p>
        The notions of cwm-model of a ranked EL⊥ knowledge base K and of
cwm-entailment can be defined in the natural way. In particular, cwm-entailment has been
proved to satisfy the KLM postulates of a preferential consequence relation [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>Self-organising maps</title>
      <p>
        Self-organising maps (SOMs, introduced by Kohonen [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]) are particularly plausible
neural network models that learn in a human-like manner. In this section we briefly
describe the architecture of SOMs and report Gliozzi and Plunkett’s similarity-based
account of category generalization based on SOMs [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Roughly speaking, in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] the
authors judge a new stimulus as belonging to a category by comparing the distance of
the stimulus from the category representation to the precision of the category
representation.
      </p>
      <p>
        SOMs consist of a set of neurons, or units, spatially organized in a grid [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Each
map unit u is associated with a weight vector wu of the same dimensionality as the input
vectors. At the beginning of training, all weight vectors are initialized to random values,
outside the range of values of the input stimuli. During training, the input elements are
sequentially presented to all neurons of the map. After each presentation of an input x,
the best-matching unit (BMUx) is selected: this is the unit i whose weight vector wi is
closest to the stimulus x (i.e., i = arg minj ‖x − wj‖).
      </p>
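Best-matching-unit selection is a plain nearest-neighbour computation. A minimal sketch, assuming the map's units are stored as rows of a NumPy weight matrix (the grid layout is not needed for this step):

```python
# Minimal sketch of BMU selection: the unit whose weight vector is closest
# (in Euclidean distance) to the stimulus, i = argmin_j ||x - w_j||.
# Storing units as matrix rows is an assumption for illustration.
import numpy as np

def bmu_index(x, weights):
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

weights = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
assert bmu_index(np.array([0.9, 1.1]), weights) == 1
```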
      <p>The weights of the best matching unit and of its surrounding units are updated in
order to maximize the chances that the same unit (or its surrounding units) will be
selected as the best matching unit for the same stimulus or for similar stimuli on
subsequent presentations. In particular, the update reduces the distance between the best matching
unit’s weights (and its surrounding neurons’ weights) and the incoming input. The
learning process is incremental: after the presentation of each input, the map’s representation
of the input (in particular the representation of its best-matching unit) is updated in
order to take into account the new incoming stimulus. At the end of the whole process,
the SOM has learned to organize the stimuli in a topologically significant way: similar
inputs (with respect to Euclidean distance) are mapped to nearby areas of the map,
whereas inputs which are far apart from each other are mapped to distant areas of the
map.</p>
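The update step just described can be sketched as follows. This is a hedged illustration of the standard Kohonen rule, not code from the paper; the learning rate and the Gaussian neighbourhood width are illustrative choices:

```python
# Hedged sketch of one SOM weight update: the BMU and, with a Gaussian
# falloff on the grid, its neighbours are moved towards the stimulus.
# lr and sigma are illustrative; grid_pos gives each unit's grid coordinates.
import numpy as np

def som_step(x, weights, grid_pos, lr=0.5, sigma=1.0):
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    d = np.linalg.norm(grid_pos - grid_pos[bmu], axis=1)  # grid distance from BMU
    h = np.exp(-d ** 2 / (2 * sigma ** 2))                # neighbourhood function
    return weights + lr * h[:, None] * (x - weights)

# After one step, the BMU's weights are strictly closer to the stimulus.
w = np.array([[0.0, 0.0], [4.0, 4.0]])
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
x = np.array([1.0, 1.0])
w2 = som_step(x, w, pos)
assert np.linalg.norm(w2[0] - x) < np.linalg.norm(w[0] - x)
```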
      <p>
        Once the SOM has learned to categorize, to assess category generalization, Gliozzi
and Plunkett [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] define the map’s disposition to consider a new stimulus y as a member
of a known category C as a function of the distance of y from the map’s representation
of C. They adopt a minimalist notion of the map’s category representation: this
is the ensemble of best-matching units corresponding to the known instances of the
category. They use BMUC to refer to the map’s representation of category C and define
category generalization as depending on the distance of the new stimulus y with respect
to the category representation compared to the maximal distance from that
representation of all known instances of the category. This is captured by the following notion of
relative distance (rd for short) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]:
rd(y, C) = min ‖y − BMUC‖ / maxx∈C ‖x − BMUx‖     (1)
      </p>
      <p>where min ‖y − BMUC‖ is the (minimal) Euclidean distance between y and C’s
category representation, and maxx∈C ‖x − BMUx‖ expresses the precision of the category
representation, being the (maximal) Euclidean distance between any known member of
the category and the category representation.</p>
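Equation (1) can be sketched directly in code. A hedged illustration, assuming the category representation BMUC is given as a list of best-matching-unit weight vectors and each known instance is paired with its own BMU (all names are assumptions):

```python
# Sketch of the relative distance rd(y, C) of equation (1):
# numerator  = minimal distance of y from the category representation BMU_C,
# denominator = precision, the maximal distance of a known category member
# from its own best-matching unit. Names are illustrative.
import numpy as np

def relative_distance(y, bmu_c, instances_with_bmus):
    y = np.asarray(y)
    num = min(np.linalg.norm(y - np.asarray(u)) for u in bmu_c)
    den = max(np.linalg.norm(np.asarray(x) - np.asarray(b))
              for x, b in instances_with_bmus)
    return num / den

# Toy category: two known instances, each with its best-matching unit.
instances = [((0.0, 0.0), (0.0, 0.1)), ((2.0, 0.0), (2.0, 0.1))]
bmu_c = [b for _, b in instances]
assert relative_distance((0.0, 0.1), bmu_c, instances) == 0.0
```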
      <p>
        By judging a new stimulus as belonging to a category by comparing the distance of
the stimulus from the category representation to the precision of the category
representation, Gliozzi and Plunkett demonstrate [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] that the Numerosity and Variability effects
of category generalization, described by Tenenbaum and Griffiths [22], and usually
explained with Bayesian tools, can be accommodated within a simple and psychologically
plausible similarity-based account, contrary to what was previously maintained. In
the next section, we show that their notion of relative distance can also be used as a
basis for a logical semantics for SOMs.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Relating self-organising Maps and multi-preference models</title>
      <p>Once the SOM has learned to categorize, we can regard the result of the categorization
as a multipreference interpretation. Let X be the set of input stimuli from different
categories, C1, . . . , Ck, which have been considered during the learning process. For
each category Ci, we let BMUCi be the ensemble of best-matching units corresponding
to the input stimuli of category Ci, i.e., BMUCi = {BMUx | x ∈ X and x ∈ Ci}. We
regard the learned categories C1, . . . , Ck as being the concept names (atomic concepts)
in the description logic and we let them constitute our set of distinguished concepts
C = {C1, . . . , Ck}.</p>
      <p>To construct a multi-preference interpretation, first we fix the domain Δs to be
the space of all possible stimuli; then, for each category (concept) Ci, we define a
preference relation &lt;Ci , exploiting the notion of relative distance of a stimulus y from
the map’s representation of Ci. Finally, we define the interpretation of concepts.</p>
      <p>Let Δs be the set of all the possible stimuli, including all input stimuli (X ⊆ Δs)
as well as the best matching units of input stimuli (i.e., {BMUx | x ∈ X} ⊆ Δs). For
simplicity, we will assume the space of input stimuli to be finite.</p>
      <p>Once the SOM has learned to categorize, the notion of relative distance rd(x, Ci ) of
a stimulus x from a category Ci can be used to build a binary preference relation &lt;Ci
among the stimuli in Δs w.r.t. category Ci as follows: for all x, x′ ∈ Δs,
x &lt;Ci x′ iff rd(x, Ci) &lt; rd(x′, Ci)     (2)
Each preference relation &lt;Ci is a strict partial order on Δs. The relation &lt;Ci
is also well-founded, as we have assumed Δs to be finite.</p>
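Equation (2) translates into a one-line comparison. A minimal sketch, under the assumption that the rd values have been precomputed over the finite set of stimuli:

```python
# Sketch of the preference (2): x <Ci x' iff rd(x, Ci) < rd(x', Ci).
# rd maps (stimulus, concept) pairs to precomputed relative distances.

def preferred(x, x_prime, ci, rd):
    return rd[(x, ci)] < rd[(x_prime, ci)]

# Irreflexivity, transitivity and modularity are inherited from the
# strict order on the reals.
rd = {("x", "C1"): 0.2, ("y", "C1"): 0.9}
assert preferred("x", "y", "C1", rd)
assert not preferred("y", "x", "C1", rd)
```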
      <p>We exploit this notion of preference to define a concept-wise multipreference
interpretation associated with the SOM. We restrict the DL language to the fragment of EL⊥
(plus typicality) not admitting roles.</p>
      <sec id="sec-4-1">
        <title>Definition 2 (multipreference-model of a SOM)</title>
        <p>The multipreference-model of the SOM is a multipreference interpretation Ms = ⟨Δs, &lt;C1 , . . . , &lt;Ck , ·I ⟩ such that:
(i) Δs is the set of all the possible stimuli, as introduced above;
(ii) for each Ci ∈ C, &lt;Ci is the preference relation defined by equivalence (2).
(iii) the interpretation function ·I is defined for concept names (i.e. categories) Ci as:</p>
        <p>CiI = {y ∈ Δs | rd(y, Ci) ≤ rdmax,Ci }
where rdmax,Ci is the maximal relative distance of an input stimulus x ∈ Ci from
category Ci, that is, rdmax,Ci = maxx∈Ci {rd(x, Ci )}. The interpretation function
·I is extended to complex concepts in the fragment of EL⊥ without roles according
to Definition 1.</p>
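Item (iii) of Definition 2 amounts to a threshold test. A hedged sketch (names are illustrative, rd values assumed precomputed): a stimulus is a Ci-element when its relative distance from Ci does not exceed rdmax,Ci, the maximal relative distance of a known Ci-instance.

```python
# Sketch of Definition 2, item (iii): the extension of Ci collects the
# stimuli whose relative distance from Ci is at most rd_max,Ci.
# rd maps stimuli to precomputed relative distances from Ci.

def concept_extension(domain, rd, known_instances):
    rd_max = max(rd[x] for x in known_instances)
    return {y for y in domain if rd[y] <= rd_max}

# Toy domain: two known Ci-instances, one near new stimulus, one far one.
rd = {"x1": 0.4, "x2": 1.0, "y_new": 0.7, "y_far": 1.3}
assert concept_extension(rd.keys(), rd, ["x1", "x2"]) == {"x1", "x2", "y_new"}
```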
        <p>Informally, we interpret as Ci-elements those stimuli whose relative distance from
category Ci is not larger than the relative distance of any input exemplar belonging to
category Ci. Given &lt;Ci , we can identify the most typical Ci-elements w.r.t. &lt;Ci as the
Ci-elements whose relative distance from category Ci is minimal, i.e., the elements in
min&lt;Ci (CiI ). Observe that the best matching unit BMUx of an input stimulus x ∈ Ci
is an element of Δs. As rd(y, Ci) = 0 for y = BMUx, it holds that BMUCi ⊆ min&lt;Ci (CiI ).</p>
      </sec>
      <sec id="sec-4-2">
        <title>Evaluation of concept inclusions by model checking</title>
        <p>We have defined a multipreference interpretation Ms where, in the domain Δs of the
possible stimuli, we are able to identify, for each category Ci, the Ci-elements as well
as the most typical Ci-elements wrt &lt;Ci. We can exploit Ms to verify which inclusions
are satisfied by the SOM by model checking, i.e., by checking the satisfiability of
inclusions over model Ms. This can be done both for strict concept inclusions of the form
Ci ⊑ Cj and for defeasible inclusions of the form T(Ci) ⊑ Cj , where Ci and Cj are
concept names (i.e., categories), by exploiting a notion of maximal relative distance of
BMUCi from Cj , defined as rd(BMUCi , Cj ) = maxx∈Ci {rd(BMUx, Cj )}.</p>
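Checking a defeasible inclusion on Ms is then a finite verification. An illustrative sketch (the encodings and names are assumptions, not the paper's procedure): T(Ci) ⊑ Cj holds in Ms iff every &lt;Ci-minimal Ci-element is also a Cj-element.

```python
# Sketch of model checking T(Ci) ⊑ Cj on Ms:
# ext maps concept names to their extensions, rd maps (element, concept)
# pairs to relative distances; the typical Ci-elements are those at minimal
# relative distance from Ci. All names are illustrative.

def typicality_inclusion_holds(ci, cj, ext, rd):
    best = min(rd[(x, ci)] for x in ext[ci])
    typical_ci = {x for x in ext[ci] if rd[(x, ci)] == best}
    return typical_ci <= ext[cj]

ext = {"Ci": {"a", "b"}, "Cj": {"a"}, "Ck": set()}
rd = {("a", "Ci"): 0.0, ("b", "Ci"): 0.5}
assert typicality_inclusion_holds("Ci", "Cj", ext, rd)      # T(Ci) ⊑ Cj holds
assert not typicality_inclusion_holds("Ci", "Ck", ext, rd)  # T(Ci) ⊑ Ck fails
```

A strict inclusion Ci ⊑ Cj would instead require ext["Ci"] &lt;= ext["Cj"] for all Ci-elements, not only the typical ones.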
        <p>
          While we refer to [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] for details, let us observe that checking the satisfiability of
strict or defeasible inclusions on the SOM may be non-trivial, depending on the number
of input stimuli that have been considered in the learning phase, although from a logical
point of view, this is just model checking. Gliozzi and Plunkett have considered
self-organising maps that are able to learn from a limited number of input stimuli, although
this is not generally true for all self-organising maps [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
        <p>Note also that the multipreference interpretation Ms introduced in Definition 2
allows us to determine the set of Ci-elements for all learned categories Ci and to define the
most typical Ci-elements, exploiting the preference relation &lt;Ci . However, we are not
able to define the most typical Ci ⊓ Cj -elements using the single preferences alone.
Starting from Ms, we can construct a concept-wise multipreference interpretation Msom
that combines the preference relations in Ms into a global preference relation &lt;, and
provides an interpretation for all typicality concepts such as, for instance, T(Ci ⊓ Cj ⊓ Ch).
The interpretation Msom can be constructed from Ms according to the definition of
the global preference in Section 2.</p>
        <p>We have focused on the multipreference interpretation of a self-organising map after
the learning phase. However, the state of the SOM during the learning phase can as
well be represented as a multipreference model (in the same way). During training, the
current state of the SOM corresponds to a model representing the beliefs about the input
stimuli considered so far (beliefs concerning the category of the stimuli). We can then
regard the category generalization process as a model building process and, in a way, as
a belief revision process.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>We have explored the relationships between a concept-wise multipreference semantics
and self-organising maps, showing that conditional logics can be used to provide a
logical explanation to self-organising maps. In particular, self-organising maps can be given
a logical semantics in terms of KLM-style preferential interpretations. Moreover, the
model can be used to learn or to validate conditional knowledge from the empirical data
used in the category generalization process, based on model checking.</p>
      <p>
        Much work has been devoted, in recent years, to the combination of neural networks
and symbolic reasoning. Let us mention Neural Symbolic Computing [
        <xref ref-type="bibr" rid="ref6 ref7">7, 6</xref>
        ], Logic
Tensor Networks [21], and, among the approaches based on computational logic and logic
programming, DeepProbLog [18], a probabilistic logic programming language which
incorporates deep learning by means of neural predicates, and NeurASP [23], a simple
extension of answer set programs that embraces neural networks.
      </p>
      <p>Acknowledgement: This research is partially supported by INDAM-GNCS Projects
2019 and 2020.
18. R. Manhaeve, S. Dumancic, A. Kimmig, T. Demeester, and L. De Raedt. DeepProbLog:
Neural probabilistic logic programming. In Advances in Neural Information Processing
Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS
2018, 3-8 December 2018, Montréal, Canada, pages 3753–3763, 2018.
19. D. Nute. Topics in conditional logic. Reidel, Dordrecht, 1980.
20. M. Pensel and A. Turhan. Reasoning in the defeasible description logic EL⊥ - computing
standard inferences under rational and relevant semantics. Int. J. Approx. Reasoning, 103:28–
70, 2018.
21. L. Serafini and A. S. d’Avila Garcez. Learning and reasoning with logic tensor networks. In
AI*IA 2016: Advances in Artificial Intelligence - XVth Int. Conf. of the Italian Association
for Artificial Intelligence, Genova, Italy, November 29 - December 1, 2016, Proceedings,
volume 10037 of LNCS, pages 334–348. Springer.
22. J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and bayesian inference.
Behavioral and Brain Sciences, 24:629–641, 2001.
23. Z. Yang, A. Ishay, and J. Lee. NeurASP: Embracing neural networks into answer set
programming. In C. Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference
on Artificial Intelligence, IJCAI 2020, pages 1755–1762. ijcai.org, 2020.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brandt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          .
          <article-title>Pushing the EL envelope</article-title>
          . In L.P. Kaelbling and A. Saffiotti, editors,
          <source>Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI</source>
          <year>2005</year>
          ), pages
          <fpage>364</fpage>
          -
          <lpage>369</lpage>
          , Edinburgh, Scotland, UK,
          <year>August 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>G.</given-names>
            <surname>Brewka</surname>
          </string-name>
          .
          <article-title>A rank based description language for qualitative preferences</article-title>
          .
          <source>In Proceedings of the 16th Eureopean Conference on Artificial Intelligence, ECAI'</source>
          <year>2004</year>
          , Valencia, Spain,
          <source>August 22-27</source>
          ,
          <year>2004</year>
          , pages
          <fpage>303</fpage>
          -
          <lpage>307</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>K.</given-names>
            <surname>Britz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heidema</surname>
          </string-name>
          , and T. Meyer.
          <article-title>Semantic preferential subsumption</article-title>
          . In G. Brewka and J. Lang, editors,
          <source>Principles of Knowledge Representation and Reasoning: Proceedings of the 11th International Conference (KR</source>
          <year>2008</year>
          ), pages
          <fpage>476</fpage>
          -
          <lpage>484</lpage>
          , Sidney, Australia,
          <year>September 2008</year>
          . AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          , T. Meyer,
          <string-name>
            <given-names>I. J.</given-names>
            <surname>Varzinczak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Moodley</surname>
          </string-name>
          .
          <article-title>Nonmonotonic Reasoning in Description Logics: Rational Closure for the ABox</article-title>
          .
          <source>In 26th International Workshop on Description Logics (DL</source>
          <year>2013</year>
          ), volume
          <volume>1014</volume>
          <source>of CEUR Workshop Proceedings</source>
          , pages
          <fpage>600</fpage>
          -
          <lpage>615</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          and
          <string-name>
            <given-names>U.</given-names>
            <surname>Straccia</surname>
          </string-name>
          .
          <article-title>Rational Closure for Defeasible Description Logics</article-title>
          . In T. Janhunen and I. Niemelä, editors,
          <source>Proc. 12th European Conf. on Logics in Artificial Intelligence (JELIA</source>
          <year>2010</year>
          ), volume
          <volume>6341</volume>
          <source>of LNCS</source>
          , pages
          <fpage>77</fpage>
          -
          <lpage>90</lpage>
          , Helsinki, Finland,
          <year>September 2010</year>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>A. S. d'Avila Garcez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Gori</surname>
            ,
            <given-names>L. C.</given-names>
          </string-name>
          <string-name>
            <surname>Lamb</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Serafini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Spranger</surname>
          </string-name>
          , and
          <string-name>
            <surname>Son</surname>
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Tran</surname>
          </string-name>
          .
          <article-title>Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning</article-title>
          .
          <source>FLAP</source>
          ,
          <volume>6</volume>
          (
          <issue>4</issue>
          ):
          <fpage>611</fpage>
          -
          <lpage>632</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>A. S. d'Avila Garcez</surname>
            ,
            <given-names>L. C.</given-names>
          </string-name>
          <string-name>
            <surname>Lamb</surname>
            , and
            <given-names>D. M.</given-names>
          </string-name>
          <string-name>
            <surname>Gabbay</surname>
          </string-name>
          .
          <source>Neural-Symbolic Cognitive Reasoning. Cognitive Technologies</source>
          . Springer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Theseider</surname>
          </string-name>
          <article-title>Dupré. An ASP approach for reasoning in a concept-aware multipreferential lightweight DL. Theory and Practice of Logic Programming</article-title>
          ,
          <source>TPLP</source>
          ,
          <volume>10</volume>
          (
          <issue>5</issue>
          ):
          <fpage>751</fpage>
          -
          <lpage>766</lpage>
          ,
          <year>2020</year>
          . https://doi.org/10.1017/S1471068420000381.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Theseider Dupre´</surname>
          </string-name>
          .
          <article-title>On a plausible concept-wise multipreference semantics and its relations with self-organising maps</article-title>
          .
          <source>CoRR</source>
          , abs/2008.13278,
          <year>2020</year>
          . To appear in CILC 2020 (Italian Conference on Computational Logic), 13-15 October 2020, Rende
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          .
          <article-title>Preferential Description Logics</article-title>
          . In Nachum Dershowitz and Andrei Voronkov, editors,
          <source>Proceedings of LPAR 2007 (14th Conference on Logic for Programming, Artificial Intelligence, and Reasoning)</source>
          , volume
          <volume>4790</volume>
          of LNAI
          , pages
          <fpage>257</fpage>
          -
          <lpage>272</lpage>
          , Yerevan, Armenia,
          <year>October 2007</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          .
          <article-title>Semantic characterization of rational closure: From propositional logic to description logics</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>226</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          .
          <article-title>Minimal Model Semantics and Rational Closure in Description Logics</article-title>
          .
          In
          <source>26th International Workshop on Description Logics (DL 2013)</source>
          , volume
          <volume>1014</volume>
          , pages
          <fpage>168</fpage>
          -
          <lpage>180</lpage>
          ,
          <year>2013</year>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Plunkett</surname>
          </string-name>
          .
          <article-title>Grounding bayesian accounts of numerosity and variability effects in a similarity-based framework: the case of self-organising maps</article-title>
          .
          <source>Journal of Cognitive Psychology</source>
          ,
          <volume>31</volume>
          (
          <issue>5-6</issue>
          ),
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>T.</given-names>
            <surname>Kohonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Schroeder</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Huang</surname>
          </string-name>
          , editors.
          <source>Self-Organizing Maps, Third Edition</source>
          . Springer Series in Information Sciences. Springer,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>S.</given-names>
            <surname>Kraus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Magidor</surname>
          </string-name>
          .
          <article-title>Nonmonotonic reasoning, preferential models and cumulative logics</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>44</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>167</fpage>
          -
          <lpage>207</lpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>D.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Magidor</surname>
          </string-name>
          .
          <article-title>What does a conditional knowledge base entail?</article-title>
          <source>Artificial Intelligence</source>
          ,
          <volume>55</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>60</lpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          .
          <source>Counterfactuals</source>
          . Basil Blackwell Ltd,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>