<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Preferential Reasoning with Typicality and Neural Network Models (Extended Abstract)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Laura Giordano</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentina Gliozzi</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Theseider Dupré</string-name>
        </contrib>
        <aff>DISIT - Università del Piemonte Orientale, Italy</aff>
        <aff>Università degli Studi di Torino, Italy</aff>
      </contrib-group>
      <abstract>
<p>In this extended abstract we report some results concerning the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and some neural network models, namely Self-Organising Maps and Multilayer Perceptrons.</p>
        <p>OVERLAY 2021: 3rd Workshop on Artificial Intelligence and Formal Verification, Logic, Automata, and Synthesis, September 22, 2021, Padova, Italy. laura.giordano@uniupo.it (L. Giordano); valentina.gliozzi@unito.it (V. Gliozzi); dtd@uniupo.it (D. Theseider Dupré)</p>
      </abstract>
      <kwd-group>
        <kwd>Common Sense Reasoning</kwd>
        <kwd>Preferential semantics</kwd>
        <kwd>Weighted Conditionals</kwd>
        <kwd>Neural Networks</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>We report some results concerning the relationships between a multipreference semantics for
defeasible reasoning in knowledge representation and some neural network models, namely,
Self-Organising Maps (SOMs) and Multilayer Perceptrons (MLPs). In particular, weighted
knowledge bases for description logics are considered under a “concept-wise” multipreference
semantics, which, in the fuzzy case, provides a preferential interpretation of MLPs.</p>
      <p>
        Preferential approaches have been used to provide axiomatic foundations of non-monotonic and
common sense reasoning [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1, 2, 3, 4</xref>
        ], and, more recently, they have been extended to description
logics (DLs) to deal with inheritance with exceptions in ontologies, by allowing for non-strict
forms of inclusions, called typicality or defeasible inclusions, with different preferential semantics
[
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ] and closure constructions [
        <xref ref-type="bibr" rid="ref10 ref11 ref7 ref8 ref9">7, 8, 9, 10, 11</xref>
        ]. In this abstract, we consider a concept-wise
multipreference semantics as a semantics for weighted knowledge bases, i.e. knowledge bases
in which defeasible or typicality inclusions of the form T(C) ⊑ D (meaning “the typical C’s
are D’s” or “normally C’s are D’s”) are given a positive or negative weight. A multipreference
semantics taking into account preferences with respect to different concepts was first introduced
by the authors as a semantics for ranked DL knowledge bases [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. For weighted knowledge
bases, a different semantic closure construction is developed, still in the spirit of other semantic
constructions in the literature, and is further extended to the fuzzy case.
      </p>
      <p>
        The concept-wise multipreference semantics has been used to develop semantic
interpretations for some neural network models. Both an unsupervised model, Self-Organising Maps
(SOMs) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], which are regarded as psychologically and biologically plausible neural network
models, and a supervised one, Multilayer Perceptrons (MLPs) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] have been considered. In both
cases, considering the domain of all input stimuli presented to the network during training (or
in the generalization phase), one can build a semantic interpretation describing the input-output
behavior of the network as a multipreference interpretation, where preferences are associated
with concepts. For SOMs, the learned categories are regarded as concepts C<sub>1</sub>, . . . , C<sub>n</sub>, so that a
preference relation (over the domain of input stimuli) is associated with each category [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ]. In the case of
MLPs, each neuron in the deep network (including hidden neurons) is associated with a concept, so
that preference relations are associated with neurons [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. For MLPs, the relationship between these
logics of commonsense reasoning and deep neural networks is even stronger, as a deep neural
network can be regarded as a conditional knowledge base, i.e., a set of weighted conditionals. This
has been achieved by developing a concept-wise fuzzy multipreference semantics for a DL with
weighted defeasible inclusions. In the following we briefly recall these results and discuss some
challenges from the standpoint of explainable AI [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ].
      </p>
    </sec>
    <sec id="sec-1b">
      <title>2. A Multi-preferential interpretation for SOMs and MLPs</title>
      <p>
        The multipreference semantics (cw-semantics) was first developed as a semantics for
strengthening rational closure [20] and was then made concept-wise to provide a semantics
for ranked ℰℒ knowledge bases [21], capturing, through different preference relations, the
preferences among domain elements with respect to different concepts.
      </p>
      <p>
        In weighted knowledge bases [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], besides standard inclusions (called strict inclusions),
defeasible inclusions of the form T(C) ⊑ D are allowed with a weight w, whose meaning is that
“typical C’s are D’s” (or “normally C’s are D’s”) with weight w. Such inclusions correspond to
conditionals C |∼ D in Kraus, Lehmann and Magidor (KLM) preferential logics [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], while the
positive or negative weights of defeasible inclusions represent their plausibility (or implausibility).
      </p>
      <p>
        A cw-interpretation is defined by adding to a standard DL interpretation a set of preference
relations &lt;<sub>1</sub>, . . . , &lt;<sub>k</sub>, each one associated with a distinguished concept C<sub>i</sub>. A DL interpretation
is a pair ⟨∆<sup>I</sup>, ·<sup>I</sup>⟩, where ∆<sup>I</sup> is a domain and ·<sup>I</sup> an interpretation function, and each preference
&lt;<sub>i</sub> captures the relative typicality of domain individuals with respect to C<sub>i</sub>. Preferences with
respect to different concepts do not need to agree, as a domain element x may be more typical
than y as a horse but less typical as a zebra. A global preference relation &lt; can be defined by
Pareto combination of the preference relations &lt;<sub>1</sub>, . . . , &lt;<sub>k</sub>, but a more sophisticated notion of
preference combination has also been considered [21], which takes into account the specificity
relation among concepts. A typicality concept T(C) is then interpreted as the set of all C
elements that are minimal with respect to &lt;. It has been proven [21] that cw-entailment satisfies the KLM
postulates of a preferential consequence relation [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
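      <p>As an illustration only (the domain, the concepts and the ranks below are hypothetical, not taken from the cited papers), the Pareto combination of the concept-wise preferences and the resulting interpretation of typicality concepts can be sketched as follows:</p>

```python
# Concept-wise ranks over a small, hypothetical domain
# (a lower rank means the element is more typical for that concept).
ranks = {
    "Horse": {"a": 0, "b": 1, "c": 2},
    "Zebra": {"a": 2, "b": 0, "c": 1},
}

def globally_preferred(x, y):
    """Pareto combination: x is globally preferred to y iff x is at least
    as typical as y for every concept and strictly more typical for some."""
    at_least = all(ranks[c][y] >= ranks[c][x] for c in ranks)
    strictly = any(ranks[c][y] > ranks[c][x] for c in ranks)
    return at_least and strictly

def typical(concept, extension):
    """T(C): the elements of C's extension that are minimal
    with respect to the preference relation for C."""
    best = min(ranks[concept][x] for x in extension)
    return {x for x in extension if ranks[concept][x] == best}
```

      <p>On this toy domain, the typical Horse is a, the typical Zebra is b, and a and c are incomparable under the global preference, since the Horse and Zebra preferences disagree on them.</p>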
      <p>
        Self-organising maps are psychologically and biologically plausible neural network models [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
that can learn after limited exposure to positive category examples, without the need for contrastive
information. They have been proposed as possible candidates to explain the psychological
mechanisms underlying category generalisation. Multilayer Perceptrons [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] are deep networks.
The learning algorithms in the two cases are quite different, but our approach aims to capture, through
a semantic interpretation, the behavior of the network obtained after training, not to model
learning. We have seen that this can be accomplished in both cases in a similar way, based on the
multi-preferential semantics above and its fuzzy extension.
      </p>
      <p>
        The result of the training phase is represented very differently in the two models: for SOMs
it is given by a set of units spatially organized in a grid (where each unit u in the map is
associated with a weight vector w<sub>u</sub> of the same dimensionality as the input vectors); for MLPs,
as a result of training, the weights of the synaptic connections have been learned. In both
cases, considering the domain of all input stimuli presented to the network during training (or
in the generalization phase), one can build a semantic interpretation describing the input-output
behavior of the network as a multi-preference interpretation, where preferences are associated
with concepts. For SOMs, the learned categories are regarded as concepts C<sub>1</sub>, . . . , C<sub>n</sub>, so that a
preference relation (over the domain of input stimuli) is associated with each category by a notion
of relative distance of a stimulus from its Best Matching Unit [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ]. For MLPs, units in the
deep network (including hidden units), or a subset thereof, can be associated with concepts, each
related to a preference (a well-founded modular partial order). In both cases, a multipreference
interpretation can be constructed from the network after training, describing the input-output
behavior of the network on the input stimuli considered. Such a preferential interpretation can be
used for checking properties like: are the instances of a category C<sub>1</sub> also instances of a category
C<sub>2</sub>? Are typical instances of a category C<sub>1</sub> also instances of a category C<sub>2</sub>? The verification can
be done by model checking on the multipreference interpretation describing the input-output
behavior of the network [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ].
      </p>
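      <p>A minimal sketch of this kind of model checking (all extensions, stimuli and ranks below are hypothetical, invented for illustration) could look as follows:</p>

```python
# A toy multipreference interpretation built from a trained network:
# each concept has an extension (a set of stimuli) and a rank over the
# domain (a lower rank means the stimulus is more typical). Hypothetical data.
ext = {"C1": {"s1", "s2", "s3"}, "C2": {"s1", "s2"}}
rank = {"C1": {"s1": 0, "s2": 0, "s3": 1}, "C2": {"s1": 0, "s2": 1, "s3": 2}}

def holds_subsumption(c, d):
    """Check the strict inclusion C ⊑ D: every instance of c is an instance of d."""
    return ext[c].issubset(ext[d])

def holds_typicality(c, d):
    """Check T(C) ⊑ D: the most typical instances of c are instances of d."""
    best = min(rank[c][x] for x in ext[c])
    typical = {x for x in ext[c] if rank[c][x] == best}
    return typical.issubset(ext[d])
```

      <p>In this toy interpretation the strict inclusion C1 ⊑ C2 fails (because of s3), while the typicality inclusion T(C1) ⊑ C2 holds: exactly the kind of defeasible property the model checking above is meant to verify.</p>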
      <p>This kind of construction establishes strong relationships between the logics of commonsense
reasoning and the neural network models, as the former can reason about the properties
of the latter. These relationships can be made even stronger for MLPs, as the neural
network itself can be regarded as a conditional knowledge base.</p>
      <p>
        Under a fuzzy extension of the multipreference semantics, it has been proven [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] that MLPs
can be regarded as weighted conditional knowledge bases. The multipreference interpretation
constructed over the set of input stimuli to describe the input-output behavior of the deep network
exploits, in the fuzzy case, the activation value of each unit h for a stimulus x, which can be
interpreted as the degree of membership of x in the concept associated with h. The fuzzy interpretation also induces
a preference on the domain for each such concept. Such an interpretation can be proven to be a
fuzzy multipreference model of the knowledge base extracted from the network.
      </p>
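      <p>A possible sketch of this reading of activations (the unit and stimulus names, and the activation values, are hypothetical) is:</p>

```python
# Hypothetical activations: activations[h][x] is the activation of unit h
# on stimulus x, read as the fuzzy membership degree of x in h's concept.
activations = {
    "h1": {"x1": 0.9, "x2": 0.4, "x3": 0.7},
}

def degree(h, x):
    """Degree of membership of stimulus x in the concept associated with h."""
    return activations[h][x]

def preferred(h, x, y):
    """x is preferred to y w.r.t. h's concept iff its degree is higher."""
    return degree(h, x) > degree(h, y)

def preference_order(h):
    """The domain ordered from most to least typical for h's concept."""
    return sorted(activations[h], key=lambda x: -activations[h][x])
```

      <p>Here the induced preference for the concept of h1 ranks x1 above x3 above x2, directly mirroring the activation values.</p>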
      <p>Let C be the concept name associated with unit k, and C<sub>1</sub>, . . . , C<sub>m</sub> the concept names
associated with units u<sub>1</sub>, . . . , u<sub>m</sub>, whose output signals are the input signals for unit k, with synaptic
weights w<sub>k,1</sub>, . . . , w<sub>k,m</sub>. One can define for each unit k a set T<sub>C</sub> of typicality inclusions, with
their associated weights, as follows: T(C) ⊑ C<sub>1</sub> with w<sub>k,1</sub>, . . . , T(C) ⊑ C<sub>m</sub> with w<sub>k,m</sub>.
The collection of the defeasible inclusions T<sub>C</sub> for all concepts (units) defines the weighted
conditional KB associated with the network.</p>
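      <p>This extraction of a weighted conditional KB from the network can be sketched as follows (the unit names, concept names and synaptic weights are hypothetical; in practice they would come from the trained model):</p>

```python
# Hypothetical MLP fragment: unit "k" receives input from units u1, u2, u3
# with the given synaptic weights; each unit is associated with a concept.
incoming = {"k": ["u1", "u2", "u3"]}
weights = {"k": [2.0, -1.5, 0.5]}          # w_k,1, w_k,2, w_k,3
concept = {"k": "C", "u1": "C1", "u2": "C2", "u3": "C3"}

def kb_for_unit(k):
    """The set T_C of weighted typicality inclusions T(C) ⊑ C_i for unit k."""
    c = concept[k]
    return [(f"T({c}) ⊑ {concept[u]}", w)
            for u, w in zip(incoming[k], weights[k])]

def weighted_kb():
    """Union of the sets T_C over all non-input units: the KB of the network."""
    return {k: kb_for_unit(k) for k in incoming}
```

      <p>For the fragment above, unit k yields the inclusions T(C) ⊑ C1 with weight 2.0, T(C) ⊑ C2 with weight -1.5, and T(C) ⊑ C3 with weight 0.5.</p>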
      <p>
        The definition of the multipreference interpretation for a weighted conditional knowledge base exploits
a closure construction in the same spirit as the one considered by Lehmann [22] to define the
lexicographic closure, but closer to Kern-Isberner’s c-representations [23, 24]. As a
difference, our construction in [
difference, our construction in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is concept-wise, thus taking into account the modular structure of the
knowledge base (and of the network). In the fuzzy case, to guarantee that the weights computed
from the KB are coherent with the fuzzy interpretation of concepts, a notion of coherent (fuzzy)
multipreference interpretation is introduced. We refer to [25] for a study of its KLM properties.
      </p>
    </sec>
    <sec id="sec-2">
      <title>3. Conclusions</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref15 ref16 ref17">15, 17, 16</xref>
        ] we have studied the relationships between multi-preferential (and fuzzy) logics of
common sense reasoning and two different neural network models, Self-Organising Maps and
Multilayer Perceptrons, showing that a multi-preferential semantics can be used to provide a
logical model of a neural network's behavior after training. Such a model can be used to learn or
to validate conditional knowledge from the empirical data used for training and generalization,
by model checking of logical properties. A two-valued KLM-style preferential interpretation
with multiple preferences and a fuzzy semantics have been considered, based on the idea of
associating preference relations to categories (in the case of SOMs) or to neurons (for Multilayer
Perceptrons). Given the diversity of the two models, we expect that a similar approach might be
extended to other neural network models and learning approaches.
      </p>
      <p>Much work has been devoted, in recent years, to the combination of neural networks and
symbolic reasoning [26, 27, 28], leading to the definition of new computational models [29,
30, 31, 32] and to extensions of logic programming languages with neural predicates [33, 34].
Among the earliest systems combining logical reasoning and neural learning are the KBANN
[35] and CLIP [36] systems and Penalty Logic [37]. The relationships between normal logic
programs and connectionist networks have been investigated by Garcez and Gabbay [36, 26] and
by Hitzler et al. [38]. The correspondence between neural network models and fuzzy systems
was first investigated by Kosko in his seminal work [39]. A fuzzy extension of preferential
logics has been studied by Casini and Straccia [40], based on a Rational Closure construction.</p>
      <p>
        For Multilayer Perceptrons, it has been proven [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] that a deep network can itself be regarded
as a weighted conditional knowledge base. This opens the possibility of adopting conditional
logics as a basis for neuro-symbolic integration. While a neural network, once trained, is fast
at classifying new stimuli (that is, it can perform instance checking), all other reasoning
services, such as satisfiability, entailment and model checking, are missing. These capabilities
would be needed for dealing with tasks combining empirical and symbolic knowledge, such
as, for instance: proving whether the network satisfies some (strict or conditional) properties;
learning the weights of a conditional knowledge base from empirical data; combining defeasible
inclusions extracted from a neural network with other defeasible or strict inclusions for inference.
      </p>
      <p>
        To make these tasks possible, the development of proof methods for such logics is a
preliminary step. In the two-valued case, multipreference entailment is decidable for weighted ℰℒ⊥
knowledge bases [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. In the fuzzy case, whether the notion of coherent fuzzy multipreference
entailment is decidable is an open problem, even for the small fragment of ℰℒ⊥ without roles.
Undecidability results for fuzzy description logics with general inclusion axioms [41, 42] motivate
the investigation of decidable approximations of fuzzy-multipreference entailment.
      </p>
      <p>
        An issue is whether the mapping of deep neural networks to weighted conditional knowledge
bases can be extended to more complex neural network models, such as graph neural networks
[29], or whether different logical formalisms and semantics would be needed. Another issue is
whether the fuzzy-preferential interpretation of neural networks can be related to the
probabilistic interpretation of neural networks based on statistical AI. Indeed, interpreting concepts as
fuzzy sets suggests a probabilistic account based on Zadeh’s probability of fuzzy events [43], an
approach exploited for SOMs [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. We refer to [44] for a preliminary account for MLPs.
      </p>
      <p>Acknowledgments. We thank the anonymous referees for their helpful comments.</p>
      <p>[20] V. Gliozzi, Reasoning about multiple aspects in rational closure for DLs, in: Proc. AI*IA 2016 - XVth International Conference of the Italian Association for Artificial Intelligence, Genova, Italy, November 29 - December 1, 2016, pp. 392–405.</p>
      <p>[21] L. Giordano, D. Theseider Dupré, An ASP approach for reasoning in a concept-aware multipreferential lightweight DL, Theory and Practice of Logic Programming 20(5) (2020) 751–766.</p>
      <p>[22] D. J. Lehmann, Another perspective on default reasoning, Ann. Math. Artif. Intell. 15 (1995) 61–82.</p>
      <p>[23] G. Kern-Isberner, Conditionals in Nonmonotonic Reasoning and Belief Revision - Considering Conditionals as Agents, volume 2087 of LNCS, Springer, 2001.</p>
      <p>[24] G. Kern-Isberner, C. Eichhorn, Structural inference from conditional knowledge bases, Studia Logica 102 (2014) 751–769.</p>
      <p>[25] L. Giordano, On the KLM properties of a fuzzy DL with Typicality, arXiv:2106.00390, 2021. To appear in ECSQARU 2021.</p>
      <p>[26] A. S. d’Avila Garcez, K. Broda, D. M. Gabbay, Symbolic knowledge extraction from trained neural networks: A sound approach, Artif. Intell. 125 (2001) 155–207.</p>
      <p>[27] A. S. d’Avila Garcez, L. C. Lamb, D. M. Gabbay, Neural-Symbolic Cognitive Reasoning, Cognitive Technologies, Springer, 2009.</p>
      <p>[28] A. S. d’Avila Garcez, M. Gori, L. C. Lamb, L. Serafini, M. Spranger, S. N. Tran, Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning, FLAP 6 (2019) 611–632.</p>
      <p>[29] L. C. Lamb, A. S. d’Avila Garcez, M. Gori, M. O. R. Prates, P. H. C. Avelar, M. Y. Vardi, Graph neural networks meet neural-symbolic computing: A survey and perspective, in: C. Bessiere (Ed.), IJCAI 2020, ijcai.org, 2020, pp. 4877–4884.</p>
      <p>[30] L. Serafini, A. S. d’Avila Garcez, Learning and reasoning with logic tensor networks, in: Proc. AI*IA 2016, Genova, Italy, November 29 - December 1, 2016, volume 10037 of LNCS, Springer, 2016, pp. 334–348.</p>
      <p>[31] P. Hohenecker, T. Lukasiewicz, Ontology reasoning with deep neural networks, J. Artif. Intell. Res. 68 (2020) 503–540.</p>
      <p>[32] D. Le-Phuoc, T. Eiter, A. Le-Tuan, A scalable reasoning and learning approach for neural-symbolic stream fusion, in: AAAI 2021, February 2-9, AAAI Press, 2021, pp. 4996–5005.</p>
      <p>[33] R. Manhaeve, S. Dumancic, A. Kimmig, T. Demeester, L. D. Raedt, DeepProbLog: Neural probabilistic logic programming, in: NeurIPS 2018, 3-8 December 2018, Montréal, Canada, 2018, pp. 3753–3763.</p>
      <p>[34] Z. Yang, A. Ishay, J. Lee, NeurASP: Embracing neural networks into answer set programming, in: C. Bessiere (Ed.), IJCAI 2020, ijcai.org, 2020, pp. 1755–1762.</p>
      <p>[35] G. G. Towell, J. W. Shavlik, Knowledge-based artificial neural networks, Artif. Intell. 70 (1994) 119–165.</p>
      <p>[36] A. S. d’Avila Garcez, G. Zaverucha, The connectionist inductive learning and logic programming system, Appl. Intell. 11 (1999) 59–77.</p>
      <p>[37] G. Pinkas, Reasoning, nonmonotonicity and learning in connectionist networks that capture propositional knowledge, Artif. Intell. 77 (1995) 203–247.</p>
      <p>[38] P. Hitzler, S. Hölldobler, A. K. Seda, Logic programs and connectionist networks, J. Appl. Log. 2 (2004) 245–272.</p>
      <p>[39] B. Kosko, Neural networks and fuzzy systems: a dynamical systems approach to machine intelligence, Prentice Hall, 1992.</p>
      <p>[40] G. Casini, U. Straccia, Towards rational closure for fuzzy logic: The case of propositional Gödel logic, in: Logic for Programming, Artificial Intelligence, and Reasoning - 19th Int. Conf., LPAR-19, Stellenbosch, South Africa, December 14-19, 2013, volume 8312 of LNCS, Springer, 2013, pp. 213–227.</p>
      <p>[41] F. Baader, R. Peñaloza, Are fuzzy description logics with general concept inclusion axioms decidable?, in: FUZZ-IEEE 2011, Taipei, 27-30 June, 2011, IEEE, 2011, pp. 1735–1742.</p>
      <p>[42] M. Cerami, U. Straccia, On the undecidability of fuzzy description logics with GCIs with Lukasiewicz t-norm, CoRR abs/1107.4212 (2011). URL: http://arxiv.org/abs/1107.4212.</p>
      <p>[43] L. Zadeh, Probability measures of fuzzy events, J. Math. Anal. Appl. 23 (1968) 421–427.</p>
      <p>[44] L. Giordano, D. Theseider Dupré, Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model, CoRR abs/2012.13421 (2020). URL: https://arxiv.org/abs/2012.13421.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Delgrande</surname>
          </string-name>
          ,
          <article-title>A first-order conditional logic for prototypical properties</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>33</volume>
          (
          <year>1987</year>
          )
          <fpage>105</fpage>
          -
          <lpage>130</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          ,
          <article-title>Probabilistic Reasoning in Intelligent Systems Networks of Plausible Inference</article-title>
          , Morgan Kaufmann,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kraus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Magidor</surname>
          </string-name>
          ,
          <article-title>Nonmonotonic reasoning, preferential models and cumulative logics</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>44</volume>
          (
          <year>1990</year>
          )
          <fpage>167</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Magidor</surname>
          </string-name>
          ,
          <article-title>What does a conditional knowledge base entail?</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>55</volume>
          (
          <year>1992</year>
          )
          <fpage>1</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          , Preferential Description Logics,
          <source>in: LPAR</source>
          <year>2007</year>
          , volume
          <volume>4790</volume>
          <source>of LNAI</source>
          , Springer, Yerevan, Armenia,
          <year>2007</year>
          , pp.
          <fpage>257</fpage>
          -
          <lpage>272</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Britz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heidema</surname>
          </string-name>
          , T. Meyer, Semantic preferential subsumption, in: G. Brewka, J. Lang (Eds.),
          <source>KR</source>
          <year>2008</year>
          , AAAI Press, Sidney, Australia,
          <year>2008</year>
          , pp.
          <fpage>476</fpage>
          -
          <lpage>484</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          , U. Straccia,
          <article-title>Rational Closure for Defeasible Description Logics</article-title>
          , in: T. Janhunen, I. Niemelä (Eds.),
          <source>JELIA</source>
          <year>2010</year>
          , volume
          <volume>6341</volume>
          <source>of LNCS</source>
          , Springer, Helsinki,
          <year>2010</year>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>90</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          , T. Meyer,
          <string-name>
            <given-names>I. J.</given-names>
            <surname>Varzinczak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Moodley</surname>
          </string-name>
          ,
          <article-title>Nonmonotonic Reasoning in Description Logics: Rational Closure for the ABox</article-title>
          ,
          <source>in: DL</source>
          <year>2013</year>
          , volume
          <volume>1014</volume>
          <source>of CEUR Workshop Proceedings</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>600</fpage>
          -
          <lpage>615</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          ,
          <article-title>Semantic characterization of rational closure: From propositional logic to description logics</article-title>
          ,
          <source>Artif. Intell.</source>
          <volume>226</volume>
          (
          <year>2015</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Britz</surname>
          </string-name>
          , G. Casini, T. Meyer, K. Moodley,
          <string-name>
            <given-names>U.</given-names>
            <surname>Sattler</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Varzinczak</surname>
          </string-name>
          ,
          <article-title>Principles of KLM-style defeasible description logics</article-title>
          ,
          <source>ACM Trans. Comput. Log</source>
          .
          <volume>22</volume>
          (
          <year>2021</year>
          ) 1:
          <fpage>1</fpage>
          -
          <lpage>1</lpage>
          :
          <fpage>46</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <article-title>A reconstruction of multipreference closure</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>290</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>An ASP approach for reasoning in a concept-aware multipreferential lightweight DL</article-title>
          ,
          <source>Theory Pract. Log. Program.</source>
          <volume>20</volume>
          (
          <year>2020</year>
          )
          <fpage>751</fpage>
          -
          <lpage>766</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kohonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schroeder</surname>
          </string-name>
          , T. Huang (Eds.),
          <source>Self-Organizing Maps, Third Edition</source>
          , Springer Series in Information Sciences, Springer,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Haykin</surname>
          </string-name>
          ,
          <source>Neural Networks - A Comprehensive Foundation</source>
          , Pearson,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>On a plausible concept-wise multipreference semantics and its relations with self-organising maps</article-title>
          ,
          <source>in: CILC 2020, Rende, Italy, October 13-15, 2020</source>
          , volume
          <volume>2710</volume>
          of CEUR,
          <year>2020</year>
          , pp.
          <fpage>127</fpage>
          -
          <lpage>140</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>A conditional, a fuzzy and a probabilistic interpretation of self-organising maps</article-title>
          ,
          <source>CoRR abs/2103.06854</source>
          (
          <year>2021</year>
          ). URL: https://arxiv.org/abs/2103.06854.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model</article-title>
          ,
          <source>in: Proc. 17th European Conf. on Logics in AI</source>
          ,
          <source>JELIA 2021, May 17-20</source>
          , volume
          <volume>12678</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>225</fpage>
          -
          <lpage>242</lpage>
          . URL: https://arxiv.org/abs/2012.13421, extended version.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Adadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Berrada</surname>
          </string-name>
          ,
          <article-title>Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)</article-title>
          ,
          <source>IEEE Access</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>52138</fpage>
          -
          <lpage>52160</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <article-title>A survey of methods for explaining black box models</article-title>
          ,
          <source>ACM Comput. Surv.</source>
          <volume>51</volume>
          (
          <year>2019</year>
          )
          <fpage>93:1</fpage>
          -
          <lpage>93:42</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>