<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a Conditional and Multi-preferential Approach to Explainability of Neural Network Models in Computational Logic (Extended Abstract)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mario Alviano</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Bartoli</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Botta</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberto Esposito</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Laura Giordano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentina Gliozzi</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Theseider Dupré</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università del Piemonte Orientale</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università della Calabria</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Università di Torino</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This short paper reports on a line of research exploiting a conditional logic of commonsense reasoning to provide a semantic interpretation of neural network models. A “concept-wise” multi-preferential semantics for conditionals is exploited to build a preferential interpretation of a trained neural network starting from its input-output behavior. The approach is a general one; it was first proposed for Self-Organising Maps (SOMs), and has been exploited for MultiLayer Perceptrons (MLPs) in the verification of properties of a network by model checking. An MLP can be regarded as a (fuzzy) conditional knowledge base (KB), in which the synaptic connections correspond to weighted conditionals. Reasoners for many-valued weighted conditional KBs are under development based on Answer Set solving to deal with entailment and model checking.</p>
      </abstract>
      <kwd-group>
        <kwd>Preferential Description Logics</kwd>
        <kwd>Typicality</kwd>
        <kwd>Neural Networks</kwd>
        <kwd>Explainability</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In this short paper we report on an approach to exploit the logic of commonsense reasoning for
the explainability of some neural network models. We also report on preliminary experiments
in the verification of properties of feedforward neural networks by model checking.</p>
      <p>
        Preferential approaches to commonsense reasoning (e.g., [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) have their roots in conditional
logics [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], and have more recently been extended to Description Logics (DLs) to deal with
defeasible reasoning in ontologies, by allowing non-strict forms of inclusions, called defeasible
or typicality inclusions. Different preferential semantics [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4, 5, 6, 7</xref>
        ] and closure constructions
(e.g., [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]) have been proposed for defeasible DLs. Among these is the concept-wise
multi-preferential semantics [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which accounts for preferences with respect to different
concepts. It has been introduced first as a semantics of ranked knowledge bases in a lightweight
description logic (DL) and then for weighted conditional DL knowledge bases, and proposed as
a semantics for some neural network models [
        <xref ref-type="bibr" rid="ref12 ref13 ref14">12, 13, 14</xref>
        ].
      </p>
      <p>
        We have considered both an unsupervised model, Self-organising maps (SOMs) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], which are
considered a psychologically and biologically plausible neural network model, and a supervised
one, MultiLayer Perceptrons (MLPs) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Learning algorithms in the two cases are quite
different, but our aim was to capture in a semantic interpretation the behavior of the network
after training. Considering a domain of input stimuli presented to a network, e.g., during training
or generalization, a semantic interpretation describing the input-output behavior of the network
can be provided as a multi-preferential interpretation, where preferences are associated with
concepts. For SOMs, the learned categories C1, . . . , Cn are regarded as concepts so that a
preference relation over the domain of input stimuli is associated with each category [
        <xref ref-type="bibr" rid="ref12 ref14">12, 14</xref>
        ].
For MLPs, each unit of interest in the deep network (including hidden units) can be associated
with a concept and with a preference relation on the domain [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        For MLPs, the relationship between the logic of commonsense reasoning and deep neural
networks is even stronger, as the network can itself be regarded as a conditional knowledge
base, i.e., as a set of weighted conditionals. This has been achieved by developing a
concept-wise fuzzy multi-preferential semantics for DLs with weighted defeasible inclusions.
Different preferential closure constructions have been considered for weighted knowledge bases
(the coherent [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], faithful [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and φ-coherent [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] multi-preferential semantics), and their
relationships with MLPs have been investigated (see [
        <xref ref-type="bibr" rid="ref13 ref18">13, 18</xref>
        ]). Undecidability results for fuzzy
DLs with general inclusion axioms [
        <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
        ] have motivated the investigation of the (finitely)
many-valued case. An ASP-based approach has been proposed for reasoning with weighted
conditional KBs under φ-coherent entailment [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], and Datalog with weakly stratified negation
has been used for developing a model-checking approach for MLPs in the many-valued case
[
        <xref ref-type="bibr" rid="ref22 ref23">22, 23</xref>
        ]. Both the entailment and the model-checking approaches have been tested in
the verification of properties of some trained multilayer feedforward networks. These preliminary
results can serve as the basis for further solutions for multi-valued φ-coherent entailment, which
exploit state-of-the-art ASP solving, including custom propagation based on the clingo API [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]
and fuzzy ASP solving [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], in the verification of properties of neural networks.
      </p>
      <p>
        The strong relationships between neural networks and conditional logics of commonsense
reasoning suggest that conditional logics can be used for the verification of properties of
neural networks to explain their behavior, in the direction of a trustworthy and explainable AI
[
        <xref ref-type="bibr" rid="ref26 ref27 ref28">26, 27, 28</xref>
        ]. The possibility of combining learned knowledge with elicited knowledge in the
same formalism is also a step towards neuro-symbolic integration.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. The concept-wise multi-preferential semantics</title>
      <p>The idea underlying the multi-preferential semantics is that, for two domain elements x and y
and two concepts, e.g., Horse and Zebra, x can be regarded as being more typical than y as a
horse (x &lt;Horse y ), while x could be less typical than y as a zebra (y &lt;Zebra x ).</p>
      <p>
        This idea has been exploited in the definition of concept-wise multi-preferential interpretations
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] for a description logic with typicality concepts (e.g., T(Horse), representing the class
of typical horses), and defeasible inclusions (e.g., T(Horse) ⊑ Tall, meaning that “normally
horses are tall”). Typicality inclusions T(C) ⊑ D correspond to Kraus-Lehmann-Magidor
(KLM) conditionals C |∼ D [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Concept-wise multi-preferential interpretations are defined by adding to standard DL
interpretations (pairs I = ⟨∆ , · I ⟩, where ∆ is a domain, and · I an interpretation function) the
preference relations &lt;C1 , . . . , &lt;Cn associated with a set of distinguished concepts C1, . . . , Cn,
representing the typicality of individuals in ∆ with respect to such concepts. Each &lt;Ci is a
modular and well-founded strict partial order on ∆ , like preferences in KLM rational models.</p>
      <p>
        The preference relations are used to define the meaning of typicality concepts. In the
twovalued case, a global preference relation &lt; can be defined from the &lt;Ci ’s, and concept T(C)
is interpreted as the set of all &lt;-minimal C elements. In the fuzzy case [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], the preference
relation &lt;C of a concept C is induced by the fuzzy interpretation CI of the concept, a function
mapping each domain element in ∆ to a value in [0, 1]; that is, x &lt;C y if CI (x) &gt; CI (y).
      </p>
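      <p>As a minimal illustration (with hypothetical membership degrees, not taken from the cited papers), the preference relation induced by a fuzzy concept interpretation can be sketched as follows:</p>

```python
# Sketch: in the fuzzy case, a concept C is interpreted as a function mapping
# domain elements to membership degrees in [0, 1]; the induced preference is
# x <_C y iff C^I(x) > C^I(y) (higher membership means more typical).

# Hypothetical membership degrees for a concept (e.g. Horse) over a small domain.
horse = {"a": 0.9, "b": 0.4, "c": 0.7}

def preferred(concept, x, y):
    """x <_concept y: x is strictly more typical than y w.r.t. the concept."""
    return concept[x] > concept[y]

assert preferred(horse, "a", "b")       # a is more typical than b as a horse
assert not preferred(horse, "b", "c")
```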
    </sec>
    <sec id="sec-3">
      <title>3. A preferential interpretation of Self-Organising Maps</title>
      <p>
        Once a SOM has learned to categorize, the result of the categorization can be seen as a
concept-wise multi-preferential interpretation over a domain of input stimuli, in which a preference
relation is associated with each concept (learned category). To assess category generalization,
Gliozzi and Plunkett [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] define the map’s disposition to
consider a new stimulus y as a member of a known category C as a function of the distance of y
from the map’s representation of C. The distance d(x, Ci) of stimulus x from category Ci can be
used to build a binary preference relation &lt;Ci among the stimuli in ∆ with respect to category
Ci [
        <xref ref-type="bibr" rid="ref12 ref14">14, 12</xref>
        ], by letting x &lt;Ci y if and only if d(x, Ci) &lt; d(y, Ci). Based on the assumption that
the abstraction process in the SOM identifies the most typical exemplars for a given category,
in the semantic representation of a category, some specific stimuli (corresponding to the best
matching units) are identified as the typical exemplars of the category.
      </p>
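      <p>A minimal sketch of this distance-based construction (with hypothetical distances, and assuming that a smaller distance from the category representation means a more typical stimulus):</p>

```python
# Sketch: building a SOM-style preference from (hypothetical) distances
# d(x, C) of stimuli from the map's representation of a category C; we
# assume a smaller distance means a more typical stimulus.

distances = {"x1": 0.2, "x2": 0.8, "x3": 0.5}  # hypothetical d(., C)

def more_typical(d, x, y):
    """x <_C y under the distance-based preference."""
    return d[x] < d[y]

# The minimal elements of <_C (smallest distance) play the role of the
# typical exemplars of the category.
typical = [s for s in distances if distances[s] == min(distances.values())]
assert typical == ["x1"]
```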
      <p>
        The notion of generalization degree introduced by Gliozzi and Plunkett [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] can be used
to define a fuzzy multi-preferential interpretation of SOMs. This is done by interpreting each
category (concept) as a function mapping each input stimulus to a value in [0, 1], based on the
map’s generalization degree of category membership to the stimulus [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ].
      </p>
      <p>
        In both the two-valued and fuzzy case, the preferential model can be exploited to learn or
validate conditional knowledge from empirical data, by verifying conditional formulas over the
preferential interpretation constructed from the SOM. In both cases, model checking can be
used for the verification of inclusions (either defeasible inclusions or fuzzy inclusion axioms)
over the respective models of the SOM (for instance, do the most typical penguins belong to the
category Bird with at least a degree of membership 0.8?). Starting from the fuzzy interpretation
of the SOM, a probabilistic interpretation of this neural network model is also provided [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ],
based on Zadeh’s probability of fuzzy events [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ].
      </p>
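      <p>Zadeh’s probability of a fuzzy event, on which this probabilistic interpretation is based, can be sketched as follows (membership degrees and distribution are hypothetical):</p>

```python
# Sketch of Zadeh's probability of a fuzzy event: P(C) = Σ_x μ_C(x)·p(x),
# with hypothetical membership degrees and a uniform distribution over stimuli.

membership = {"s1": 1.0, "s2": 0.6, "s3": 0.2}  # μ_C over three stimuli
p = {s: 1 / 3 for s in membership}              # uniform probability

P_C = sum(membership[s] * p[s] for s in membership)
assert abs(P_C - 0.6) < 1e-9  # (1.0 + 0.6 + 0.2) / 3
```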
    </sec>
    <sec id="sec-4">
      <title>4. A preferential interpretation of MultiLayer Perceptrons</title>
      <p>
        The input-output behaviour of MLPs can be captured in a similar way as for SOMs by
constructing a preferential interpretation over a domain ∆ of input stimuli, e.g., those stimuli considered
during training or generalization [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Each neuron k of interest for property verification can
be associated to a distinguished concept Ck. For each concept Ck, a preference relation &lt;Ck
is defined over the domain ∆ based on the activity values, yk(v), of neuron k for each input
v ∈ ∆ . In this way, a fuzzy multi-preferential interpretation of the network can be constructed
over the domain ∆ .
      </p>
      <p>
        In a fuzzy multi-preferential interpretation, the activation value yk(x) of neuron k for a
stimulus x in the network (assumed to be in the interval [0, 1]) is taken to be the degree of
membership of x in concept Ck. The interpretation of boolean concepts is defined by fuzzy
combination functions, as usual in fuzzy DLs [
        <xref ref-type="bibr" rid="ref31">31, 32</xref>
        ]. This also allows a preference relation
&lt;C to be associated with any concept C, and the typical C-elements to be identified, provided
the interpretation is well-founded (an assumption which clearly holds when the domain ∆ is
finite, as in this case). Let us call MfN,∆ the fuzzy multi-preferential interpretation built from
network N over a domain ∆ . Logical properties of the network (including fuzzy typicality
inclusions) can then be verified by model checking over such an interpretation. Evaluating
properties involving hidden units might as well be of interest.
      </p>
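      <p>A minimal sketch of such a check (a simplification of the fuzzy semantics, with hypothetical membership degrees): the typical C-elements are those with maximal degree of membership in C, and a typicality inclusion holds if all of them belong to D with at least the given degree.</p>

```python
# Sketch: verifying a fuzzy typicality inclusion T(C) ⊑ D ≥ k over a finite
# domain by model checking. Membership degrees are hypothetical; typical
# C-elements are those with maximal membership in C (the <_C-minimal ones).

C = {"v1": 0.9, "v2": 0.9, "v3": 0.3}  # degree of membership in C
D = {"v1": 0.8, "v2": 0.5, "v3": 0.9}  # degree of membership in D

def holds(C, D, threshold):
    top = max(C.values())
    typical = [x for x in C if C[x] == top]
    return all(D[x] >= threshold for x in typical)

assert holds(C, D, 0.5)      # all typical C-elements are D with degree ≥ 0.5
assert not holds(C, D, 0.6)  # v2 is a counterexample at threshold 0.6
```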
      <p>
        A Datalog-based approach has been developed [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], which builds a multi-valued preferential
interpretation MfN,∆,n of a trained feedforward network N and then verifies the properties of
the network for post-hoc explanation. A multi-valued truth space Cn = {0, 1/n, . . . , (n−1)/n, n/n} is
considered, for some n ≥ 1.
      </p>
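      <p>For illustration, activation values in [0, 1] can be mapped to the finite truth space Cn; the sketch below assumes rounding to the nearest truth value (how the discretization is done is an assumption of this sketch):</p>

```python
# Sketch: mapping an activation value v in [0, 1] to the finite truth space
# Cn = {0, 1/n, ..., (n-1)/n, n/n}, assuming rounding to the nearest value.

def to_cn(v, n):
    return round(v * n) / n

assert to_cn(0.43, 5) == 0.4  # nearest value in C5 = {0, 0.2, 0.4, 0.6, 0.8, 1}
assert to_cn(0.98, 5) == 1.0
assert to_cn(0.0, 5) == 0.0
```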
      <p>The model-checking approach has been tested in the verification of properties of
neural networks for the recognition of basic emotions using the Facial Action Coding System
(FACS) [33], which involves Action Units (AUs), i.e., facial muscle contractions. From the
RAF-DB [34] data set, we selected the subset of the images that were labelled using only one
emotion in the set {surprise, fear, happiness, anger}. A processed dataset containing 5 975
images was input to OpenFace 2.0; the output intensities of AUs were rescaled in order to make
their distribution conformant to the expected one in case AUs were recognized by humans [33].
The resulting AUs were used as input to a neural network trained to classify its input as an
instance of the four emotions. The neural network model we used is a fully connected feedforward
neural network with three hidden layers having 1 800, 1 200, and 600 nodes (all hidden
layers use ReLU activation functions, while the softmax function is used in the output layer).</p>
      <p>
        The relations between such AUs and emotions, studied by psychologists [35], have been used
as a reference for formulae to be verified on neural networks trained to learn such relations. The
model checking approach was applied, using the Clingo ASP solver as Datalog engine, taking as
set of input stimuli ∆ the test set, containing 1 194 images, and n = 5 (given that AU intensities,
when assigned by humans, are on a scale of five values). Table 1 reports some results for the
verification of typicality inclusions T(E) ⊑ F ≥ k/n, with the number of typical individuals
for the emotion E, the number of counterexamples for different values of k (from 1 to n), as
well as the value of the conditional probabilities p(F |T(E)) of concept F given concept T(E),
based on Zadeh’s probability of fuzzy events [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ].
      </p>
      <p>The typicality inclusions relate instances with a high degree of membership in the output
class (the one for the output node) with combinations of AU values. In this case, the results
can be compared with expectations from domain experts [35]; in general, they can be used to
point out knowledge the network has learned, where the attention on typical instances of the
output class may be useful to concentrate on cases that are far from borderline. The probability
measure provides complementary information. Fuzzy DL inclusions may also include fuzzy
modifiers (very, slightly, etc.), which have been considered in fuzzy DLs [36] (e.g., are slightly
happy people instances of au6 ⊔ au12 with a degree ≥ 2/5?).</p>
      <p>
        Concerning Table 1, for example, the formula T(happiness) ⊑ au1 ⊔ au6 ⊔ au12 ⊔ au14 ≥
3/5 holds for all individuals, while T(happiness) ⊑ au12 ≥ 3/5 (where au12 is the activation
of the lip corner puller muscle, that is, smiling) has 1 counterexample out of 255 instances of
T(happiness). The value of P(au12 | T(happiness)) is larger than 4/5, even though there
are 35 counterexamples for T(happiness) ⊑ au12 ≥ 4/5.
      </p>
    </sec>
    <sec id="sec-4b">
      <title>5. MultiLayer Perceptrons as Weighted conditional knowledge bases</title>
      <p>
        The fuzzy multi-preferential interpretation MfN,∆, built from a network N for a given set of
input stimuli (a domain ∆ ) as described above, can be proven to be a model of the neural
network N in a logical sense, by mapping the multilayer network into a weighted conditional
knowledge base KN [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        The weighted conditional knowledge base KN contains, for each neuron k, a set of weighted
defeasible inclusions. If Ck is the concept name associated to unit k and Cj1 , . . . , Cjm are
the concept names associated to units j1, . . . , jm, whose output signals are the input signals
for unit k, with synaptic weights wk,j1 , . . . , wk,jm , then unit k can be associated with a set TCk of
weighted typicality inclusions: T(Ck) ⊑ Cj1 with wk,j1 , . . . , T(Ck) ⊑ Cjm with wk,jm . The
fuzzy multi-preferential interpretation built from a network N over a domain ∆ can be proven
to be a model of the knowledge base KN based on a fuzzy multipreferential semantics, and
specifically based on the notions of coherent [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], faithful [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and φ-coherent [
        <xref ref-type="bibr" rid="ref18">18, 37</xref>
        ] (fuzzy)
multi-preferential semantics.
      </p>
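      <p>The mapping from synaptic weights to weighted typicality inclusions can be sketched as follows (unit names and weights below are hypothetical):</p>

```python
# Sketch: reading off the weighted conditional KB K_N from a network's
# synaptic weights. For each unit k with input units j1..jm and weights
# w_k,j, we obtain typicality inclusions T(C_k) ⊑ C_j with weight w_k,j.

incoming = {  # unit -> list of (input unit, synaptic weight), hypothetical
    "k": [("j1", 0.7), ("j2", -1.2)],
}

def weighted_conditionals(incoming):
    kb = []
    for unit, inputs in incoming.items():
        for src, w in inputs:
            kb.append((f"T(C_{unit}) ⊑ C_{src}", w))
    return kb

assert weighted_conditionals(incoming) == [
    ("T(C_k) ⊑ C_j1", 0.7),
    ("T(C_k) ⊑ C_j2", -1.2),
]
```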
      <p>
        In general, a weighted conditional KB KN [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], besides a set of weighted conditional
inclusions, also contains a TBox and an ABox as in standard (and in fuzzy) description logics.
Multi-preferential semantics for weighted conditional KBs have been defined through a semantic
closure construction in the spirit of Lehmann’s lexicographic closure [38] and Kern-Isberner’s
c-representations [39], but adopting a concept-wise approach, so that different preference
relations are defined.
      </p>
      <p>Specifically, a coherent multi-preferential model of a weighted KB is defined as a fuzzy
interpretation I = ⟨∆ , · I ⟩, which satisfies all DL axioms in TBox and ABox, as well as a coherence
condition which requires that each preference relation &lt;Ci , induced from the fuzzy
interpretation over the domain ∆ , is coherent with the weights Wi(x) of all domain individuals x with
respect to concept Ci. For each distinguished concept Ci and domain element x ∈ ∆ , the weight
Wi(x) of x wrt Ci in a fuzzy interpretation I = ⟨∆ , · I ⟩ is the sum Wi(x) = ∑h whi DiI,h(x).</p>
      <p>
        For instance, in the φ-coherence semantics a function φi : R → [0, 1] is associated with each
distinguished concept Ci. An interpretation I = ⟨∆ , · I ⟩ is φ-coherent if, for all concepts Ci ∈ C
and x ∈ ∆ ,
      </p>
      <p>CiI (x) = φi(∑h whi DiI,h(x))</p>
      <p>where TCi = {(T(Ci) ⊑ Di,h, whi)} is the set of weighted conditionals for Ci.</p>
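      <p>A sketch of checking the φ-coherence condition on a single unit (a sigmoid activation is assumed as φi; weights and membership degrees are hypothetical):</p>

```python
# Sketch: checking the φ-coherence condition C_i^I(x) = φ_i(Σ_h w_h · D_h^I(x))
# on a single distinguished concept. For an MLP, φ_i would be the activation
# function of unit i; here we assume a sigmoid, with hypothetical values.

import math

def phi(z):
    """Assumed activation function of the unit (sigmoid)."""
    return 1 / (1 + math.exp(-z))

weights = [0.5, -0.3]                    # weights w_h of the conditionals for C_i
D = {"x": [0.8, 0.2]}                    # membership degrees D_{i,h}^I(x)
C_i = {"x": phi(0.5 * 0.8 - 0.3 * 0.2)}  # interpretation induced by the network

def phi_coherent(C_i, D, weights):
    return all(
        abs(C_i[x] - phi(sum(w * d for w, d in zip(weights, D[x])))) < 1e-9
        for x in C_i
    )

assert phi_coherent(C_i, D, weights)  # holds by construction for the network
```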
      <p>
        Since a trained neural network can be seen as a weighted defeasible KB KN , φ-coherent entailment
can then be used to prove properties of the network for post-hoc explanation. Some preliminary
experiments have been done based on finitely many-valued Gödel description logic with
typicality GnLCT [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], by defining an ASP encoding of entailment. As a proof of concept, in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]
the entailment approach has been tested on the weighted GnLCT KBs corresponding
to two of the trained multilayer feedforward networks for the MONK’s problems [40].
      </p>
      <p>
        The model-checking approach does not require considering the activity of all units, but only
of those involved in the property to be verified. In the entailment-based approach, on the
other hand, all units are considered. This requires advanced solving techniques for reasoning
about large networks, which may include state-of-the-art ASP solving and fuzzy ASP solving
[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], as well as other techniques.
      </p>
    </sec>
    <sec id="sec-5">
      <title>6. Conclusions</title>
      <p>Conditional logics of commonsense reasoning can be used for interpreting and verifying the
knowledge learned by a neural network for post-hoc explanation and, for MLPs, a trained
network can itself be seen as a conditional knowledge base.</p>
      <p>Much work has been devoted to the combination of neural networks and symbolic reasoning
(e.g., the work by d’Avila Garcez et al. [41, 42, 43] and Setzu et al. [44]), as well as to the definition
of new computational models [45, 46, 47, 48]. The work summarized in this paper opens up
the possibility of adopting conditional logics as a basis for neuro-symbolic integration, e.g.,
learning the weights of a conditional knowledge base from empirical data, and combining the
defeasible inclusions extracted from a neural network with other defeasible or strict inclusions
for inference.</p>
      <p>
        Using a multi-preferential logic for the verification of typicality properties of a neural network
by model-checking is a general (model agnostic) approach. It can be used for SOMs, as in [
        <xref ref-type="bibr" rid="ref12 ref14">12, 14</xref>
        ],
by exploiting a notion of distance of a stimulus from a category to define a preferential structure,
as well as for MLPs, by exploiting unit activities to build a fuzzy preferential interpretation.
Given the simplicity of the approach, a similar construction can be adapted to other network
models and learning approaches, and used in applications combining different network models
(as in the mentioned experiment on the recognition of basic emotions [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]).
      </p>
      <p>Both the model-checking approach and the entailment-based approach are global approaches
(see, e.g., [44] for the notions of local and global approaches), as they consider the behavior
of the network over a set ∆ of input stimuli. Indeed, the evaluation of typicality inclusions
considers all the individuals in the domain to establish preference relations among them, with
respect to different aspects. For MLPs, given the associated weighted KB, properties of single
individuals can as well be verified through entailment (by instance checking, in DL terminology),
and an interesting direction of investigation is the study of counterfactual explanation [49].</p>
      <p>
        The entailment-based approach is based on the idea of regarding a multilayer network as a
weighted conditional knowledge base, and is specific to this network model. For MLPs, it has
been proven that, in the fuzzy case, the interpretation built for model-checking is indeed a
model of the weighted conditional KB corresponding to the network [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Whether it is possible
to extend the logical encoding of MLPs as weighted KBs to other neural network models is a
subject for future investigation. The development of a temporal extension of this formalism to
capture the transient behavior of MLPs is also an interesting direction to extend this work.
      </p>
      <p>Acknowledgement: We thank the anonymous referees for their helpful suggestions. This
research was partially supported by the Università del Piemonte Orientale, by the LAIA lab
(part of the SILA labs), and by INDAM-GNCS Project 2022 “LESLIE”.
[32] T. Lukasiewicz, U. Straccia, Managing uncertainty and vagueness in description logics for
the Semantic Web, J. Web Semant. 6 (2008) 291–308.
[33] P. Ekman, W. Friesen, J. Hager, Facial Action Coding System, Research Nexus, 2002.
[34] S. Li, W. Deng, J. Du, Reliable crowdsourcing and deep locality-preserving learning for
expression recognition in the wild, in: 2017 IEEE Conference on Computer Vision and
Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 2017, pp. 2584–2593.
[35] B. Waller, J. C. Jr., A. Burrows, Selection for universal facial emotion, Emotion 8 (2008)
435–439.
[36] T. Lukasiewicz, U. Straccia, Description logic programs under probabilistic uncertainty
and fuzzy vagueness, Int. J. Approx. Reason. 50 (2009) 837–853.
[37] L. Giordano, From weighted conditionals with typicality to a gradual argumentation
semantics and back, in: Proc. 20th International Workshop on Non-Monotonic Reasoning,
NMR 2022, Part of FLoC 2022, Haifa, Israel, August 7-9, 2022, volume 3197 of CEUR
Workshop Proceedings, CEUR-WS.org, 2022, pp. 127–138.
[38] D. J. Lehmann, Another perspective on default reasoning, Ann. Math. Artif. Intell. 15
(1995) 61–82.
[39] G. Kern-Isberner, Conditionals in Nonmonotonic Reasoning and Belief Revision -
Considering Conditionals as Agents, volume 2087 of LNCS, Springer, 2001.
[40] S. Thrun et al., A Performance Comparison of Different Learning Algorithms, Technical
Report CMU-CS-91-197, Carnegie Mellon University, 1991.
[41] A. S. d’Avila Garcez, K. Broda, D. M. Gabbay, Symbolic knowledge extraction from trained
neural networks: A sound approach, Artif. Intell. 125 (2001) 155–207.
[42] A. S. d’Avila Garcez, L. C. Lamb, D. M. Gabbay, Neural-Symbolic Cognitive Reasoning,
Cognitive Technologies, Springer, 2009.
[43] A. S. d’Avila Garcez, M. Gori, L. C. Lamb, L. Serafini, M. Spranger, S. N. Tran,
Neural-symbolic computing: An effective methodology for principled integration of machine
learning and reasoning, FLAP 6 (2019) 611–632.
[44] M. Setzu, R. Guidotti, A. Monreale, F. Turini, D. Pedreschi, F. Giannotti, GlocalX - from
local to global explanations of black box AI models, Artif. Intell. 294 (2021) 103457.
[45] L. C. Lamb, A. S. d’Avila Garcez, M. Gori, M. O. R. Prates, P. H. C. Avelar, M. Y. Vardi,
Graph neural networks meet neural-symbolic computing: A survey and perspective, in:
C. Bessiere (Ed.), Proc. IJCAI 2020, ijcai.org, 2020, pp. 4877–4884.
[46] L. Serafini, A. S. d’Avila Garcez, Learning and reasoning with logic tensor networks, in:
XVth Int. Conf. of the Italian Association for Artificial Intelligence, AI*IA 2016, Genova,
Italy, Nov 29 - Dec 1, volume 10037 of LNCS, Springer, 2016, pp. 334–348.
[47] P. Hohenecker, T. Lukasiewicz, Ontology reasoning with deep neural networks, J. Artif.
Intell. Res. 68 (2020) 503–540.
[48] D. Le-Phuoc, T. Eiter, A. Le-Tuan, A scalable reasoning and learning approach for
neural-symbolic stream fusion, in: AAAI 2021, February 2-9, AAAI Press, 2021, pp. 4996–5005.
[49] R. Mothilal, A. Sharma, C. Tan, Explaining machine learning classifiers through diverse
counterfactual explanations, in: FAT* ’20: Conference on Fairness, Accountability, and
Transparency, Barcelona, Spain, January 27-30, 2020, ACM, 2020, pp. 607–617.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kraus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Magidor</surname>
          </string-name>
          ,
          <article-title>Nonmonotonic reasoning, preferential models and cumulative logics</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>44</volume>
          (
          <year>1990</year>
          )
          <fpage>167</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          , Counterfactuals, Basil Blackwell Ltd,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Nute</surname>
          </string-name>
          , Topics in conditional logic, Reidel, Dordrecht (
          <year>1980</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          , Preferential Description Logics,
          <source>in: LPAR</source>
          <year>2007</year>
          , volume
          <volume>4790</volume>
          <source>of LNAI</source>
          , Springer, Yerevan, Armenia,
          <year>2007</year>
          , pp.
          <fpage>257</fpage>
          -
          <lpage>272</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Britz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heidema</surname>
          </string-name>
          , T. Meyer,
          <article-title>Semantic preferential subsumption</article-title>
          , in: G. Brewka, J. Lang (Eds.),
          <source>KR</source>
          <year>2008</year>
          , AAAI Press, Sydney, Australia,
          <year>2008</year>
          , pp.
          <fpage>476</fpage>
          -
          <lpage>484</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          , T. A. Meyer, I. Varzinczak,
          <article-title>Contextual conditional reasoning</article-title>
          ,
          <source>in: AAAI-21</source>
          , Virtual Event, February 2-9,
          <year>2021</year>
          , AAAI Press,
          <year>2021</year>
          , pp.
          <fpage>6254</fpage>
          -
          <lpage>6261</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <article-title>A reconstruction of multipreference closure</article-title>
          ,
          <source>Artif. Intell.</source>
          <volume>290</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          , U. Straccia,
          <article-title>Rational Closure for Defeasible Description Logics</article-title>
          , in: T. Janhunen, I. Niemelä (Eds.),
          <source>JELIA</source>
          <year>2010</year>
          , volume
          <volume>6341</volume>
          <source>of LNCS</source>
          , Springer, Helsinki,
          <year>2010</year>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>90</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Casini</surname>
          </string-name>
          , T. Meyer, K. Moodley,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nortje</surname>
          </string-name>
          ,
          <article-title>Relevant closure: A new form of defeasible reasoning for description logics</article-title>
          ,
          <source>in: JELIA</source>
          <year>2014</year>
          , LNCS 8761, Springer,
          <year>2014</year>
          , pp.
          <fpage>92</fpage>
          -
          <lpage>106</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Pozzato</surname>
          </string-name>
          ,
          <article-title>Semantic characterization of rational closure: From propositional logic to description logics</article-title>
          ,
          <source>Artif. Intell.</source>
          <volume>226</volume>
          (
          <year>2015</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>An ASP approach for reasoning in a concept-aware multipreferential lightweight DL</article-title>
          ,
          <source>TPLP</source>
          <volume>20</volume>
          (
          <issue>5</issue>
          ) (
          <year>2020</year>
          )
          <fpage>751</fpage>
          -
          <lpage>766</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>On a plausible concept-wise multipreference semantics and its relations with self-organising maps</article-title>
          , in:
          <string-name>
            <given-names>F.</given-names>
            <surname>Calimeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Perri</surname>
          </string-name>
          , E. Zumpano (Eds.),
          <source>CILC</source>
          <year>2020</year>
          , Rende, IT, Oct. 13-15,
          <year>2020</year>
          , volume
          <volume>2710</volume>
          <source>of CEUR</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>127</fpage>
          -
          <lpage>140</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model</article-title>
          ,
          <source>in: Proc. JELIA</source>
          <year>2021</year>
          , May 17-20, volume
          <volume>12678</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>225</fpage>
          -
          <lpage>242</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. T.</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>A conditional, a fuzzy and a probabilistic interpretation of self-organizing maps</article-title>
          ,
          <source>J. Log. Comput.</source>
          <volume>32</volume>
          (
          <year>2022</year>
          )
          <fpage>178</fpage>
          -
          <lpage>205</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kohonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schroeder</surname>
          </string-name>
          , T. Huang (Eds.),
          <source>Self-Organizing Maps</source>
          , Third Edition, Springer Series in Information Sciences, Springer,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Haykin</surname>
          </string-name>
          ,
          <source>Neural Networks - A Comprehensive Foundation</source>
          , Pearson,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <article-title>On the KLM properties of a fuzzy DL with Typicality</article-title>
          ,
          <source>in: Proc. ECSQARU</source>
          <year>2021</year>
          , Prague, Sept. 21-24,
          <year>2021</year>
          , volume
          <volume>12897</volume>
          <source>of LNCS</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>557</fpage>
          -
          <lpage>571</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <article-title>From weighted conditionals of multilayer perceptrons to a gradual argumentation semantics</article-title>
          ,
          <source>in: 5th Workshop on Advances in Argumentation in Artif. Intell.</source>
          ,
          <year>2021</year>
          , Milan, Italy, Nov. 29
          , volume
          <volume>3086</volume>
          <source>of CEUR Workshop Proc.</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cerami</surname>
          </string-name>
          , U. Straccia,
          <article-title>On the undecidability of fuzzy description logics with GCIs with Lukasiewicz t-norm</article-title>
          ,
          <source>CoRR abs/1107.4212</source>
          (
          <year>2011</year>
          ). URL: http://arxiv.org/abs/1107.4212.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Borgwardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <article-title>Undecidability of fuzzy description logics</article-title>
          , in: G. Brewka,
          <string-name>
            <given-names>T.</given-names>
            <surname>Eiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>McIlraith</surname>
          </string-name>
          (Eds.),
          <source>Proc. KR</source>
          <year>2012</year>
          , Rome, Italy, June 10-14,
          <year>2012</year>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>An ASP approach for reasoning on neural networks under a finitely many-valued semantics for weighted conditional knowledge bases</article-title>
          ,
          <source>Theory Pract. Log. Program.</source>
          <volume>22</volume>
          (
          <year>2022</year>
          )
          <fpage>589</fpage>
          -
          <lpage>605</lpage>
          . doi:10.1017/S1471068422000163.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bartoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Botta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Theseider</given-names>
            <surname>Dupré</surname>
          </string-name>
          ,
          <article-title>An ASP approach for reasoning about the conditional properties of neural networks: an experiment in the recognition of basic emotions</article-title>
          ,
          <source>in: Datalog 2.0</source>
          <year>2022</year>
          , volume
          <volume>3203</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>54</fpage>
          -
          <lpage>67</lpage>
          . URL: http://ceur-ws.org/Vol-3203/paper4.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bartoli</surname>
          </string-name>
          ,
          <article-title>A Typicality-based Interpretation of Neural Networks: an Experiment on Facial Emotion Recognition</article-title>
          ,
          <source>Master Thesis in Stochastics and Data Science</source>
          , University of Torino,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gebser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kaminski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kaufmann</surname>
          </string-name>
          , M. Ostrowski,
          <string-name>
            <given-names>T.</given-names>
            <surname>Schaub</surname>
          </string-name>
          , P. Wanko,
          <article-title>Theory solving made easy with Clingo 5</article-title>
          , in:
          <source>Technical Commun. of the 32nd International Conference on Logic Programming</source>
          ,
          <source>ICLP 2016 TCs, October 16-21</source>
          ,
          <year>2016</year>
          , New York City, USA,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>M.</given-names>
            <surname>Alviano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <article-title>Fuzzy answer set computation via satisfiability modulo theories</article-title>
          ,
          <source>Theory Pract. Log. Program.</source>
          <volume>15</volume>
          (
          <year>2015</year>
          )
          <fpage>588</fpage>
          -
          <lpage>603</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Adadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Berrada</surname>
          </string-name>
          ,
          <article-title>Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)</article-title>
          ,
          <source>IEEE Access</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>52138</fpage>
          -
          <lpage>52160</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <article-title>A survey of methods for explaining black box models</article-title>
          ,
          <source>ACM Comput. Surv.</source>
          <volume>51</volume>
          (
          <year>2019</year>
          )
          <fpage>93:1</fpage>
          -
          <lpage>93:42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Arrieta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. D.</given-names>
            <surname>Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Del Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bennetot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tabik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barbado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>García</surname>
          </string-name>
          , S. Gil-Lopez, D. Molina,
          <string-name>
            <given-names>R.</given-names>
            <surname>Benjamins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chatila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Herrera</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI</article-title>
          ,
          <source>Inf. Fusion</source>
          <volume>58</volume>
          (
          <year>2020</year>
          )
          <fpage>82</fpage>
          -
          <lpage>115</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Plunkett</surname>
          </string-name>
          ,
          <article-title>Grounding bayesian accounts of numerosity and variability effects in a similarity-based framework: the case of self-organising maps</article-title>
          ,
          <source>Cogn. Sci.</source>
          <volume>31</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zadeh</surname>
          </string-name>
          ,
          <article-title>Probability measures of fuzzy events</article-title>
          ,
          <source>J. Math. Anal. Appl.</source>
          <volume>23</volume>
          (
          <year>1968</year>
          )
          <fpage>421</fpage>
          -
          <lpage>427</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>G.</given-names>
            <surname>Stoilos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. B.</given-names>
            <surname>Stamou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Tzouvaras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Horrocks</surname>
          </string-name>
          ,
          <article-title>Fuzzy OWL: uncertainty and the semantic web</article-title>
          ,
          <source>in: OWLED*05 Workshop</source>
          , volume
          <volume>188</volume>
          <source>of CEUR Workshop Proc.</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>