<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Evaluating the Interpretability of Tooth Expressions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Guendalina Righetti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Porello</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberto Confalonieri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Free University of Bozen-Bolzano, Faculty of Computer Science</institution>
          ,
          <addr-line>39100, Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università di Genova, Dipartimento di Antichità, Filosofia e Storia</institution>
          ,
          <addr-line>16126, Genova</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In Knowledge Representation, Tooth expressions have been shown to behave like linear classification models. Thus, they can provide a powerful yet natural tool to represent local explanations of black box classifiers in the context of Explainable AI. In this extended abstract, we present the results of a user study in which we evaluated the interpretability of Tooth expressions compared to Disjunctive Normal Forms (DNF). In the user study, we asked respondents to carry out two classification tasks using concepts represented either as Tooth expressions or as different types of DNF formulas. We evaluated interpretability through accuracy, response time, confidence, and perceived understandability by human users. In line with our hypothesis, the study revealed that Tooth expressions are generally faster to use, and that they are perceived as more understandable by users who are less familiar with logic.</p>
      </abstract>
      <kwd-group>
        <kwd>Tooth expressions</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>Interpretability</kwd>
        <kwd>User study</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Symbolic knowledge plays a key role in the creation of intelligible explanations. In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], it has
been shown that integrating DL ontologies into the creation of explanations can enhance
the perceived interpretability of post-hoc explanations by human users.
      </p>
      <p>
        Motivated by the conventional wisdom that disjunctive normal form (DNF) is considered
a benchmark for both the expressivity and the interpretability of logic-based knowledge
representations [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], we assume to have local explanations of black box models modelled as DNF
formulas. An example explanation from a loan agent could be: ‘I grant a loan when the subject has
no children and is married, or when they have a high income range’ (i.e., (¬HasChildren ⊓ Married) ⊔
HighIncome). Prior works raised the questions of whether DNF is always the most interpretable
representation, and whether alternative representation forms enable better interpretability [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
In particular, [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] evaluated several forms of DNFs in terms of their interpretability when
presented to human users as logical explanations for different domains of application. In this
work we aim at comparing the interpretability of DNFs and Tooth expressions.
      </p>
      <p>
        Tooth expressions have been studied in the context of Knowledge Representation and
integrated within DLs in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], by adding a novel concept constructor, the “Tooth” operator (∇∇).
Tooth operators allow for introducing weights into standard DL languages to assess the
importance of the features in the definition of a concept. For instance, as we shall see, the concept
∇∇1((HasChildren, −1), (HighIncome, 2), (Married, 1)) classifies those instances for which the sum of
the weights of the satisfied concepts reaches the threshold 1.
      </p>
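      <p>To make the example concrete, here is a minimal Python sketch of how such a Tooth expression classifies an instance. The concept names (HasChildren, HighIncome, Married) and the dictionary encoding of instances are illustrative assumptions, not the paper's notation:</p>

```python
# A minimal sketch (not the paper's implementation) of how the Tooth concept
# ∇∇1((HasChildren, -1), (HighIncome, 2), (Married, 1)) classifies instances.
# Concept names and the dict-based encoding of instances are assumptions.

def tooth(threshold, weighted_concepts, instance):
    """True iff the weights of the concepts satisfied by `instance` sum to >= threshold."""
    score = sum(w for concept, w in weighted_concepts if instance.get(concept, False))
    return score >= threshold

loan = [("HasChildren", -1), ("HighIncome", 2), ("Married", 1)]

print(tooth(1, loan, {"Married": True}))                          # score 1  -> True
print(tooth(1, loan, {"HasChildren": True, "HighIncome": True}))  # score 1  -> True
print(tooth(1, loan, {"HasChildren": True, "Married": True}))     # score 0  -> False
```

      <p>Note how the negative weight on HasChildren can be offset by the larger positive weight on HighIncome, which is exactly the trade-off the DNF rendering of the loan policy expresses with a disjunction.</p>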
      <p>
        In the context of XAI, Tooth expressions provide a powerful yet natural tool to represent
local explanations of black box classifiers. In [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ] a link between Tooth expressions and linear
classifiers has been established, where it is shown that Tooth operators behave like perceptrons.
Furthermore, adding Tooth operators to any language including the booleans increases neither
the expressivity nor the complexity of the language. Tooth expressions are indeed equivalent
to standard DNFs [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]: they are ‘syntactic sugar’ for languages that include the booleans.
They allow, however, for crisper formulas, being thus less error-prone and, putatively, more
understandable. Moreover, the representation of Tooth expressions is inspired by the design
of Prototype Theory [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Tooth operators are thus more cognitively grounded than standard
logic languages, allowing for a representation of concepts that is, arguably, more in line with
the way humans think of them [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ].
      </p>
      <p>
        In this paper, we present the results of a user study we conducted to measure the
interpretability of Tooth expressions versus their translation into standard DNFs. In the user study,
respondents were asked to carry out different classification tasks using concepts represented
both as Tooth expressions and as DNFs. In line with previous works evaluating the
interpretability of explanation formats (e.g., [
        <xref ref-type="bibr" rid="ref1 ref11 ref12 ref13 ref4">11, 1, 4, 12, 13</xref>
        ]), we used the metrics of accuracy, time
of response, and confidence in the answers as a proxy for evaluating the interpretability of the
two representations. We expected that Tooth expressions would be perceived as more interpretable.
In line with our hypothesis, our study revealed that the type of task, the background of the
respondents, and the size of the DNF formula affect the interpretability of the formalism used.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>
        Tooth Operator - Preliminary Definitions. In this section, we delineate the formal
framework necessary to introduce ∇∇ (Tooth) expressions. Following the work done in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we extend
standard DL languages with a class of n-ary operators denoted by the symbol ∇∇ (spoken
‘Tooth’). Each operator works as follows: (i) it takes a list of concepts, (ii) it associates a weight
(i.e., a number) to each of them, and (iii) it returns a complex concept that applies to those
instances that satisfy a certain combination of concepts, i.e., those instances for which, by
summing up the weights of the satisfied concepts, a certain threshold is met. More precisely, we
assume a vector of n weights w⃗ ∈ Rⁿ and a threshold value t ∈ R. If C1, . . . , Cn are concepts
of ℒ, then ∇∇ᵗw⃗(C1, . . . , Cn) is a concept of ℒ∇∇. To better visualise the weights an
operator associates to the concepts, we often use the notation ∇∇ᵗ((C1, w1), . . . , (Cn, wn))
instead of ∇∇ᵗw⃗(C1, . . . , Cn). The semantics of ℒ∇∇ just extends the usual semantics of
ℒ to account for the interpretation of the Tooth operator, as follows. Let ℐ = (∆ℐ, ·ℐ) be an
interpretation of ℒ. The interpretation of a ∇∇-concept C = ∇∇ᵗ((C1, w1), . . . , (Cn, wn))
is: Cℐ = {d ∈ ∆ℐ | vC(d) ≥ t}, where vC(d) is the value of d ∈ ∆ℐ under the concept C, i.e.:
vC(d) = Σ i∈{1,...,n} {wi | d ∈ Ciℐ}. (More precisely, non-nested Tooth expressions are not able
to represent the XOR; nested Tooth expressions can, however, overcome this difficulty.)
      </p>
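      <p>The set-based semantics can be rendered directly in code. The following is an illustrative sketch under assumed names (a four-element domain and three made-up concept extensions), not an implementation from the paper:</p>

```python
# Illustrative sketch of the interpretation C^I = {d in Delta^I | v_C(d) >= t}.
# Domain elements and concept extensions below are made-up examples.

def tooth_extension(domain, weighted_concepts, threshold):
    """Collect the domain elements whose weighted membership score meets the threshold."""
    def v(d):  # v_C(d): sum of the weights of the concepts whose extension contains d
        return sum(w for extension, w in weighted_concepts if d in extension)
    return {d for d in domain if v(d) >= threshold}

domain = {"a", "b", "c", "d"}
C1, C2, C3 = {"b", "c"}, {"c", "d"}, {"a", "c"}

# For ∇∇1((C1, -1), (C2, 2), (C3, 1)): a scores 1, b scores -1, c scores 2, d scores 2
print(sorted(tooth_extension(domain, [(C1, -1), (C2, 2), (C3, 1)], 1)))  # ['a', 'c', 'd']
```

      <p>The element b is excluded because its only satisfied concept carries a negative weight, which is how the operator encodes features that count against membership.</p>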
      <p>
        We refer the interested reader to [
        <xref ref-type="bibr" rid="ref14 ref5 ref6">5, 6, 14</xref>
        ] for a more precise account of the properties of the
operator.
      </p>
      <p>Disjunctive Normal Forms - Preliminary Definitions. A disjunctive normal form (DNF)
is a logical formula consisting of a disjunction of one or more conjunctions of one or more
literals. In our study, we used DL symbols (⊓, ⊔) to interpret conjunctions and disjunctions.</p>
      <p>
        We will follow the definitions from [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Accordingly, DNF is a strict subset of the Negation
Normal Form language. An NNF formula is characterised as a rooted, directed, acyclic graph,
where each leaf node is labeled with a propositional variable or its negation, and each internal
node is labeled with a conjunction or a disjunction. A DNF is a flat NNF, i.e., an NNF whose
maximum number of edges from the root to some leaf is 2. Moreover, DNFs satisfy the property
of simple conjunction, i.e., each propositional variable occurs at most once in each conjunction.
      </p>
      <p>One can consider different NNF subsets by imposing one or more of the following conditions
on the formulas: (i) Decomposability: an NNF is decomposable (DNNF) if for each conjunction
in the NNF, the conjuncts do not share variables. Each DNF is decomposable by definition. (ii)
Determinism: an NNF is deterministic (d-NNF) if for each disjunction in the NNF, every two
disjuncts are logically contradictory. (iii) Smoothness: an NNF satisfies smoothness (s-NNF) if for
each disjunction, each disjunct mentions the same variables. When looking at DNF,
the class of formulas satisfying determinism and smoothness is called MODS.</p>
      <p>In what follows, we will consider three sets of DNF, obtained by adding different conditions
on the formulas (and leading to formulas of different sizes): (i) DNF1: Simple (decomposable)
DNFs (DNF1 ⊊ DNF), corresponding to the shortest formulas. The only requirement
for the formulas is to satisfy the property of simple conjunction. (ii) DNF2: Deterministic
DNFs (DNF2 ⊊ d-DNF), for which each pair of disjuncts is required to be logically
contradictory. (iii) DNF3: Deterministic, smooth DNFs (DNF3 ⊊ MODS), corresponding to
the longest possible DNFs. DNF3 collects all the models of the formula.</p>
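      <p>To illustrate how the format affects formula size, the sketch below builds a short simple-conjunction DNF and the full MODS-style DNF3 for the loan example from the introduction, and checks that they agree on every assignment. The variable names and the dict-based term encoding are assumptions made for illustration:</p>

```python
# Illustrative comparison of a short DNF1-style formula with its MODS (DNF3)
# counterpart for (¬HasChildren ⊓ Married) ⊔ HighIncome; encoding is assumed.
from itertools import product

VARS = ["HasChildren", "Married", "HighIncome"]

def target(a):
    # the Boolean function behind the loan example
    return (not a["HasChildren"] and a["Married"]) or a["HighIncome"]

# DNF1-style: simple conjunctions, one short term per reason for acceptance.
dnf1 = [{"HasChildren": False, "Married": True}, {"HighIncome": True}]

# DNF3-style (MODS): one disjunct per full model, mentioning every variable.
dnf3 = [dict(zip(VARS, bits))
        for bits in product([False, True], repeat=len(VARS))
        if target(dict(zip(VARS, bits)))]

def eval_dnf(dnf, a):
    # a DNF holds iff some term has all its literals satisfied by the assignment
    return any(all(a[v] == val for v, val in term.items()) for term in dnf)

assignments = [dict(zip(VARS, bits)) for bits in product([False, True], repeat=3)]
assert all(target(a) == eval_dnf(dnf1, a) == eval_dnf(dnf3, a) for a in assignments)
print(len(dnf1), len(dnf3))  # prints "2 5": the MODS form enumerates every model
```

      <p>Even with three variables the MODS form is more than twice as long, which previews why DNF3 formulas in the study tended to be the hardest to read.</p>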
    </sec>
    <sec id="sec-3">
      <title>3. Experimental Evaluation</title>
      <p>Method. We had 6 concept definitions of different complexities. For each concept, we
constructed four formulas, one for each of the formats (DNF1, DNF2, DNF3 and Tooth expression).
We thus obtained 24 distinct concept definitions. Each participant was shown twelve formulas
corresponding to concept definitions, randomly shuffled.</p>
      <p>We had two distinct online questionnaires, one for the DNFs and one for Tooth expressions.
The questionnaire was run in a controlled environment (i.e., in a classroom) and contained
an introductory and an experimental phase. In the introductory phase, subjects were shown
a short description of either DNFs or the Tooth operator, and of how its semantics is determined.
The experimental phase was subdivided into two tasks: classification and inspection. In
these tasks the participants were presented with six formulas corresponding to one of the two
representations. In the classification task, subjects were asked to decide if a certain combination
of literals is an instance of a given formula (e.g., Given the formula F1 := (¬A ⊓ B) ⊔ C. If
x satisfies ¬A, B, and ¬C, then x is an instance of F1). In the inspection task, participants had to
decide on the truth value of a particular statement, referring to whether some given conditions of an
instance are necessary for the instance to belong to a given class (e.g., Given the formula F1 :=
(¬A ⊓ B) ⊔ C. Having B is necessary for being classified as F1). While the former task provides
all details necessary for performing the decision, the latter only specifies whether a subset
of the features influences the decision. In these two tasks, for each formula, we recorded: (i)
Correctness of the response; (ii) Confidence in the response, as provided on a Likert scale from
1 to 7; (iii) Response time, measured from the moment the formula was presented; (iv) Perceived
formula understandability, as provided on a Likert scale from 1 to 7.</p>
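      <p>The two tasks can be mimicked programmatically. The sketch below, using the placeholder variables A, B, C from the examples above, checks a classification query directly and decides an inspection query by enumerating all models; the questionnaire itself relied on human judgments, not code:</p>

```python
# Hedged sketch of the two experimental tasks on F1 := (¬A ⊓ B) ⊔ C.
from itertools import product

def f1(a, b, c):
    return (not a and b) or c

# Classification task: is a fully specified combination of literals an instance of F1?
classification_answer = f1(False, True, False)   # the instance satisfies ¬A, B, ¬C

# Inspection task: is a condition necessary, i.e. does it hold in every model of F1?
def is_necessary(position, value):
    models = [m for m in product([False, True], repeat=3) if f1(*m)]
    return all(m[position] == value for m in models)

print(classification_answer)   # True: the disjunct (¬A ⊓ B) is satisfied
print(is_necessary(1, True))   # False: A=False, B=False, C=True is also a model of F1
```

      <p>The contrast mirrors the text: classification needs only the one given assignment, while inspection implicitly quantifies over all models of the formula.</p>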
      <p>58 participants volunteered to take part in the experiment. We had two groups of students:
33 students with a background in computer science and 25 students with a background in
philosophy. Each group repeated the questionnaire twice, once using DNFs and once using Tooth
expressions. In the analysis, we will denote these groups as GroupI and GroupII, respectively.
Results. When looking at the two groups together, we observed a significant influence
(p &lt; .0001) of Tooth expressions on the time of response within both tasks, showing that when
using Tooth operators respondents carried out the tasks more quickly. This suggests that
Tooth expressions are more cognitively friendly than standard DNFs.</p>
      <p>When looking at the two groups separately (Table 1), the percentages of correct answers are
slightly different when using DNFs and Tooth operators, but this difference is not statistically
significant. Thus, we can conclude that the type of formula used does not have any significant
effects or interactions on the accuracy of responses. Tooth operators yielded faster responses
in both groups. This seems to suggest that having more compact information, as in the case
of Tooth operators, could speed up the human decision-making process. Interestingly, faster
decision-making can yield more correct responses, but surprisingly it is not
always associated with the highest perceived understandability and confidence. Respondents
with a computer science background were more confident with DNFs and perceived them as
more understandable than Tooth operators. On the contrary, respondents with a background
in philosophy found Tooth operators more understandable and were more confident in
their answers when using Tooth operators. This behaviour can be explained by the fact that
computer scientists were introduced to logic and DNF formulas in their curricula, but not to
Tooth operators. Thus, being more proficient in DNFs, they did not face the ‘learning curve’
of understanding a new representation formalism such as Tooth operators. Respondents with a
background in philosophy, on the other hand, studied neither DNFs nor Tooth operators. From
this study, we can conclude that Tooth operators are a better representation for users who are
not familiar with logic, and with DNFs in particular.</p>
      <p>When looking at the results for the different DNFs versus the Tooth operator, we observed that simpler DNF
formats, namely DNF1 and DNF2, yielded more accurate responses. Tooth operators perform
better compared to DNF3. This is expected, since formulas in DNF3 format tend to be very long.
DNF1 and DNF2 performed similarly in our study. This is expected, since they are quite similar
in length. As far as time is concerned, we still observe that Tooth operators are faster than any
of the DNF formats. The ‘interformat’ analysis seems to suggest that DNF1 and the Tooth operator
have quite similar understandability, from both the performance and the
subjective point of view. On the other hand, DNF2 and DNF3 required longer response times
and were perceived as less understandable than Tooth operators.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion and Future Works</title>
      <p>
        In this paper, we compared the interpretability of Tooth expressions and a standard logical
formalism, i.e., DNFs, through a user study. In the study, we asked users to carry out two
distinct tasks, namely classification and inspection (see Section 3), using Tooth expressions
and DNFs. The interpretability of Tooth expressions and DNFs was measured through
human-grounded metrics [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], namely accuracy of the responses, time of response, confidence in the
responses, and perceived understandability.
      </p>
      <p>We observed that Tooth expressions were generally faster to process, leading to a lower time of
response. This was observed across all the different DNF formats considered in the study. Moreover,
Tooth expressions were perceived as more understandable than DNFs in the inspection task
(suggesting that they are better suited to tasks that benefit from a more compact representation
of knowledge). The same was not generally observed in the classification task. Whilst the
time of response was much lower for Tooth expressions than DNFs and the percentage of
correct responses was almost the same for Tooth expressions and DNFs, the confidence in
the reply and the perceived understandability were higher in the case of DNF formulas. By
distinguishing different DNF formats, we observed that longer DNFs (e.g., DNF3) were perceived
as less understandable than Tooth expressions. This result was also affected by the background
of the respondents. Tooth operators, in particular, resulted in better performances and in a
higher level of perceived understandability for users who were not familiar with logic.</p>
      <p>The results obtained open several directions for future work. Firstly, we plan a second user
study, where both Tooth expressions and DNFs are translated into natural language. Secondly,
we plan to compare decision trees and Tooth expressions [15].</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Weyde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Besold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Moscoso del Prado Martín</surname>
          </string-name>
          ,
          <article-title>Using ontologies to enhance human understandability of global post-hoc explanations of black-box models</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>296</volume>
          (
          <year>2021</year>
          ). doi: https://doi.org/10.1016/j.artint.2021.103471.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Doshi-Velez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Towards a rigorous science of interpretable machine learning</article-title>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Darwiche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Marquis</surname>
          </string-name>
          ,
          <article-title>A knowledge compilation map</article-title>
          ,
          <source>J. Artif. Intell. Res</source>
          .
          <volume>17</volume>
          (
          <year>2002</year>
          )
          <fpage>229</fpage>
          -
          <lpage>264</lpage>
          . doi: 10.1613/jair.989.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Booth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Muise</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>Evaluating the interpretability of the knowledge compilation map</article-title>
          , in: S. Kraus (Ed.),
          <source>Proc. of IJCAI</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>5801</fpage>
          -
          <lpage>5807</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          , G. Righetti,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Masolo</surname>
          </string-name>
          ,
          <article-title>A toothful of concepts: Towards a theory of weighted concept combination</article-title>
          , in: M.
          <string-name>
            <surname>Simkus</surname>
          </string-name>
          , G. E. Weddell (Eds.),
          <source>Proceedings of the 32nd International Workshop on Description Logics</source>
          , Oslo, Norway, June 18-21,
          <year>2019</year>
          , volume
          <volume>2373</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          , G. Righetti,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <article-title>On knowledge dependence in weighted description logic</article-title>
          , in: D.
          <string-name>
            <surname>Calvanese</surname>
          </string-name>
          , L. Iocchi (Eds.),
          <source>GCAI 2019. Proceedings of the 5th Global Conference on Artificial Intelligence</source>
          , Bozen/Bolzano, Italy,
          <fpage>17</fpage>
          -19
          <source>September</source>
          <year>2019</year>
          , volume
          <volume>65</volume>
          of EPiC Series in Computing, EasyChair,
          <year>2019</year>
          , pp.
          <fpage>68</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <article-title>Perceptron connectives in knowledge representation</article-title>
          , in: C.
          <string-name>
            <surname>M. Keet</surname>
          </string-name>
          , M. Dumontier (Eds.),
          <source>Knowledge Engineering and Knowledge Management - 22nd International Conference, EKAW</source>
          <year>2020</year>
          , Bolzano, Italy,
          <source>September 16-20</source>
          ,
          <year>2020</year>
          , Proceedings, volume
          <volume>12387</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2020</year>
          , pp.
          <fpage>183</fpage>
          -
          <lpage>193</lpage>
          . doi: 10.1007/978-3-030-61244-3_13.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E.</given-names>
            <surname>Rosch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. B.</given-names>
            <surname>Lloyd</surname>
          </string-name>
          ,
          <article-title>Cognition and categorization</article-title>
          (
          <year>1978</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Masolo</surname>
          </string-name>
          ,
          <article-title>Pink panthers and toothless tigers: three problems in classification</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Cangelosi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lieto</surname>
          </string-name>
          (Eds.),
          <source>Proc. of the 7th International Workshop on Artificial Intelligence and Cognition</source>
          , volume
          <volume>2483</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Masolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <article-title>Concept combination in weighted logic</article-title>
          , in: E. M. Sanfilippo, al. (Eds.),
          <source>Proc. of the Joint Ontology Workshops 2021 Episode VII</source>
          , volume
          <volume>2969</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Weyde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Besold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>del Prado</surname>
          </string-name>
          <string-name>
            <surname>Martín</surname>
          </string-name>
          , Trepan Reloaded:
          <article-title>A Knowledge-driven Approach to Explaining Black-box Models</article-title>
          ,
          <source>in: Proceedings of the 24th European Conference on Artificial Intelligence</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>2457</fpage>
          -
          <lpage>2464</lpage>
          . doi: 10.3233/FAIA200378.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Huysmans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dejaeger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanthienen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Baesens</surname>
          </string-name>
          ,
          <article-title>An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models</article-title>
          ,
          <source>Decis. Support Syst</source>
          .
          <volume>51</volume>
          (
          <year>2011</year>
          )
          <fpage>141</fpage>
          -
          <lpage>154</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H.</given-names>
            <surname>Allahyari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lavesson</surname>
          </string-name>
          ,
          <article-title>User-oriented assessment of classification model understandability</article-title>
          ,
          <source>in: SCAI 2011 Proc.,</source>
          volume
          <volume>227</volume>
          , IOS Press,
          <year>2011</year>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>19</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <article-title>Perceptron operators that count</article-title>
          , in: M.
          <string-name>
            <surname>Homola</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Ryzhikov</surname>
            ,
            <given-names>R. A</given-names>
          </string-name>
          .
          <string-name>
            <surname>Schmidt</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 34th International Workshop on Description Logics (DL</source>
          <year>2021</year>
          )
          <article-title>part of Bratislava Knowledge September (BAKS</article-title>
          <year>2021</year>
          ), Bratislava, Slovakia, September 19th to 22nd, 2021, volume
          <volume>2954</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org, 2021.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] R. Confalonieri, P. Galliani, O. Kutz, D. Porello, G. Righetti, N. Troquard, Towards knowledge-driven distillation and explanation of black-box models, in: R. Confalonieri, O. Kutz, D. Calvanese (Eds.), Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021) part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021, volume 2998 of CEUR Workshop Proceedings, CEUR-WS.org, 2021.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>