                         Semantic Explanations of Classifiers through the
                         Ontology-Based Data Management Paradigm
                         (Extended Abstract)
                         Laura Papi, Gianluca Cima, Marco Console and Maurizio Lenzerini
                         Department of Computer, Control and Management Engineering, Sapienza University of Rome, Via Ariosto 25, 00185 Rome, Italy


                                     Abstract
                                     One of the main challenges in modern AI systems is to explain the decisions of complex machine learning models,
                                     and recent years have seen a burgeoning of novel approaches. These approaches often rely on some structural
                                     components of the models under consideration, e.g., the set of features used for the classification task. As a
                                     result, explanations provided by these approaches are expressed in terms of the sub-symbolic information and,
                                     therefore, they are hard to interpret for users. In this paper, we argue that, in order to foster interpretability, these
                                     explanations should be expressed in terms of the knowledge that users possess about the underlying application
                                     domain rather than in terms of the sub-symbolic components of the model. To this end, our first contribution is the
                                     illustration of a novel formal framework for explaining the decisions of machine learning classifiers grounded on
                                     the Ontology-Based Data Management paradigm. Within this framework, explanations are defined by logical
                                     formulae using the symbols that an ontology defines and, as such, they possess a well-defined semantics. As a
                                     second contribution, we provide an algorithm that computes the best explanations that can be expressed in the
                                     class of conjunctive queries.

                                     Keywords
                                     Ontology-Based Data Management, Machine Learning Classifiers, Explainable AI.




                         1. Introduction
                         Classifiers form a prominent family of modern AI systems. Intuitively, a classifier is a system used to
                         predict whether an object belongs to a specific class given a set of its relevant attributes [1]. Due to the
                         nature of the techniques involved, the behavior of classifiers is often regarded as opaque by end users
                         [2], and several techniques have been proposed to elucidate it [2]. An important notion in this context
                         is that of local explanations, i.e., answers to the question of why a given object is assigned to a specific class.
                         Concretely, these explanations usually consist of a set of properties of the given object that dictate the
                         behavior of the classifier expressed in terms of the raw data attributes used to operate it [3, 4, 5, 6].
                            While explanations based on raw data attributes may convey some information to AI Experts, it
                         is often hard for general users to understand their meaning. This is especially true in the typical
                         machine learning scenario where attributes are the results of a complex process of feature selection
                         and carry little to no meaning by themselves. The goal of our work is to define a novel framework to
                         express explanations using conceptual properties of the scenario of interest that are not limited by the
                         data attributes used by the classifier.
                         Our framework is based on the notion of mappings, well known to the AI community and widely
                         used in the context of Information Integration [7] and Ontology-Based Data Management [8]. These
                         mappings define the relation between the objects of the world that are relevant for a classifier and a
                         set of conceptual notions that are relevant for the application domain. To formalize these conceptual
                         notions, our framework makes use of ontologies that formalize the application domain. Combining
                         domain ontologies and mappings is a well-established approach to lift information about raw data to the

                             DL 2024: 37th International Workshop on Description Logics, June 18–21, 2024, Bergen, Norway
                          Email: laup.97@gmail.com (L. Papi); cima@diag.uniroma1.it (G. Cima); console@diag.uniroma1.it (M. Console);
                          lenzerini@diag.uniroma1.it (M. Lenzerini)
                          ORCID: 0009-0003-2281-9500 (L. Papi); 0000-0003-1783-5605 (G. Cima); 0009-0004-5526-019X (M. Console);
                          0000-0003-2875-6187 (M. Lenzerini)
                                     Β© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


conceptual level [9, 10, 11, 12]. In our framework, these combinations, called ontological specifications,
are used to formalize the relation between the classifier whose behavior we want to explain and the
notions that users understand. We then use ontological specifications to provide a local explanation of a
classifier expressed at the conceptual level of their ontologies via their mappings. In this way, we obtain
explanations expressed as logical formulae over the symbols of the ontology and grounded on a formal
semantics.
   In this context, the contribution of this paper is the following. Firstly, we present the framework of
ontological specifications together with a suitable notion of explanation (Section 2). Secondly, when
ontologies and mappings are expressed in reasonably expressive languages, we study the computational
complexity of verifying whether a given formula is an explanation. Finally, we present a general
algorithm for the computation of best explanations (Section 3).


2. Formal Framework
We proceed to present our framework for semantic explanations of ML models. Assume a possibly
infinite set βˆ† of elements that we call instances. Intuitively, βˆ† is the set of all elements that the
instance space of an ML model in our framework may contain. Observe that instances are not
yet characterized by their attributes as it is customary in learning algorithms. To bridge this gap, we
further assume a countably infinite set A of unary function symbols that we call the set of attribute
symbols. To each π‘Žπ‘– ∈ A, we associate a surjective function π‘Žπ‘–sem : βˆ† β†’ π’Ÿπ‘– that we call the semantics
of π‘Žπ‘– . Whenever the co-domain of π‘Žπ‘–sem is finite, we say that π‘Žπ‘– is a finite attribute. Intuitively, a pair
𝒦 = βŸ¨βˆ†, A⟩ provides a formal background to instance space elements and, for this reason, we refer to
it as a data layer.
   A classifier for βˆ† is a function 𝛾 from βˆ† to {0, 1}. Usually, classifiers operate on a restricted set of
attributes of the input instances. To capture this property, we say that a classifier 𝛾 operates over a set
of attributes 𝐴 βŠ† A if, for every pair 𝑑, 𝑑′ ∈ βˆ†, the fact that π‘Žπ‘– (𝑑) = π‘Žπ‘– (𝑑′ ), for each π‘Žπ‘– ∈ 𝐴, implies
𝛾(𝑑) = 𝛾(𝑑′ ). We will call 𝐴 relevant attributes for 𝛾. A classifier 𝛾 for βˆ† is a 𝒦-classifier if there exists
a unique and finite set of relevant attributes 𝐴 βŠ† A for 𝛾.
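As an executable sketch of these notions (the attribute names, domains, and decision rule below are hypothetical, not taken from the paper), the following classifier operates over the attribute set {π‘Ž1, π‘Ž2}: any two instances that agree on π‘Ž1 and π‘Ž2 necessarily receive the same decision.

```python
# Data-layer sketch: instances are opaque identifiers (here: integers),
# and each attribute symbol has a semantics, i.e. a total function
# from instances to the attribute's domain.
a_sem = {
    "a1": lambda d: d % 7,                       # a1: Delta -> {0,...,6} (finite)
    "a2": lambda d: "y" if d % 2 == 0 else "n",  # a2: Delta -> {y, n}   (finite)
}

def gamma(d):
    """A K-classifier operating over {a1, a2}: its decision is a
    function of a1(d) and a2(d) alone."""
    return 1 if a_sem["a1"](d) >= 3 and a_sem["a2"](d) == "y" else 0

# Instances 4 and 18 agree on both attributes (4 % 7 == 18 % 7, both even),
# so the classifier is forced to agree on them as well.
assert gamma(4) == gamma(18)
```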
   Let π’Ÿ be the set of all possible values that an attribute in A may take, i.e., π’Ÿ = 𝑖 π’Ÿπ‘– . We assume
two countably infinite sets F and C of function symbols and relation symbols, respectively. For each
𝑓𝑖 ∈ F with arity 𝑛, the semantics of 𝑓𝑖 is a function 𝑓𝑖sem : π’Ÿπ‘› β†’ π’Ÿ. Similarly, for each 𝑅𝑖 ∈ C with
arity 𝑛, the semantics of 𝑅𝑖 is a relation 𝑅𝑖sem βŠ† π’Ÿπ‘› . Intuitively, A, F, and C will form the terms of our
declarative language. Given a countably infinite set of variables 𝒱, the set π‘‡π‘’π‘Ÿπ‘šπ‘ π’¦ (F, C) is the set of
all the expressions of the following forms: 𝑑, with 𝑑 ∈ βˆ†, π‘Ž(π‘₯), with π‘Ž ∈ A and π‘₯ ∈ 𝒱, or 𝑓 (𝑑1 , . . . , 𝑑𝑛 ),
with 𝑓 a function symbol of arity 𝑛 in F and 𝑑1 , . . . , 𝑑𝑛 ∈ π‘‡π‘’π‘Ÿπ‘šπ‘ π’¦ (F, C). The language ℒ𝒦 (F, C) is
defined as the set of all first-order formulae that can be expressed using terms in π‘‡π‘’π‘Ÿπ‘šπ‘ π’¦ (F, C). The
semantics of ℒ𝒦 (F, C) is defined as customary using 𝑑, π‘Žsem , and 𝑓 sem to interpret 𝑑 ∈ βˆ†, π‘Ž ∈ A and
𝑓 ∈ F, respectively. Given πœ™ ∈ ℒ𝒦 (F, C) with free variables π‘₯Β― and a function 𝑣 : 𝒱 β†’ βˆ†, we write
𝑣 |= πœ™ to say that the formula obtained from πœ™ by replacing each π‘₯ ∈ π‘₯Β― with 𝑣(π‘₯) is true.
   Assume a countably infinite set of predicate symbols P disjoint from C. A mapping assertion from
ℒ𝒦 (F, C) to P is an expression of the form βŸ¨πœ™(π‘₯), πœ“(π‘₯)⟩ where πœ™(π‘₯) is a formula in ℒ𝒦 (F, C) with one
free variable π‘₯ and πœ“(π‘₯) is a first-order formula over P with the single free variable π‘₯. A mapping from
ℒ𝒦 (F, C) to P is a finite set of mapping assertions from ℒ𝒦 (F, C) to P. Intuitively, mappings define the
connection between the instances in the data layer and the predicates in P. To express such a connection,
we use ontological specifications.
   Formally, an ontological specification from ℒ𝒦 (F, C) to P (simply, ontological specification) is a pair
π’ͺ = βŸ¨π‘€, 𝑇 ⟩ where 𝑇 is a first-order theory over P and 𝑀 is a mapping from ℒ𝒦 (F, C) to P. The
semantics of an ontological specification is defined in terms of its models. An interpretation for P (simply,
interpretation) is a first-order logic interpretation ℐ for the symbols of P whose domain is βˆ†. Given
a mapping assertion π‘š = βŸ¨πœ™, πœ“βŸ©, we say that ℐ satisfies π‘š, if, for every function 𝑣 : 𝒱 β†’ βˆ†, 𝑣 |= πœ™
implies 𝑣, ℐ |= πœ“. A model for π’ͺ is an interpretation ℐ such that ℐ satisfies 𝑇 and ℐ satisfies π‘š, for
each π‘š ∈ 𝑀 . We use π‘šπ‘œπ‘‘(π’ͺ) for the set of all models of ℐ.
  With ontological specifications in place, we are now ready to formalize our notion of explanation.
Let π’ͺ be an ontological specification as above and πœ™ a first-order formula over P. We use π‘π‘’π‘Ÿπ‘‘(πœ™, π’ͺ)
for the set {𝑗 ∈ βˆ† | 𝑗 ∈ πœ™β„ , for each ℐ ∈ π‘šπ‘œπ‘‘(π’ͺ)}. Assume now a classifier 𝛾 for the data layer
𝒦 and an instance 𝑖 ∈ βˆ†. A Weak Ontology-Based eXplanation (w-OBX) for the decision of 𝛾 over 𝑖
based on π’ͺ is a first-order formula πœ‚(π‘₯) over the alphabet P with one free variable π‘₯ and the following
properties: 𝑖 ∈ π‘π‘’π‘Ÿπ‘‘(πœ‚, π’ͺ) and 𝛾(𝑖) = 𝛾(𝑗) for each 𝑗 ∈ π‘π‘’π‘Ÿπ‘‘(πœ‚, π’ͺ). The next definition formalizes
the notion of explanation we are looking for.

Definition 1. Let 𝐿 be a language of first-order formulae over P. A w-OBX πœ‚ for the decision of 𝛾 over 𝑖
based on π’ͺ is the best Ontology-Based Explanation in 𝐿 (𝐿-OBX) if πœ‚ ∈ 𝐿 and there exists no w-OBX πœ‚ β€²
for the decision of 𝛾 over 𝑖 based on π’ͺ such that πœ‚ β€² ∈ 𝐿 and π‘π‘’π‘Ÿπ‘‘(πœ‚, π’ͺ) ⊊ π‘π‘’π‘Ÿπ‘‘(πœ‚ β€² , π’ͺ).

Example 1. Consider a scenario where a classifier 𝛾 is used to provide movie recommendations. The
relevant attributes for 𝛾 are π‘π‘Ÿ (Critic Rating) and π‘π‘Ÿ (Public Rating) with domain [0, 10], and 𝑙𝑏 (Low
Budget) and 𝑓𝑐 (Famous Cast) with domain {𝑦, 𝑛}. Moreover, 𝛾(𝑖) = 1 if and only if 𝑖 satisfies the following
ℒ𝒦 (F, C) formula: (1/2 Β· (π‘π‘Ÿ(π‘₯) + π‘π‘Ÿ(π‘₯)) β‰₯ 5) ∧ (𝑓𝑐(π‘₯) = 𝑛). Intuitively, 𝛾 recommends a movie if it
received a good average score from critics and public and it stars non-famous actors. Suppose that we want
to explain the decision 𝛾(𝑖) = 1 taken by 𝛾 for the movie 𝑖 such that π‘π‘Ÿ(𝑖) = 10, π‘π‘Ÿ(𝑖) = 10, 𝑙𝑏(𝑖) = 𝑦,
and 𝑓𝑐(𝑖) = 𝑛. For the explanation, we want to use the ontological symbols 𝑃𝐴 (Publicly Acclaimed),
𝐢𝐴 (Critically Acclaimed), 𝐡𝑀 (B-Movie), and 𝐢𝑀 (Cult Movie). Let 𝑇 and 𝑀 be, respectively, the
TBox {𝑃𝐴 βŠ‘ 𝐢𝑀, 𝐢𝐴 βŠ‘ 𝐢𝑀} and the mapping {π‘š1 , π‘š2 , π‘š3 } with π‘š1 = ⟨(π‘π‘Ÿ(π‘₯) = 10), 𝑃𝐴(π‘₯)⟩,
π‘š2 = ⟨(π‘π‘Ÿ(π‘₯) = 10), 𝐢𝐴(π‘₯)⟩, and π‘š3 = ⟨(𝑙𝑏(π‘₯) = 𝑦) ∧ (𝑓𝑐(π‘₯) = 𝑛), 𝐡𝑀(π‘₯)⟩. Let π’ͺ = βŸ¨π‘€, 𝑇 ⟩. It
is easy to verify that the following are all w-OBXs for the decision of 𝛾 over 𝑖 based on π’ͺ: (𝑃𝐴(π‘₯) ∧ 𝐡𝑀(π‘₯)),
(𝐢𝐴(π‘₯) ∧ 𝐡𝑀(π‘₯)), and (𝐢𝑀(π‘₯) ∧ 𝐡𝑀(π‘₯)). However, (𝐢𝑀(π‘₯) ∧ 𝐡𝑀(π‘₯)) is the only CQ-OBX for the
decision of 𝛾 over 𝑖 based on π’ͺ, where CQ is the language of conjunctive queries.
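Since all attribute domains in the example can be made finite (taking integer ratings), Example 1 can be rendered executable. The sketch below is a simplification: it approximates the certain answers π‘π‘’π‘Ÿπ‘‘(Β·, π’ͺ) by closing the mapped atoms under the two TBox inclusions, which is sound for the atomic CQs of this small example but is not the paper's general procedure.

```python
from itertools import product

# Instance space: all combinations of the four relevant attributes,
# with integer ratings for finiteness.
instances = [
    {"cr": cr, "pr": pr, "lb": lb, "fc": fc}
    for cr, pr, lb, fc in product(range(11), range(11), "yn", "yn")
]

def gamma(i):
    # gamma(i) = 1 iff (1/2)(cr(i) + pr(i)) >= 5 and fc(i) = n
    return 1 if (i["cr"] + i["pr"]) / 2 >= 5 and i["fc"] == "n" else 0

# Mapping assertions m1, m2, m3: data-level condition -> ontology predicate.
mapping = {
    "PA": lambda i: i["pr"] == 10,
    "CA": lambda i: i["cr"] == 10,
    "BM": lambda i: i["lb"] == "y" and i["fc"] == "n",
}
tbox = {"PA": {"CM"}, "CA": {"CM"}}  # PA subsumed by CM, CA subsumed by CM

def atoms(i):
    """Mapped atoms of instance i, closed under the TBox inclusions."""
    base = {p for p, phi in mapping.items() if phi(i)}
    return base | {c for p in base for c in tbox.get(p, set())}

def cert(eta):
    """Certain answers of the CQ given as a set of concept atoms."""
    return [i for i in instances if eta <= atoms(i)]

def is_w_obx(eta, i):
    c = cert(eta)
    return i in c and all(gamma(j) == gamma(i) for j in c)

i = {"cr": 10, "pr": 10, "lb": "y", "fc": "n"}
assert is_w_obx({"PA", "BM"}, i) and is_w_obx({"CA", "BM"}, i) and is_w_obx({"CM", "BM"}, i)
# CM(x) ∧ BM(x) has a strictly larger certain-answer set than PA(x) ∧ BM(x):
assert len(cert({"CM", "BM"})) > len(cert({"PA", "BM"}))
```

Under these assumptions, the three conjunctions above all pass the w-OBX test, while any single atom fails it (e.g., 𝐡𝑀 alone admits certain answers with low ratings and decision 0).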


3. Some Preliminary Technical Results
Let β„’βˆ’π’¦ (F, C) be the quantifier-free subset of ℒ𝒦 (F, C) that uses only finite attributes. In what follows,
we assume that 𝑖) classifiers and formulae πœ™(π‘₯) in the left-hand side of mapping assertions are defined
in β„’βˆ’π’¦ (F, C); 𝑖𝑖) the right-hand side of mapping assertions allows only for formulae of the form 𝐡(π‘₯),
βˆƒπ‘¦.𝑅(π‘₯, 𝑦), and βˆƒπ‘¦.𝑅(𝑦, π‘₯); 𝑖𝑖𝑖) theories over P are formulated in DL-Liteβ„› [13]; and 𝑖𝑣) the language
for expressing explanations is the class CQ of conjunctive queries. In this scenario, we consider the
following computational problems. Verification: given a classifier 𝛾, an instance 𝑖 ∈ βˆ†, an ontological
specification π’ͺ, and a CQ πœ‚(π‘₯) over the alphabet P, check whether πœ‚ is a w-OBX of the decision of 𝛾
over 𝑖 based on π’ͺ. Computation: given 𝛾, 𝑖, and π’ͺ, compute all the CQ-OBXs of the decision of 𝛾 over 𝑖 based on π’ͺ.

Theorem 1. Verification is coNP-complete.

   Next, we provide a technique to return the set of all CQ-OBXs of the decision of 𝛾 over 𝑖 based on π’ͺ
(clearly, if two formulae π‘ž(π‘₯) and π‘ž β€² (π‘₯) are such that π‘π‘’π‘Ÿπ‘‘(π‘ž, π’ͺ) = π‘π‘’π‘Ÿπ‘‘(π‘ž β€² , π’ͺ), then we say that they
are equivalent w.r.t. π’ͺ and treat them as the same formula).
   Given an instance 𝑖 ∈ βˆ† and a mapping 𝑀 from ℒ𝒦 (F, C) to P in our considered scenario, we
denote by 𝑀 (𝑖) the set of atoms obtained by chasing the instance 𝑖 w.r.t. 𝑀 , i.e., 𝑀 (𝑖) contains the atom
𝐡(𝑖) (resp. βˆƒπ‘…(𝑖), βˆƒπ‘…βˆ’ (𝑖)) if and only if there exists a mapping assertion of the form βŸ¨πœ™(π‘₯), 𝐡(π‘₯)⟩
(resp. βŸ¨πœ™(π‘₯), βˆƒπ‘¦.𝑅(π‘₯, 𝑦)⟩, βŸ¨πœ™(π‘₯), βˆƒπ‘¦.𝑅(𝑦, π‘₯)⟩) in 𝑀 such that πœ™(𝑖) is true. Furthermore, given a set
𝑀 (𝑖) of atoms as above, we denote by πœ‚π‘€π‘– (π‘₯) the CQ obtained by conjoining all the atoms in 𝑀 (𝑖),
where we select a free variable π‘₯ and each atom of the form 𝐡(𝑖) is replaced with 𝐡(π‘₯), and each atom
of the form βˆƒπ‘…(𝑖) (resp. βˆƒπ‘…βˆ’ (𝑖)) is replaced with βˆƒπ‘¦.𝑅(π‘₯, 𝑦) (resp. βˆƒπ‘¦.𝑅(𝑦, π‘₯)) in which 𝑦 is always a
fresh existential variable. Given an instance 𝑖 ∈ βˆ† and an ontological specification π’ͺ = βŸ¨π‘€, 𝑇 ⟩ in our
scenario, we now prove that πœ‚π‘€π‘– (π‘₯) is actually the smallest (up to equivalence w.r.t. π’ͺ) CQ such that
𝑖 ∈ π‘π‘’π‘Ÿπ‘‘(πœ‚π‘€π‘– , π’ͺ), in the sense that there exists no other CQ π‘ž(π‘₯) for which 𝑖 ∈ π‘π‘’π‘Ÿπ‘‘(π‘ž, π’ͺ) and there is
an instance 𝑗 ∈ βˆ† satisfying 𝑗 ∈ π‘π‘’π‘Ÿπ‘‘(πœ‚π‘€π‘– , π’ͺ) and 𝑗 ̸∈ π‘π‘’π‘Ÿπ‘‘(π‘ž, π’ͺ).

                                                                                  𝑖 (π‘₯) is the
Proposition 1. Given an instance 𝑖 ∈ βˆ† and an ontology π’ͺ = βŸ¨π‘€, 𝑇 ⟩, we have that πœ‚π‘€
                                                             𝑖
smallest (up to equivalence w.r.t. π’ͺ) CQ such that 𝑖 ∈ π‘π‘’π‘Ÿπ‘‘(πœ‚π‘€ , π’ͺ).
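The construction of πœ‚π‘€π‘– (π‘₯) can be sketched as follows (the atom encoding and the predicate names are hypothetical; each role atom introduces its own fresh existential variable):

```python
def eta(mapped_atoms):
    """Build the CQ eta_M^i(x) from the atoms in M(i), given as pairs
    (predicate name, kind) with kind in {concept, exists, exists-inv}."""
    conjuncts, fresh = [], 0
    for name, kind in sorted(mapped_atoms):
        if kind == "concept":          # B(i)    ->  B(x)
            conjuncts.append(f"{name}(x)")
        else:                          # ∃R(i)   ->  ∃y.R(x,y)
            fresh += 1                 # ∃R⁻(i)  ->  ∃y.R(y,x)
            y = f"y{fresh}"
            args = f"x,{y}" if kind == "exists" else f"{y},x"
            conjuncts.append(f"∃{y}.{name}({args})")
    return " ∧ ".join(conjuncts)

print(eta({("PA", "concept"), ("worksWith", "exists")}))
# prints: PA(x) ∧ ∃y1.worksWith(x,y1)
```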

   Given an instance 𝑖 ∈ βˆ† and an ontological specification π’ͺ = βŸ¨π‘€, 𝑇 ⟩ in our considered scenario, we denote by
𝑀π’ͺ (𝑖) the set of atoms obtained from 𝑀 (𝑖) by adding the atom 𝐢(𝑖) if and only if there exists an atom
of the form 𝐢 β€² (𝑖) ∈ 𝑀 (𝑖) and 𝑇 |= 𝐢 β€² βŠ‘ 𝐢, where both 𝐢 and 𝐢 β€² can be any basic DL-Liteβ„› concept,
i.e., concepts of the form 𝐡, βˆƒπ‘…, and βˆƒπ‘…βˆ’ with 𝐡 and 𝑅 in P.

Theorem 2. Let 𝛾 be a classifier, 𝑖 ∈ βˆ† be an instance, π’ͺ = βŸ¨π‘€, 𝑇 ⟩ be an ontological specification, and
πœ‚(π‘₯) be a CQ-OBX of the decision of 𝛾 over 𝑖 w.r.t. π’ͺ. We have that πœ‚(π‘₯) is equivalent w.r.t. π’ͺ to a query
of the form πœ‚π‘€β€²π‘– (π‘₯), where 𝑀′ βŠ† 𝑀π’ͺ (𝑖).

   Actually, the above results suggest a naive algorithm to compute the set of all the CQ-OBXs.
Specifically, it is enough to consider all the possible πœ‚π‘€β€²π‘– (π‘₯), where 𝑀′ βŠ† 𝑀π’ͺ (𝑖), and check that
1) πœ‚π‘€β€²π‘– (π‘₯) is a w-OBX of the decision of 𝛾 over 𝑖 based on π’ͺ, and 2) there is no other 𝑀′′ βŠ† 𝑀π’ͺ (𝑖)
for which πœ‚π‘€β€²β€²π‘– (π‘₯) is a w-OBX of the decision of 𝛾 over 𝑖 based on π’ͺ and the formula PerfectRef(πœ‚π‘€β€²π‘– , π’ͺ)
is strictly contained in the formula PerfectRef(πœ‚π‘€β€²β€²π‘– , π’ͺ), meaning that it can be the case that
π‘π‘’π‘Ÿπ‘‘(πœ‚π‘€β€²π‘– , π’ͺ) ⊊ π‘π‘’π‘Ÿπ‘‘(πœ‚π‘€β€²β€²π‘– , π’ͺ).
Here, PerfectRef denotes the algorithm used for rewriting CQs w.r.t. DL-Liteβ„› TBoxes [13].
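Under the same simplifying assumptions used to make Example 1 executable (finite attribute domains, and certain answers computed via TBox closure rather than via PerfectRef), the naive enumeration can be sketched as follows, where the caller-supplied `atoms(j)` plays the role of 𝑀π’ͺ (𝑗):

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def cq_obxs(i, gamma, atoms, instances):
    """All CQ-OBXs for the decision of gamma over i, by naive enumeration:
    1) keep every subset eta of M_O(i) whose certain answers all receive
       the decision gamma(i) (the w-OBX test; i itself is always certain), and
    2) keep only those with a set-maximal certain-answer set."""
    def cert(eta):
        return {k for k, j in enumerate(instances) if set(eta) <= atoms(j)}

    candidates = []
    for eta in powerset(atoms(i)):
        c = cert(eta)
        if all(gamma(instances[k]) == gamma(i) for k in c):  # w-OBX test
            candidates.append((set(eta), c))
    return [eta for eta, c in candidates
            if not any(c < c2 for _, c2 in candidates)]      # maximality
```

With the data of Example 1 and under the stated assumptions, this enumeration returns only {𝐢𝑀, 𝐡𝑀}, i.e., the CQ-OBX 𝐢𝑀(π‘₯) ∧ 𝐡𝑀(π‘₯).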


Acknowledgments
This work has been supported by MUR under the PNRR project FAIR (PE0000013) and by the EU under
the H2020-EU.2.1.1 project TAILOR (grant id. 952215).


References
 [1] S. Shalev-Shwartz, S. Ben-David, Understanding Machine Learning - From Theory
     to Algorithms, Cambridge University Press, 2014. URL: http://www.cambridge.org/
     de/academic/subjects/computer-science/pattern-recognition-and-machine-learning/
     understanding-machine-learning-theory-algorithms.
 [2] A. B. Arrieta, N. D. RodrΓ­guez, J. D. Ser, A. Bennetot, S. Tabik, A. Barbado, S. GarcΓ­a, S. Gil-Lopez,
     D. Molina, R. Benjamins, R. Chatila, F. Herrera, Explainable artificial intelligence (XAI): concepts,
     taxonomies, opportunities and challenges toward responsible AI, Inf. Fusion 58 (2020) 82–115.
     URL: https://doi.org/10.1016/j.inffus.2019.12.012. doi:10.1016/J.INFFUS.2019.12.012.
 [3] M. C. Cooper, J. Marques-Silva, Tractability of explaining classifier decisions, Artif. Intell. 316
     (2023) 103841. URL: https://doi.org/10.1016/j.artint.2022.103841. doi:10.1016/J.ARTINT.2022.
     103841.
 [4] A. Shih, A. Choi, A. Darwiche, A symbolic approach to explaining bayesian network classifiers,
     in: J. Lang (Ed.), Proceedings of the Twenty-Seventh International Joint Conference on Artificial
     Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, ijcai.org, 2018, pp. 5103–5111. URL:
     https://doi.org/10.24963/ijcai.2018/708. doi:10.24963/IJCAI.2018/708.
 [5] A. Darwiche, Three modern roles for logic in AI, in: D. Suciu, Y. Tao, Z. Wei (Eds.), Proceedings of
     the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS
     2020, Portland, OR, USA, June 14-19, 2020, ACM, 2020, pp. 229–243. URL: https://doi.org/10.1145/
     3375395.3389131. doi:10.1145/3375395.3389131.
 [6] Y. Izza, J. Marques-Silva, On explaining random forests with SAT, in: Z. Zhou (Ed.), Proceedings
     of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event
     / Montreal, Canada, 19-27 August 2021, ijcai.org, 2021, pp. 2584–2591. URL: https://doi.org/10.
     24963/ijcai.2021/356. doi:10.24963/IJCAI.2021/356.
 [7] M. Lenzerini, Data integration: A theoretical perspective., in: Proceedings of the Twenty-First
     ACM SIGACT SIGMOD SIGART Symposium on Principles of Database Systems (PODS 2002),
     2002, pp. 233–246.
 [8] M. Lenzerini, Ontology-based data management, in: Proceedings of the Twentieth International
     Conference on Information and Knowledge Management (CIKM 2011), 2011, pp. 5–6. doi:10.
     1145/2063576.2063582.
 [9] G. Cima, M. Console, M. Lenzerini, A. Poggi, A review of data abstraction, Frontiers Artif. Intell. 6
     (2023). URL: https://doi.org/10.3389/frai.2023.1085754. doi:10.3389/FRAI.2023.1085754.
[10] F. Croce, G. Cima, M. Lenzerini, T. Catarci, Ontology-based explanation of classifiers, in: A. Poulo-
     vassilis, D. Auber, N. Bikakis, P. K. Chrysanthis, G. Papastefanatos, M. A. Sharaf, N. Pelekis,
     C. Renso, Y. Theodoridis, K. Zeitouni, T. Cerquitelli, S. Chiusano, G. Vargas-Solar, B. Omidvar-
     Tehrani, K. Morik, J. Renders, D. Firmani, L. Tanca, D. Mottin, M. Lissandrini, Y. Velegrakis (Eds.),
     Proceedings of the Workshops of the EDBT/ICDT 2020 Joint Conference, Copenhagen, Den-
     mark, March 30, 2020, volume 2578 of CEUR Workshop Proceedings, CEUR-WS.org, 2020. URL:
     https://ceur-ws.org/Vol-2578/PIE3.pdf.
[11] T. Catarci, M. Scannapieco, M. Console, C. Demetrescu, My (fair) big data, in: J. Nie, Z. Obradovic,
     T. Suzumura, R. Ghosh, R. Nambiar, C. Wang, H. Zang, R. Baeza-Yates, X. Hu, J. Kepner, A. Cuz-
     zocrea, J. Tang, M. Toyoda (Eds.), 2017 IEEE International Conference on Big Data (IEEE BigData
     2017), Boston, MA, USA, December 11-14, 2017, IEEE Computer Society, 2017, pp. 2974–2979. URL:
     https://doi.org/10.1109/BigData.2017.8258267. doi:10.1109/BIGDATA.2017.8258267.
[12] G. Cima, A. Poggi, M. Lenzerini, The notion of abstraction in ontology-based data management,
     Artificial Intelligence 323 (2023) 103976.
[13] D. Calvanese, G. De Giacomo, D. Lembo, M. Lenzerini, R. Rosati, Tractable reasoning and efficient
     query answering in description logics: The DL-Lite family, Journal of Automated Reasoning 39
     (2007) 385–429.