=Paper=
{{Paper
|id=Vol-2969/paper32-CAOS
|storemode=property
|title=A Multipreference Semantics from Common Sense Reasoning to Neural Network Models: An Overview
|pdfUrl=https://ceur-ws.org/Vol-2969/paper32-CAOS.pdf
|volume=Vol-2969
|authors=Laura Giordano,Valentina Gliozzi,Daniele Theseider Dupré
|dblpUrl=https://dblp.org/rec/conf/jowo/0001GD21
}}
==A Multipreference Semantics from Common Sense Reasoning to Neural Network Models: An Overview==
A Multipreference Semantics from Common Sense Reasoning to Neural Network Models: an Overview

Laura Giordano¹, Valentina Gliozzi², Daniele Theseider Dupré¹

¹ DISIT - Università del Piemonte Orientale, Viale Michel 11, I-15121, Alessandria, Italy
² Dipartimento di Informatica, Università degli Studi di Torino, Corso Svizzera 185, I-10149, Torino, Italy

CAOS 2021: 5th Workshop on Cognition And OntologieS, held at JOWO 2021: Episode VII The Bolzano Summer of Knowledge, September 11-18, 2021, Bolzano, Italy.
laura.giordano@uniupo.it (L. Giordano); valentina.gliozzi@unito.it (V. Gliozzi); dtd@uniupo.it (D. Theseider Dupré)

Abstract: In this short paper we report on a "concept-wise" multipreference semantics for weighted conditionals and its use to provide a logical interpretation of some neural network models, Self-Organising Maps (SOMs) and Multilayer Perceptrons (MLPs). For MLPs, a deep network can be regarded as a conditional knowledge base, in which the synaptic connections correspond to weighted conditionals.

Keywords: Common Sense Reasoning, Preferential semantics, Typicality in Description Logics, Neural Network models

1. Introduction

Preferential approaches to common sense reasoning [1, 2, 3, 4, 5, 6, 7] have their roots in conditional logics [8, 9], and have recently been extended to Description Logics (DLs) to deal with inheritance with exceptions in ontologies, by allowing non-strict forms of inclusion, called defeasible or typicality inclusions. Different preferential semantics [10, 11] and closure constructions [12, 13, 14, 15, 16, 17, 18] have been proposed for such defeasible DLs. In this paper we report on a concept-wise multipreference semantics [19], which has recently been introduced as a semantics of ranked knowledge bases in a lightweight DL to account for preferences with respect to different concepts, and which has been proposed as a semantics for some neural network models. We have considered both an unsupervised model, Self-Organising Maps (SOMs) [20], which are regarded as a psychologically and biologically plausible neural network model, and a supervised one, Multilayer Perceptrons (MLPs) [21].

The learning algorithms in the two cases are quite different, but our aim is to capture, through a semantic interpretation, the behavior of the network resulting after training, not to deal with the learning process itself. We will see that this can be accomplished in both cases in a similar way, based on the multipreference semantics. In both cases, considering the domain of all input stimuli presented to the network during training (or in the generalization phase), one can build a semantic interpretation describing the input-output behavior of the network as a multipreference interpretation, where preferences are associated to concepts. For SOMs, the learned categories 𝐶1, . . . , 𝐶𝑛 are regarded as concepts, so that a preference relation (over the domain of input stimuli) is associated to each category [22, 23]. For MLPs, each neuron in the deep network (including hidden neurons) can be associated with a concept and with a preference relation on the domain [24].

The idea is that, given two input stimuli 𝑥 and 𝑦, and two categories/concepts, e.g., Horse and Zebra, the neural model can assign to 𝑥 a degree of membership in the category Horse which is higher than the degree of membership of 𝑦, so that 𝑥 can be regarded as being more typical than 𝑦 as a horse.

For SOMs, 𝑥 is preferred to 𝑦 with respect to a category 𝐶𝑖 (written 𝑥 <𝐶𝑖 𝑦) when 𝑟𝑑(𝑥, 𝐶𝑖) < 𝑟𝑑(𝑦, 𝐶𝑖), i.e., 𝑥 is more typical than 𝑦 with respect to category 𝐶𝑖 if its relative distance from category 𝐶𝑖 is lower than the relative distance of 𝑦. This preferential model can be exploited to learn or validate conditional knowledge from empirical data, by verifying conditional formulas over the preferential interpretation constructed from the SOM. Both a two-valued and a fuzzy semantics have been considered [23]. In both cases, model checking can be used for the verification of inclusions (either defeasible inclusions or fuzzy inclusion axioms) over the respective models of the SOM (for instance: do the most typical penguins belong to the category Bird with a degree of membership of at least 0.8?). Starting from the fuzzy interpretation of the SOM, a probabilistic account can also be given, based on Zadeh's probability of fuzzy events [30].
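To make the SOM construction concrete, the following is a minimal sketch (not the authors' code) of how such a preference relation could be computed. It assumes that 𝑟𝑑(𝑥, 𝐶𝑖) is the distance of stimulus 𝑥 from the closest unit of category 𝐶𝑖, normalized by a category-specific maximal distance; the precise definition of the relative distance is the one given in [22, 23]. The function names and the toy weight vectors below are purely illustrative.

```python
import numpy as np

def relative_distance(x, category_units, max_dist):
    """Distance of stimulus x from the closest map unit of a category,
    normalized by the maximal distance observed for that category
    (one plausible reading of 'relative distance')."""
    dists = [np.linalg.norm(x - w) for w in category_units]
    return min(dists) / max_dist

def preferred(x, y, category_units, max_dist):
    """x <_Ci y: x is more typical than y for category Ci
    when its relative distance from Ci is lower."""
    return (relative_distance(x, category_units, max_dist)
            < relative_distance(y, category_units, max_dist))

# Hypothetical weight vectors of the units mapped to the category Horse.
horse_units = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
x, y = np.array([0.85, 0.15]), np.array([0.5, 0.5])
print(preferred(x, y, horse_units, max_dist=1.0))  # True: x is more typical as a horse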
4. A Preferential Interpretation of Multilayer Perceptrons

For MLPs, a deep network is considered after the training phase, when the synaptic weights have been learned. The input-output behaviour of the network can be captured in a similar way as for SOMs, by constructing a preferential interpretation over the domain ∆ of the input stimuli considered during training (or generalization) [24]. Each neuron 𝑘 of interest can be associated with a concept 𝐶𝑘 and, for each distinguished concept 𝐶𝑗, a preference relation <𝐶𝑗 is defined over the domain ∆ based on the activity values 𝑦𝑗(𝑣) of neuron 𝑗 for each input 𝑣 ∈ ∆. In a similar way, a fuzzy interpretation of the network can be constructed over the domain ∆, as well as a fuzzy-multipreference semantics.

All three semantics allow the input-output behavior of the network to be captured by interpretations built over a set of input stimuli through simple constructions, which exploit the activity level of neurons for the stimuli. In particular, for the fuzzy-multipreference interpretations, the idea [24] is to extend a fuzzy DL interpretation with a set of induced preferences. In a fuzzy DL interpretation 𝐼, the interpretation of a concept 𝐶ℎ is a mapping 𝐶ℎ^𝐼 : ∆ → [0, 1], associating to each 𝑥 ∈ ∆ the degree of membership of 𝑥 in 𝐶ℎ. The activation value of unit ℎ for a stimulus 𝑥 in the network (assumed to be in the interval [0, 1]) is taken as the degree of membership of 𝑥 in concept 𝐶ℎ. The fuzzy interpretation also induces an ordering <𝐶ℎ on the domain ∆, for each 𝐶ℎ, to be regarded as the preference relation associated to concept 𝐶ℎ. This allows a notion of typicality to be defined in a fuzzy interpretation. Let us call ℳ^{𝑓,∆}_𝒩 the fuzzy multipreference interpretation built from the network 𝒩 over a domain ∆ of input stimuli.

As for SOMs, logical properties of the neural network (both typicality properties and fuzzy axioms) can then be verified by model checking over such an interpretation. Evaluating properties involving hidden units might be of interest, although their meaning is usually unknown. In the well-known family example by Hinton [31], one may want to verify whether, normally, given an old Person 1 and the relationship Husband, Person 2 would also be old, i.e., whether T(𝑂𝑙𝑑1 ⊓ 𝐻𝑢𝑠𝑏𝑎𝑛𝑑) ⊑ 𝑂𝑙𝑑2 is satisfied. Here, concept 𝑂𝑙𝑑1 (resp., 𝑂𝑙𝑑2) is associated to a (known, in this case) hidden unit for Person 1 (resp., Person 2), while Husband is associated to an input unit. If the properties of interest involve some specific units, only the concepts associated to those units may be considered in the language used to build the interpretation.
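The following is a minimal sketch of this kind of check, not the authors' implementation. It assumes that the activation of the unit associated with each concept has already been recorded for every stimulus (the `activations` dictionary and the stimuli p1, p2, p3 are hypothetical), that conjunction is interpreted with the minimum t-norm (one common choice in fuzzy DLs), and that the typical instances of a concept are the stimuli with the highest non-zero membership degree, i.e., the minimal elements of the induced preference. The 0.8 threshold mirrors the penguin question above and is just an example.

```python
def membership(activations, concepts, x):
    """Degree of membership of stimulus x in the conjunction of `concepts`
    (activation of the corresponding units, combined with min)."""
    return min(activations[c][x] for c in concepts)

def typical_instances(activations, concepts, domain):
    """Stimuli with the highest non-zero membership degree in the conjunction."""
    degrees = {x: membership(activations, concepts, x) for x in domain}
    best = max(degrees.values())
    return [x for x, d in degrees.items() if d == best and d > 0]

def check_typicality_inclusion(activations, lhs, rhs, domain, threshold=0.8):
    """Model-check T(lhs_1 ⊓ ... ⊓ lhs_n) ⊑ rhs over the interpretation:
    do all typical instances of the conjunction belong to rhs with degree >= threshold?"""
    return all(activations[rhs][x] >= threshold
               for x in typical_instances(activations, lhs, domain))

# Hypothetical activation data for the family example: three stimuli p1, p2, p3.
domain = ["p1", "p2", "p3"]
activations = {
    "Old1":    {"p1": 0.90, "p2": 0.90, "p3": 0.20},
    "Husband": {"p1": 1.00, "p2": 1.00, "p3": 1.00},
    "Old2":    {"p1": 0.85, "p2": 0.90, "p3": 0.30},
}
print(check_typicality_inclusion(activations, ["Old1", "Husband"], "Old2", domain))  # True
```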
All three kinds of interpretations considered above for MLPs describe the input-output behavior of the network. However, the fuzzy multipreference interpretation ℳ^{𝑓,∆}_𝒩 described above can also be proven to be a model of the neural network 𝒩 in a logical sense, by mapping the multilayer network into a weighted conditional knowledge base.

4.1. Weighted 𝒜ℒ𝒞 Knowledge Bases

In this section, we briefly recall the definition of weighted conditional knowledge bases through an example, and give some hints about the two-valued and fuzzy multipreference semantics, referring to [24] for a detailed description for ℰℒ. A weighted 𝒜ℒ𝒞 knowledge base 𝐾 over a set 𝒞 = {𝐶1, . . . , 𝐶𝑘} of distinguished 𝒜ℒ𝒞 concepts is a tuple ⟨𝒯, 𝒯𝐶1, . . . , 𝒯𝐶𝑘, 𝒜⟩, where the TBox 𝒯 is a set of 𝒜ℒ𝒞 inclusion axioms, the ABox 𝒜 is a set of 𝒜ℒ𝒞 assertions and, for each distinguished concept 𝐶𝑖 ∈ 𝒞, 𝒯𝐶𝑖 is a set of weighted typicality inclusions of the form T(𝐶𝑖) ⊑ 𝐷, each with a positive or negative weight (a real number). In the fuzzy case, 𝒯 and 𝒜 contain fuzzy axioms.

Consider the weighted knowledge base 𝐾 = ⟨𝒯, 𝒯𝐵𝑖𝑟𝑑, 𝒯𝑃𝑒𝑛𝑔𝑢𝑖𝑛, 𝒜⟩ over the set of distinguished concepts 𝒞 = {Bird, Penguin}, with empty ABox and with 𝒯 containing the inclusions Penguin ⊑ Bird and Black ⊓ Grey ⊑ ⊥. The weighted TBox 𝒯𝐵𝑖𝑟𝑑 contains the following weighted defeasible inclusions:

(𝑑1) T(Bird) ⊑ Fly, +20
(𝑑2) T(Bird) ⊑ ∃has_Wings.⊤, +50
(𝑑3) T(Bird) ⊑ ∃has_Feathers.⊤, +50

𝒯𝑃𝑒𝑛𝑔𝑢𝑖𝑛 contains the defeasible inclusions:

(𝑑4) T(Penguin) ⊑ Fly, −70
(𝑑5) T(Penguin) ⊑ Black, +50
(𝑑6) T(Penguin) ⊑ Grey, +10

The meaning is that a bird normally has wings, has feathers and flies, but having wings and feathers (both with weight 50) is more plausible for a bird than flying (weight 20), although flying is still regarded as plausible. For a penguin, flying is not plausible (inclusion 𝑑4 has a negative weight, −70), while being black or being grey are plausible properties of prototypical penguins; indeed, 𝑑5 and 𝑑6 have positive weights, resp. 50 and 10, so that being black is more plausible than being grey.

A two-valued semantics for weighted 𝒜ℒ𝒞 knowledge bases can be defined by developing a semantic closure construction in the same spirit as Lehmann's lexicographic closure [32], but closer to Kern-Isberner's semantics of c-representations [7, 33], in which the world ranks are generated as a sum of the impacts of the falsified conditionals. Here, the (positive or negative) weights of the satisfied defaults are summed, but in a concept-wise manner, so as to determine the plausibility of a domain element with respect to each concept. In this way, the modular structure of the knowledge base can be taken into account. More precisely, for a domain element 𝑥 in ∆ and a distinguished concept 𝐶𝑖, the weight 𝑊𝑖(𝑥) of 𝑥 wrt 𝐶𝑖 is defined as the sum of the weights 𝑤^𝑖_ℎ of the typicality inclusions T(𝐶𝑖) ⊑ 𝐷𝑖,ℎ in 𝒯𝐶𝑖 satisfied by 𝑥 (and is −∞ when 𝑥 is not an instance of 𝐶𝑖). From the weights 𝑊𝑖(𝑥), the preference relation ≤𝐶𝑖 is defined by letting, for 𝑥, 𝑦 ∈ ∆: 𝑥 ≤𝐶𝑖 𝑦 iff 𝑊𝑖(𝑥) ≥ 𝑊𝑖(𝑦). The higher the weight of 𝑥 wrt 𝐶𝑖, the higher its typicality relative to 𝐶𝑖.
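To illustrate the construction just described, the following is a small sketch (illustrative, not the authors' implementation) computing 𝑊𝑃𝑒𝑛𝑔𝑢𝑖𝑛(𝑥) for the example knowledge base above and deriving the induced preference. The two domain elements and their property sets are hypothetical: a black, non-flying penguin gets weight +50, a grey, flying one gets 10 − 70 = −60, so the former comes out as more typical.

```python
# Each typicality inclusion T(C_i) ⊑ D is encoded as a (property, weight) pair;
# W_i(x) sums the weights of the inclusions whose right-hand side x satisfies,
# and is -inf when x is not an instance of C_i.
T_penguin = [("Fly", -70), ("Black", +50), ("Grey", +10)]

def weight(x_properties, is_instance, typicality_inclusions):
    if not is_instance:
        return float("-inf")
    return sum(w for prop, w in typicality_inclusions if prop in x_properties)

# Hypothetical domain elements: x is a black, non-flying penguin; y is a grey, flying one.
W_x = weight({"Black"}, True, T_penguin)        # +50
W_y = weight({"Grey", "Fly"}, True, T_penguin)  # 10 - 70 = -60
print(W_x, W_y, W_x >= W_y)  # 50 -60 True: x <=_Penguin y, i.e. x is at least as typical
```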
This closure construction defines the preferences <𝐶𝑖 (strict modular partial orders) and allows concept-wise multipreference interpretations to be defined as in Section 2. In the fuzzy case, the fuzzy logic combination functions are used on complex concepts to compute the 𝑊𝑖(𝑥)'s and to determine the associated preference relations. To guarantee that the preferences determined from the knowledge base are coherent with the fuzzy interpretation of concepts, a notion of coherent (fuzzy) multipreference interpretation (cf𝑚-interpretation) is also introduced [24].

4.2. MLPs as Conditional Knowledge Bases

Let us describe how the multilayer network 𝒩 can be mapped to a weighted conditional knowledge base 𝐾^𝒩, i.e., to a set of weighted typicality inclusions. The idea is to consider, for each unit 𝑘, all the units 𝑗1, . . . , 𝑗𝑚 whose output signals are the input signals of unit 𝑘, with synaptic weights 𝑤𝑘,𝑗1, . . . , 𝑤𝑘,𝑗𝑚. Let 𝐶𝑘 be the concept name associated to unit 𝑘 and 𝐶𝑗1, . . . , 𝐶𝑗𝑚 be the concept names associated to units 𝑗1, . . . , 𝑗𝑚. One can define, for unit 𝑘, a set 𝒯𝐶𝑘 of 𝑚 typicality inclusions, with their associated weights, as follows:

T(𝐶𝑘) ⊑ 𝐶𝑗1 with weight 𝑤𝑘,𝑗1, . . . , T(𝐶𝑘) ⊑ 𝐶𝑗𝑚 with weight 𝑤𝑘,𝑗𝑚.

The network 𝒩 can then be mapped to a conditional knowledge base 𝐾^𝒩 containing, for each neuron 𝑘, a set of typicality inclusions 𝒯𝐶𝑘 as defined above. Let us consider the fuzzy multipreference interpretation ℳ^{𝑓,∆}_𝒩 built from 𝒩 over a domain ∆ of input stimuli, as described above. Let us further assume that, in the construction, all units are considered and a concept 𝐶𝑘 is introduced in the language for each unit 𝑘. It has been proven [24] that the interpretation ℳ^{𝑓,∆}_𝒩 is a cf𝑚-model of the knowledge base 𝐾^𝒩, under some condition on the activation functions in 𝒩. In particular, the properties that are entailed from 𝐾^𝒩 are satisfied by ℳ^{𝑓,∆}_𝒩, for any choice of the input stimuli in the domain ∆.
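A minimal sketch of this mapping is given below; it is illustrative, not the authors' code. It assumes the feedforward network is given as a list of weight matrices (weights[l][k][j] being the synaptic weight from unit j of layer l to unit k of layer l+1), and it generates one concept name per unit using a hypothetical naming scheme C_<layer>_<index>.

```python
def mlp_to_weighted_kb(weights):
    """Extract, for each unit k, the typicality inclusions T(C_k) ⊑ C_j
    with weight w_{k,j}, one for each incoming connection of unit k."""
    kb = []
    for l, matrix in enumerate(weights):
        for k, row in enumerate(matrix):
            head = f"C_{l+1}_{k}"          # concept for unit k of layer l+1
            for j, w in enumerate(row):
                body = f"C_{l}_{j}"        # concept for unit j of layer l
                kb.append((f"T({head}) ⊑ {body}", w))
    return kb

# Toy network: one layer of weights, 3 inputs feeding 2 units.
weights = [[[0.7, -1.2, 0.4],
            [0.1, 0.9, -0.3]]]
for inclusion, w in mlp_to_weighted_kb(weights):
    print(inclusion, "with weight", w)
```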
5. Discussion and Conclusions

In [22, 23, 24] we have studied the relationships between a preferential logic of common sense reasoning and two different neural network models, Self-Organising Maps and Multilayer Perceptrons, showing that a multipreference semantics can be used to provide a logical model of the neural network behavior after training. Such a model can be used to learn or to validate conditional knowledge from the empirical data used for training and generalization, by model checking of logical properties. A two-valued KLM-style preferential interpretation with multiple preferences and a fuzzy semantics have been considered, based on the idea of associating preference relations to categories (in the case of SOMs) or to neurons (for Multilayer Perceptrons). Given the diversity of the two models, we would expect that a similar approach can be extended to other neural network models and learning approaches. The plausibility of the concept-wise multipreference semantics is supported by the fact that self-organising maps are considered psychologically and biologically plausible neural network models. This multipreference semantics has been shown to satisfy the KLM properties in the two-valued case [19], and most of the KLM properties in the fuzzy case, depending on their reformulation and on the fuzzy combination functions considered [34].

Much work has been devoted in recent years to the combination of neural networks and symbolic reasoning [35, 36, 37], leading to the definition of new computational models [38, 39, 40, 41] and to extensions of logic programming languages with neural predicates [42, 43]. Among the earliest systems combining logical reasoning and neural learning are the Knowledge-Based Artificial Neural Network (KBANN) [44] and the Connectionist Inductive Learning and Logic Programming (CILP) [45] systems, as well as Penalty Logic [46], a non-monotonic reasoning formalism used to establish a correspondence with symmetric connectionist networks. The relationships between normal logic programs under the stable model semantics [47] and neural networks have been investigated by Garcez and Gabbay [45, 35] and by Hitzler et al. [48].

The correspondence between neural network models and fuzzy systems was first investigated by Kosko in his seminal work [49]. We have adopted the usual way of viewing concepts in fuzzy DLs [50, 51, 52], and we have used fuzzy concepts within a multipreference semantics, based on a semantic closure construction in the line of Lehmann's semantics for lexicographic closure [32] and strictly related to Kern-Isberner's c-representations [7, 33]. Furthermore, we have adopted a preferential semantics with multiple preferences, in order to make it concept-wise: each distinguished concept 𝐶𝑖 has its own set 𝒯𝐶𝑖 of (weighted) typicality inclusions and an associated preference relation <𝐶𝑖. This allows a preference relation to be associated to each category (e.g., in the preferential interpretation of SOMs) or to each neuron (in a deep network). A combination of fuzzy logic with the preferential semantics of conditional knowledge bases was first studied by Casini and Straccia [53], who developed a rational closure construction for propositional Gödel logic.

For Multilayer Perceptrons, it has been proven [24] that a deep network can itself be regarded as a weighted conditional knowledge base (under some conditions on the activation functions). This opens the possibility of adopting conditional logics as a basis for neuro-symbolic integration. While a trained neural network can classify new stimuli quickly (that is, it can do instance checking), other reasoning services such as satisfiability checking, entailment and model checking are missing. These capabilities would be needed for tasks combining empirical and symbolic knowledge, such as, for instance: proving whether the network satisfies some (strict or conditional) properties; learning the weights of a conditional knowledge base from empirical data; and combining the defeasible inclusions extracted from a neural network with other defeasible or strict inclusions for inference.

To make these tasks possible, the development of proof methods for such logics is a preliminary step. In the two-valued case, multipreference entailment is decidable for weighted ℰℒ⊥ knowledge bases, and proof methods for reasoning with weighted conditional knowledge bases in ℰℒ⊥ can, for instance, exploit Answer Set Programming (ASP) encodings of the concept-wise multipreference semantics [54], using asprin [55] to achieve defeasible reasoning, an approach already considered for ranked ℰℒ⁺⊥ knowledge bases [19].
In the fuzzy case, an open problem is whether the notion of fuzzy-multipreference entailment is decidable (even for the small fragment of ℰℒ without roles), and under which choice of fuzzy logic combination functions. Undecidability results for fuzzy description logics with general inclusion axioms [56, 57, 58] motivate the investigation of decidable approximations of fuzzy-multipreference entailment.

An interesting issue is whether the mapping of deep neural networks to weighted conditional knowledge bases can be extended to more complex neural network models, such as Graph Neural Networks [38], or whether different logical formalisms and semantics would be needed. Another issue is whether the fuzzy-preferential interpretation of neural networks can be related to the probabilistic interpretation of neural networks based on statistical AI. This is an interesting question, as the fuzzy DL interpretations we have considered in [24], where concepts are regarded as fuzzy sets, also suggest a probabilistic account based on Zadeh's probability of fuzzy events [30]. We refer to [23] for some results concerning a probabilistic interpretation of SOMs and to [59] for a preliminary account for MLPs.

Acknowledgments

We thank the anonymous referees for their helpful comments and suggestions. This research has been partially supported by INDAM-GNCS Projects 2019 and 2020.

References

[1] D. Gabbay, Theoretical foundations for non-monotonic reasoning in expert systems, in: K. R. Apt (Ed.), Logics and Models of Concurrent Systems, volume 13 of NATO ASI Series (Series F: Computer and Systems Sciences), Springer, 1985.
[2] D. Makinson, General theory of cumulative inference, in: Non-Monotonic Reasoning, 2nd International Workshop, Grassau, FRG, June 13-15, 1988, Proceedings, 1988, pp. 1–18.
[3] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, 1988.
[4] S. Kraus, D. Lehmann, M. Magidor, Nonmonotonic reasoning, preferential models and cumulative logics, Artificial Intelligence 44 (1990) 167–207.
[5] D. Lehmann, M. Magidor, What does a conditional knowledge base entail?, Artificial Intelligence 55 (1992) 1–60.
[6] S. Benferhat, C. Cayrol, D. Dubois, J. Lang, H. Prade, Inconsistency management and prioritized syntax-based entailment, in: Proc. IJCAI'93, Chambéry, France, August 28 - September 3, Morgan Kaufmann, 1993, pp. 640–647.
[7] G. Kern-Isberner, Conditionals in Nonmonotonic Reasoning and Belief Revision - Considering Conditionals as Agents, volume 2087 of LNCS, Springer, 2001.
[8] D. Lewis, Counterfactuals, Basil Blackwell Ltd, 1973.
[9] D. Nute, Topics in conditional logic, Reidel, Dordrecht, 1980.
[10] L. Giordano, V. Gliozzi, N. Olivetti, G. L. Pozzato, Preferential Description Logics, in: LPAR 2007, volume 4790 of LNAI, Springer, Yerevan, Armenia, 2007, pp. 257–272.
[11] K. Britz, J. Heidema, T. Meyer, Semantic preferential subsumption, in: G. Brewka, J. Lang (Eds.), KR 2008, AAAI Press, Sidney, Australia, 2008, pp. 476–484.
[12] G. Casini, U. Straccia, Rational Closure for Defeasible Description Logics, in: T. Janhunen, I. Niemelä (Eds.), JELIA 2010, volume 6341 of LNCS, Springer, Helsinki, 2010, pp. 77–90.
[13] G. Casini, T. Meyer, I. J. Varzinczak, K. Moodley, Nonmonotonic Reasoning in Description Logics: Rational Closure for the ABox, in: DL 2013, volume 1014 of CEUR Workshop Proceedings, 2013, pp. 600–615.
[14] L. Giordano, V. Gliozzi, N. Olivetti, G. L. Pozzato, Semantic characterization of rational closure: From propositional logic to description logics, Artif. Intell. 226 (2015) 1–33.
[15] P. A. Bonatti, L. Sauro, On the logical properties of the nonmonotonic description logic DLN, Artif. Intell. 248 (2017) 85–111.
[16] G. Casini, U. Straccia, T. Meyer, A polynomial time subsumption algorithm for nominal safe ELO⊥ under rational closure, Inf. Sci. 501 (2019) 588–620.
[17] K. Britz, G. Casini, T. Meyer, K. Moodley, U. Sattler, I. Varzinczak, Principles of KLM-style defeasible description logics, ACM Trans. Comput. Log. 22 (2021) 1:1–1:46.
[18] L. Giordano, V. Gliozzi, A reconstruction of multipreference closure, Artif. Intell. 290 (2021).
[19] L. Giordano, D. Theseider Dupré, An ASP approach for reasoning in a concept-aware multipreferential lightweight DL, Theory and Practice of Logic Programming 20(5) (2020) 751–766.
[20] T. Kohonen, M. Schroeder, T. Huang (Eds.), Self-Organizing Maps, Third Edition, Springer Series in Information Sciences, Springer, 2001.
[21] S. Haykin, Neural Networks - A Comprehensive Foundation, Pearson, 1999.
[22] L. Giordano, V. Gliozzi, D. Theseider Dupré, On a plausible concept-wise multipreference semantics and its relations with self-organising maps, in: F. Calimeri, S. Perri, E. Zumpano (Eds.), CILC 2020, Rende, Italy, October 13-15, 2020, volume 2710 of CEUR Workshop Proceedings, 2020, pp. 127–140.
[23] L. Giordano, V. Gliozzi, D. Theseider Dupré, A conditional, a fuzzy and a probabilistic interpretation of self-organising maps, CoRR abs/2103.06854 (2021). URL: https://arxiv.org/abs/2103.06854.
[24] L. Giordano, D. Theseider Dupré, Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model, in: Proc. 17th European Conf. on Logics in AI, JELIA 2021, May 17-20, volume 12678 of LNCS, Springer, 2021, pp. 225–242.
[25] A. Adadi, M. Berrada, Peeking inside the black-box: A survey on explainable artificial intelligence (XAI), IEEE Access 6 (2018) 52138–52160.
[26] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, D. Pedreschi, A survey of methods for explaining black box models, ACM Comput. Surv. 51 (2019) 93:1–93:42.
[27] A. B. Arrieta, N. D. Rodríguez, J. D. Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, F. Herrera, Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI, Inf. Fusion 58 (2020) 82–115.
[28] V. Gliozzi, K. Plunkett, Grounding Bayesian accounts of numerosity and variability effects in a similarity-based framework: the case of self-organising maps, Journal of Cognitive Psychology 31 (2019).
[29] L. Giordano, V. Gliozzi, D. Theseider Dupré, Towards a conditional interpretation of self-organising maps, in: Italian Workshop on Explainable Artificial Intelligence, XAI.it, November 25-26, 2020, volume 2742 of CEUR Workshop Proceedings, 2020, pp. 127–134.
[30] L. Zadeh, Probability measures of fuzzy events, J. Math. Anal. Appl. 23 (1968) 421–427.
[31] G. Hinton, Learning distributed representations of concepts, in: Proceedings 8th Annual Conference of the Cognitive Science Society, Erlbaum, Hillsdale, NJ, 1986.
[32] D. J. Lehmann, Another perspective on default reasoning, Ann. Math. Artif. Intell. 15 (1995) 61–82.
[33] G. Kern-Isberner, C. Eichhorn, Structural inference from conditional knowledge bases, Studia Logica 102 (2014) 751–769.
[34] L. Giordano, On the KLM properties of a fuzzy DL with Typicality, 2021. arXiv:2106.00390, submitted.
[35] A. S. d’Avila Garcez, K. Broda, D. M. Gabbay, Symbolic knowledge extraction from trained neural networks: A sound approach, Artif. Intell. 125 (2001) 155–207.
[36] A. S. d’Avila Garcez, L. C. Lamb, D. M. Gabbay, Neural-Symbolic Cognitive Reasoning, Cognitive Technologies, Springer, 2009.
[37] A. S. d’Avila Garcez, M. Gori, L. C. Lamb, L. Serafini, M. Spranger, S. N. Tran, Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning, FLAP 6 (2019) 611–632.
[38] L. C. Lamb, A. S. d’Avila Garcez, M. Gori, M. O. R. Prates, P. H. C. Avelar, M. Y. Vardi, Graph neural networks meet neural-symbolic computing: A survey and perspective, in: C. Bessiere (Ed.), Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, ijcai.org, 2020, pp. 4877–4884.
[39] L. Serafini, A. S. d’Avila Garcez, Learning and reasoning with logic tensor networks, in: Proc. AI*IA 2016, Genova, Italy, November 29 - December 1, volume 10037 of LNCS, Springer, 2016, pp. 334–348.
[40] P. Hohenecker, T. Lukasiewicz, Ontology reasoning with deep neural networks, J. Artif. Intell. Res. 68 (2020) 503–540.
[41] D. Le-Phuoc, T. Eiter, A. Le-Tuan, A scalable reasoning and learning approach for neural-symbolic stream fusion, in: AAAI 2021, February 2-9, AAAI Press, 2021, pp. 4996–5005.
[42] R. Manhaeve, S. Dumancic, A. Kimmig, T. Demeester, L. D. Raedt, DeepProbLog: Neural probabilistic logic programming, in: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, 2018, pp. 3753–3763.
[43] Z. Yang, A. Ishay, J. Lee, NeurASP: Embracing neural networks into answer set programming, in: C. Bessiere (Ed.), Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, ijcai.org, 2020, pp. 1755–1762.
[44] G. G. Towell, J. W. Shavlik, Knowledge-based artificial neural networks, Artif. Intell. 70 (1994) 119–165.
[45] A. S. d’Avila Garcez, G. Zaverucha, The connectionist inductive learning and logic programming system, Appl. Intell. 11 (1999) 59–77.
[46] G. Pinkas, Reasoning, nonmonotonicity and learning in connectionist networks that capture propositional knowledge, Artif. Intell. 77 (1995) 203–247.
[47] M. Gelfond, V. Lifschitz, The stable model semantics for logic programming, in: Logic Programming, Proc. of the 5th Int. Conf. and Symposium, 1988, pp. 1070–1080.
[48] P. Hitzler, S. Hölldobler, A. K. Seda, Logic programs and connectionist networks, J. Appl. Log. 2 (2004) 245–272.
[49] B. Kosko, Neural networks and fuzzy systems: a dynamical systems approach to machine intelligence, Prentice Hall, 1992.
[50] U. Straccia, Towards a fuzzy description logic for the semantic web (preliminary report), in: The Semantic Web: Research and Applications, Second European Semantic Web Conference, ESWC 2005, Heraklion, Crete, Greece, May 29 - June 1, 2005, Proceedings, volume 3532 of Lecture Notes in Computer Science, Springer, 2005, pp. 167–181.
[51] T. Lukasiewicz, U. Straccia, Managing uncertainty and vagueness in description logics for the semantic web, J. Web Semant. 6 (2008) 291–308.
[52] F. Bobillo, U. Straccia, The fuzzy ontology reasoner fuzzyDL, Knowl. Based Syst. 95 (2016) 12–34.
[53] G. Casini, U. Straccia, Towards rational closure for fuzzy logic: The case of propositional Gödel logic, in: Logic for Programming, Artificial Intelligence, and Reasoning - 19th International Conference, LPAR-19, Stellenbosch, South Africa, December 14-19, 2013, Proceedings, volume 8312 of LNCS, Springer, 2013, pp. 213–227. URL: https://doi.org/10.1007/978-3-642-45221-5_16.
[54] L. Giordano, D. Theseider Dupré, Weighted conditional ℰℒ knowledge bases with integer weights: an ASP approach, in: Int. Conf. on Logic Programming, ICLP 2021, 2021. To appear.
[55] G. Brewka, J. P. Delgrande, J. Romero, T. Schaub, asprin: Customizing answer set preferences without a headache, in: Proc. AAAI 2015, 2015, pp. 1467–1474.
[56] F. Baader, R. Peñaloza, Are fuzzy description logics with general concept inclusion axioms decidable?, in: FUZZ-IEEE 2011, IEEE International Conference on Fuzzy Systems, Taipei, Taiwan, 27-30 June, 2011, Proceedings, IEEE, 2011, pp. 1735–1742.
[57] M. Cerami, U. Straccia, On the undecidability of fuzzy description logics with GCIs with Łukasiewicz t-norm, CoRR abs/1107.4212 (2011). URL: http://arxiv.org/abs/1107.4212.
[58] S. Borgwardt, R. Peñaloza, Undecidability of fuzzy description logics, in: G. Brewka, T. Eiter, S. A. McIlraith (Eds.), Principles of Knowledge Representation and Reasoning: Proceedings of the Thirteenth International Conference, KR 2012, Rome, Italy, June 10-14, 2012, AAAI Press, 2012, pp. 232–242.
[59] L. Giordano, D. Theseider Dupré, Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model, CoRR abs/2012.13421 (2020). URL: https://arxiv.org/abs/2012.13421.