<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Neuro-symbolic Computation for XAI: Towards a Unified Model</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Giuseppe</forename><surname>Pisano</surname></persName>
							<email>g.pisano@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="department">Alma AI -Alma Mater Research Institute for Human-Centered Artificial Intelligence</orgName>
								<orgName type="institution">Alma Mater Studiorum-Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giovanni</forename><surname>Ciatto</surname></persName>
							<email>giovanni.ciatto@unibo.it</email>
							<affiliation key="aff1">
								<orgName type="department">Dipartimento di Informatica -Scienza e Ingegneria (DISI)</orgName>
								<orgName type="institution">Alma Mater Studiorum-Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roberta</forename><surname>Calegari</surname></persName>
							<email>roberta.calegari@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="department">Alma AI -Alma Mater Research Institute for Human-Centered Artificial Intelligence</orgName>
								<orgName type="institution">Alma Mater Studiorum-Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrea</forename><surname>Omicini</surname></persName>
							<email>andrea.omicini@unibo.it</email>
							<affiliation key="aff1">
								<orgName type="department">Dipartimento di Informatica -Scienza e Ingegneria (DISI)</orgName>
								<orgName type="institution">Alma Mater Studiorum-Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="department">Workshop &quot;From Objects to Agents&quot; (WOA)</orgName>
								<address>
									<addrLine>September 14-16</addrLine>
									<postCode>2020</postCode>
									<settlement>Bologna</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Neuro-symbolic Computation for XAI: Towards a Unified Model</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">8FE845B024198FD80B8BBE2FD314F7B2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T16:33+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>XAI</term>
					<term>Hybrid Systems</term>
					<term>Neural Networks</term>
					<term>Logical Constraining</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The idea of integrating symbolic and sub-symbolic approaches to make intelligent systems (IS) understandable and explainable is at the core of new fields such as neuro-symbolic computing (NSC). This work lies under the umbrella of NSC, and aims at a twofold objective. First, we present a set of guidelines for building explainable IS, which leverage logic induction and constraints to integrate symbolic and sub-symbolic approaches. Then, we reify the proposed guidelines into a case study to show their effectiveness and potential, presenting a prototype built on top of some NSC technologies.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In the last decade, we have witnessed an unprecedented spread of artificial intelligence (AI) and its related technologies <ref type="bibr" target="#b0">[1]</ref>. The fields involved are manifold, ranging from autonomous driving systems and expert systems to computer vision and reasoning systems-just to mention a few. In all the aforementioned fields, AI is enabling artificial systems to act in a more intelligent, efficient, and effective way.</p><p>Besides the many impactful achievements, some factors have emerged that can slow down the further diffusion of AI technologies. A primary concern is the trustability of intelligent systems (IS) leveraging sub-symbolic AI-i.e., exploiting approaches such as deep learning. Indeed, the resulting IS suffer from well-known problems of opaqueness, since humans typically find it very difficult to understand how sub-symbolic systems work.</p><p>The need to overcome opaqueness is one of the main goals of research in eXplainable AI (XAI) <ref type="bibr" target="#b1">[2]</ref>, essentially aimed at making AI systems more understandable and explainable. Some of the most interesting XAI techniques are deeply rooted in symbolic AI-a natural choice for building human-intelligible IS <ref type="bibr" target="#b2">[3]</ref>. In fact, symbolic systems offer a human-understandable representation of their internal knowledge and processes: so, integrating them into sub-symbolic models -to promote transparency of the resulting system -is the most prominent stimulus for new research fields such as neuro-symbolic computing (NSC) <ref type="bibr" target="#b3">[4]</ref>.</p><p>Our work lies under the umbrella of NSC, and its contribution is twofold. On the one hand, we present a set of guidelines for building explainable IS, even when they exploit sub-symbolic techniques. 
The guidelines leverage logic induction and logic constraints as the two main techniques for integrating symbolic and sub-symbolic approaches. In particular, logic induction makes it possible to extract knowledge from black-box ML-based predictors -typically, the sub-symbolic part of an IS -offering a corresponding symbolic, logical representation. Conversely, logic constraints are exploited to inject some logic knowledge into the black box, thus restricting the underlying numerical model.</p><p>On the other hand, we reify the proposed guidelines into a case study to show their effectiveness and potential. In particular, we present a prototype built on top of some promising NSC technologies. The resulting system is then assessed to verify its capability of being adjusted (i.e., debugged and fixed) in case some unexpected behaviour in its sub-symbolic part is revealed. Accordingly, we show that the prototype performs correctly w.r.t. the proposed guidelines.</p><p>The paper is structured as follows. Section 2 provides a brief overview of the field of symbolic and sub-symbolic integration. It also includes the main related works on the use of hybrid systems as a means for explainability. In Section 3, we first present the guidelines for building explainable systems; then, in Section 4 we discuss a possible instantiation of the proposed guidelines. In Section 5, we proceed with the assessment of our prototype and the discussion of results. Finally, Section 6 concludes the work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head><p>In recent years, deep learning and machine learning (ML) methods have become largely popular and successful in real-world intelligent systems. However, their use raises the issue of understanding and explaining their behaviour to humans. Neural networks in particular -the most hyped and widely adopted approach within sub-symbolic AI -mostly suffer from the problem of opaqueness: the way they obtain their results and acquire experience from data is unintelligible to humans.</p><p>One of the proposed approaches addressing the explainability problem <ref type="bibr" target="#b1">[2]</ref> is the hybridisation of symbolic and sub-symbolic techniques <ref type="bibr" target="#b2">[3]</ref>. An increasing number of authors recognise that formal logic is capable of significantly improving humans' understanding of data <ref type="bibr" target="#b4">[5]</ref>. In their view, in principle, an opaque system, combined with a symbolic model, can provide a significant result in terms of transparency. Many research efforts start from these assumptions.</p><p>Among the others, the neuro-symbolic computing (NSC) field is a very recent and promising research area whose ultimate goal is to make symbolic and sub-symbolic AI techniques effortlessly work together. Due to the freshness of the topic, however, a well-established and coherent theory of NSC is still missing. For this reason, a variety of methods have been proposed so far, focusing on a multitude of aspects and not always addressing interpretability and explainability as their major concern. 
Nevertheless, some attempts exist to categorise XAI-related works under the NSC umbrella <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b2">3]</ref>, which provide a helpful overview of the topic.</p><p>Although NSC does not explicitly include XAI among its primary goals, in this paper we borrow some ideas from the NSC field to show how the explainability of modern IS may benefit from the integration of symbolic and sub-symbolic AI. In particular, our work focuses on two main NSC sub-fields: namely, logic as a constraint, for the constraining module, and differentiable programming, for the induction module-since the proposed prototype exploits logic induction via differentiable programming <ref type="bibr" target="#b5">[6]</ref>. In short, the former line of research aims at constraining the training process of a sub-symbolic predictor in such a way that it cannot violate the superimposed constraints at runtime. As for the latter, differentiable programming is the combination of neural-network approaches with algorithmic modules in an end-to-end differentiable model, often exploiting optimisation algorithms such as gradient descent <ref type="bibr" target="#b6">[7]</ref>.</p><p>Within the scope of logic as a constraint, most approaches exploit some sort of logic formulae to constrain the behaviour of the sub-symbolic predictor-in most cases, a neural network. Each formula is vectorised -i.e., translated into a continuous function over vectors of real numbers -and exploited as a regularisation term in the loss function used for training the sub-symbolic predictor <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref>. Different strategies have been proposed to this end. For example, in <ref type="bibr" target="#b10">[11]</ref> the symbolic constraints are used to modify the network structure, incorporating them into the training process. 
In the general case, however, logic constraining can be used to fix biases or bugs in the behaviour of a sub-symbolic system, or to mitigate situations in which only poor training data is available to train a black-box system on a specific aspect.</p><p>With respect to the second research area, some works lying under the umbrella of differentiable programming fruitfully intertwine ML and inductive logic programming (ILP) <ref type="bibr" target="#b11">[12]</ref> to provide logic induction capabilities on top of sub-symbolic predictors. ILP is a well-established research area, lying at the intersection of ML and logic programming, which is strongly interrelated with NSC. An ILP system is a tool able to induce (i.e., derive) -given an encoding of some background knowledge and a set of positive and negative examples represented as logic facts -a logic program that entails all the positive and none of the negative examples. While traditionally these systems base their operation on complex algorithms as their core component <ref type="bibr" target="#b12">[13]</ref> -deeply undermining their efficiency and usability -hybrid approaches exist that leverage NSC to make the induction process more efficient <ref type="bibr" target="#b5">[6]</ref>. Furthermore, as we show in this paper, induced logic rules can be used as a means to inspect what a black-box predictor has learned-as induction makes the predictor knowledge explicit in symbolic form.</p></div>
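To give the intuition behind the ILP task just described, the following is a purely didactic Python sketch -not how NTP or any real ILP engine works. Given some hypothetical background predicates and sets of positive and negative examples, it searches for the smallest rule body that covers every positive example and no negative one.

```python
from itertools import combinations

def induce(candidates, positives, negatives):
    """Toy ILP search: find the smallest conjunction of candidate body
    literals whose coverage includes every positive example and
    excludes every negative one."""
    for size in range(1, len(candidates) + 1):
        for body in combinations(candidates, size):
            covers = lambda x, body=body: all(lit(x) for _, lit in body)
            if all(covers(p) for p in positives) and not any(covers(n) for n in negatives):
                return [name for name, _ in body]
    return None  # no conjunction of the given literals separates the examples

# hypothetical background predicates over integer examples
candidates = [
    ("even(X)", lambda x: x % 2 == 0),
    ("positive(X)", lambda x: x > 0),
    ("small(X)", lambda x: x < 10),
]
# induce a body for the rule  target(X) :- ...
rule = induce(candidates, positives=[2, 4, 6], negatives=[3, -2, 12])
print(rule)  # ['even(X)', 'positive(X)', 'small(X)']
```

Real ILP systems explore a far richer hypothesis space (variables, recursion, invented predicates); this brute-force enumeration only illustrates the entailment condition stated above.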
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Related Works</head><p>As far as the intersection of logical systems and numerical models is concerned, the main contributions come from <ref type="bibr" target="#b13">[14]</ref> and <ref type="bibr" target="#b14">[15]</ref>. Their work can be summarised in:</p><p>• usage of a knowledge base filled automatically from training data to reason about what has been learned and to provide explanations;</p><p>• adoption of logic rules to constrain the network and to correct its biases.</p><p>Although these works offer a good starting point in the search for a solution to the transparency problem, some remarks should be pointed out. First, exploiting a knowledge base obtained only from the training data is not sufficient to acquire the knowledge required to explain the entire network behaviour. That would lead to a system giving explanations according to the network's optimal functioning, without accounting for the training errors. Moreover, according to these models, the constraining part should also be driven by the rules inferred from the training data, severely limiting the potential of those techniques. Indeed, the possibility for users to impose their own rules would also give them the ability to mitigate the errors deriving from an incomplete or incorrect training set. The work presented here aims at building a model overcoming both these limitations. As for the explanations' coherence problem, the use of the black box as a data source in the logic induction process should guarantee the correct correlation between the black box itself and the derived logic theory. Furthermore, logic can be leveraged so as to combine the IS with the user's experience and knowledge, thus exploiting all the advantages of the constraining techniques.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">A NSC model for XAI</head><p>In this paper, we present a general model for explainable data-driven IS, and a set of guidelines supporting their construction. The novelty of our approach lies in the fruitful intertwining of symbolic and sub-symbolic AI, which aims at providing both predictive accuracy -through the exploitation of state-of-the-art machine learning (ML) techniques -and transparency-through the exploitation of computational logic and logic programming (LP). The proposed model, in particular, aims at overcoming the well-known limitations of ML-based AI w.r.t. interpretability. Accordingly, it leverages a number of contributions from the NSC and LP research fields, as well as two basic techniques-namely, induction and constraining.</p><p>The main idea behind our work is that IS should feature both predictive precision and interpretability. To preserve predictive precision, IS should keep exploiting high-performance, data-driven, black-box predictors such as (deep) neural networks. To overcome the interpretability-related issues, IS should couple sub-symbolic approaches with logic theories obtained by automatically extracting the sub-symbolic knowledge of black boxes into symbolic form. This would in turn enable a number of XAI-related features for IS, providing human users with the capabilities of (i) inspecting a black box -also for debugging purposes -and (ii) correcting the system behaviour by providing novel symbolic specifications.</p><p>Accordingly, in the remainder of this section, we provide further details about the desiderata which led to the definition of our model. We then provide an abstract description of our model and a set of guidelines for software engineers and developers. Finally, we provide a technological architecture to assess both the model and the guidelines.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Desiderata</head><p>Regardless of the architectural and technological choices performed by designers and developers, the IS adhering to our model are characterised by several common desiderata (D i ) w.r.t. their overall functioning and behaviour. Generally speaking, these are aimed at making IS both prediction-effective and explainable.</p><p>A key enabling point in satisfying these desiderata is knowledge representation. While ML and data-driven AI are certainly required to mine effective information from data efficiently, they soon fall short when it comes to satisfying desiderata D 2 -D 5 . This happens because they mostly leverage a distributed, sub-symbolic representation of knowledge which is hard for human beings to interpret. Therefore, to support D 2 and D 4 , we need an alternative, human-intelligible representation of the sub-symbolic model and a procedure to perform such a representation transformation. Furthermore, to support D 3 and D 5 , we also need to link symbolic and sub-symbolic representations in a bidirectional way-meaning that an inverse procedure aimed at converting symbolic information back into sub-symbolic form is needed as well.</p><p>Accordingly, the focus of our model is both on the extraction of a symbolic representation from the black-box predictor and, vice versa, on the injection of symbolic representations (constraints) into the corresponding black-box predictor. The purpose of this dichotomy is twofold: guaranteeing the comprehensibility of the black-box model for humans -as symbolic representations are to some extent inherently intelligible -and enabling debugging and correction of the black-box behaviour, in case some issue is found through inspection.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Modelling</head><p>Generally speaking, we model an explainable, hybrid IS as composed of a black box, a knowledge base (KB), and an automatic logic reasoner, as depicted in Figure <ref type="figure" target="#fig_0">1</ref>. Explainable recommendations or suggestions are provided to the end-user via the logic reasoner -based on symbolic rules and facts included in the KB by domain experts -and via black-box predictions-based on data. Accordingly, the reasoner combines several sorts of inference mechanisms (e.g. deduction and induction). Furthermore, to balance the knowledge coming from data with the domain experts' knowledge, the model exploits induction and constraining techniques to improve the black box with the logical knowledge and vice versa.</p><p>The black-box module is the core of the IS, making it capable of mining effective information from data. Any sub-symbolic model providing high predictive performance -e.g. neural networks, SVMs, generalised linear models, etc. -can be used to implement the module. This, however, may bring opaqueness-related issues. Thus, the black-box module is complemented with the other two modules to provide explanation and debugging facilities.</p><p>The reasoner module is aimed at providing explainable outcomes to the end-users. In particular, explanations are given in terms of a logic KB capable of approximating the black box. The construction of the logic KB relies on the induction capabilities offered by this module. More precisely, the outcomes generated by the black box are exploited to build a logic theory mimicking the work of the black-box predictor with the highest possible fidelity. An inductive process is then fed with the resulting extended theory. This leads to a theory containing a number of general relations, which can be exploited to provide intelligible information to the end-users. 
The described workflow supports explainability in two ways: (i) it provides a global explanation of how the black box works, in terms of general relations that must always hold;</p><p>(ii) it provides local explanations, justifying each black-box conclusion by enabling the use of deductive reasoning on the connected logical knowledge. In other words, this mechanism is what enables IS to be interpretable and debuggable by users. Finally, the KB module aims at storing the logical knowledge approximating the black-box behaviour. The knowledge can be modified by domain experts. When this is the case, it becomes of paramount importance to keep the black-box module coherent with the human-proposed edits. To this end, constraining is performed to align the black-box behaviour with the KB. This mechanism is what enables IS to be fixed by users.</p><p>In the remainder of this section, we delve deeper into the details of these mechanisms.</p></div>
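As a minimal illustration of such local explanations, the following Python sketch implements a propositional backward-chainer that justifies a single conclusion as a proof tree over the KB. The loan-related predicates are purely hypothetical and only stand in for a KB approximating some black box.

```python
def explain(goal, rules, facts):
    """Backward chaining over propositional rules (head, [body...]):
    returns a nested proof justifying `goal`, or None if unprovable."""
    if goal in facts:
        return (goal, "fact")
    for head, body in rules:
        if head == goal:
            proofs = [explain(b, rules, facts) for b in body]
            if all(p is not None for p in proofs):
                return (goal, proofs)
    return None

# hypothetical KB approximating a loan-scoring black box
rules = [("grant_loan", ["stable_income", "low_debt"])]
facts = {"stable_income", "low_debt"}
print(explain("grant_loan", rules, facts))
# ('grant_loan', [('stable_income', 'fact'), ('low_debt', 'fact')])
```

The returned proof tree is precisely the kind of artefact a user can inspect to see why a given conclusion was reached; a real deployment would delegate this step to a full Prolog engine such as tuProlog.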
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1.">Logic induction</head><p>While deductive reasoning moves from universal principles that are certainly true to specific conclusions that can be mechanically derived, inductive reasoning moves from specific instances to general conclusions. Induction is a key mechanism of our model. Assuming IS can transform the data they leverage for training black boxes into theories of logic facts, inductive reasoning can be used to extract the rules that best explain that data. The induction procedure is not meant to replace the black box as a learning or data-mining tool-as it would be quite difficult to obtain the same performance in terms of accuracy and ability to scale over huge amounts of data. Conversely, it aims at "opening the black box", letting humans understand how it is performing and why. In other words, following the abstract framework presented in <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17]</ref>, logic induction is a means for explaining the functioning of a black-box predictor via symbolic rules. More precisely, induction is fundamental for the debuggability of our model. For example, it enables the discovery of fallacies in the learning process of a black box even for people not familiar with the specific domain.</p><p>In our model, the induction process is fed with both the raw data and the outcomes of the black box. In the former case, induction leads to the unveiling of latent relations possibly buried in the original data: we refer to the resulting logic theory as the reference theory. In the latter case, induction leads to a symbolic approximation of the knowledge the black box has acquired from data via ML: we refer to the resulting logic theory as the explanation theory. Assuming the soundness of the induction process, discrepancies between these two theories could reveal some rules that have not been correctly learned by the sub-symbolic model. 
It is then possible to fix them by enforcing the correct relations via logic constraining.</p><p>Finally, one last point is worth discussing. Unless the induction process possesses the same learning capabilities as the black box, it is impossible to detect all of its learning errors. If it did, in fact, the reference theory would be the optimal solution itself, and the sub-symbolic model would be useless. As the induction process aims at opening the box, its use on the raw data aims at providing insights into the accuracy of the training phase; it does not aim at providing an optimal solution.</p></div>
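The comparison between reference and explanation theories can be sketched as a plain set difference over the induced rules, here represented as strings. The iris-like rules below are hypothetical examples, not output of the actual prototype.

```python
def theory_diff(reference, explanation):
    """Compare the theory induced from raw data (reference) with the one
    induced from black-box predictions (explanation); discrepancies flag
    possible learning errors in the sub-symbolic model."""
    return {
        "not_learned": sorted(reference - explanation),  # missed by the black box
        "spurious": sorted(explanation - reference),     # learned, yet unsupported by data
    }

reference = {"class(X,1) :- petal_length(X,L), L < 2.5",
             "class(X,0) :- petal_length(X,L), L >= 2.5"}
explanation = {"class(X,1) :- petal_length(X,L), L < 2.5",
               "class(X,1) :- sepal_width(X,W), W > 3.5"}
print(theory_diff(reference, explanation))
```

In practice the comparison must tolerate syntactic variance between logically equivalent rules, so an exact set difference is only the simplest possible instance of this check.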
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2.">Logic constraining</head><p>While induction is the mechanism aimed at translating sub-symbolic knowledge into symbolic form, constraining is the inverse process, aimed at injecting some symbolic knowledge into a black box. In this way, both the induced rules and those coming from domain experts are used to constrain the black box and its outcomes. This is another key mechanism of our model.</p><p>In particular, the ability to encode some prior knowledge within a model is interesting for two main reasons. On the one side, it enables a reduction in the data needed to train the black box. In fact, handmade rules may be exploited to model a portion of the domain not included in the training data. So, rather than creating a more exhaustive training set, an expert may directly encode his/her knowledge into rules and train a constrained black box. We call this procedure domain augmentation. On the other side, one may exploit the constraining process to guide and support a black box's learning, e.g., helping it to avoid biases. We call this procedure bias correction.</p></div>
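A toy sketch of bias correction, under stated assumptions: a one-feature logistic model is trained by numerical gradient descent on a loss summing the usual cross-entropy and a vectorised expert rule ("inputs with x >= 5 must be classified positive") turned into a hinge penalty. All names, data, and the 0.7 confidence margin are hypothetical; real systems such as DL2 derive the penalty automatically from logic formulae.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, data, rule_xs, lam):
    # cross-entropy on the labelled data...
    ce = -sum(y * math.log(sigmoid(w * x + b)) + (1 - y) * math.log(1.0 - sigmoid(w * x + b))
              for x, y in data) / len(data)
    # ...plus the vectorised rule "x >= 5 -> positive", as a hinge
    # penalty asking for a confidence of at least 0.7 on those inputs
    penalty = sum(max(0.0, 0.7 - sigmoid(w * x + b)) for x in rule_xs) / len(rule_xs)
    return ce + lam * penalty

def train(data, rule_xs, lam, steps=3000, lr=0.1, eps=1e-5):
    # plain gradient descent with finite-difference gradients
    w = b = 0.0
    for _ in range(steps):
        gw = (loss(w + eps, b, data, rule_xs, lam) - loss(w - eps, b, data, rule_xs, lam)) / (2 * eps)
        gb = (loss(w, b + eps, data, rule_xs, lam) - loss(w, b - eps, data, rule_xs, lam)) / (2 * eps)
        w, b = w - lr * gw, b - lr * gb
    return w, b

# biased training set: the (5, 0) example contradicts the expert rule
data = [(1, 0), (2, 0), (3, 0), (5, 0), (6, 1), (7, 1)]
w, b = train(data, rule_xs=[5, 6, 7, 8], lam=10.0)
print(round(sigmoid(w * 5 + b), 2))
```

With a sufficiently large weight on the penalty, the constrained model classifies x = 5 as positive despite the mislabelled training example, which is exactly the bias-correction effect described above.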
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Guidelines</head><p>In the following, we discuss the main aspects to be considered when designing a system conforming to our model. As a first step in this direction, we handle some important aspects concerning data representation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.1.">Type of logic</head><p>The first aspect concerns the type of logic used by the IS. Logic, in fact, plays a fundamental role: it heavily affects the explainability properties of the final system. For this reason, the choice of the most appropriate logic formalism is a primary concern.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>For our model to be effective, the selected logic formalism should provide high flexibility in the description of the domain. At the same time, to keep the system as simple as possible, the formalism should be used coherently in every part of it: from the constraining module to the induction one, every part should share the same representation of the logical knowledge.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.2.">Sort of data</head><p>The second aspect concerns the sort of data used to feed the IS. For the logical induction process to be carried out on data, the first thing to do is to transform the data itself. In other words, the available data should be translated into a logical theory according to the chosen formalism. For the translation to be effective, the resulting theory should preserve as much as possible of the information contained in the original data. However, some particular types of data raise problems that have to be taken into account.</p><p>When unstructured data -e.g., images, audio, time series, etc. -come into play, the transformation process may become considerably harder. In that case, a two-step transformation must be performed, involving:</p><p>1. extraction of semantically meaningful, structured features from unstructured data;</p><p>2. translation of the extracted features into the desired logical formalism.</p><p>Of course, this procedure adds a whole range of new problems related to the accuracy of the extracted features. In fact, the explanation process is mainly linked to the reliability of the data used as the basis for the induction process. The exploitation of data generated through an automatic process makes a discrepancy between the relations extrapolated by the inductive process and the behaviour of the black box more likely.</p></div>
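For structured data, the translation step reduces to mapping each record to a logic fact. The following sketch (with a hypothetical iris-like schema) shows one straightforward encoding, one Prolog fact per row:

```python
def to_prolog_facts(rows, relation):
    """Translate structured records into Prolog facts, one per row,
    preserving column values as term arguments."""
    facts = []
    for i, row in enumerate(rows):
        # lower-case symbolic values so they parse as Prolog atoms
        args = ", ".join(str(v).lower() for v in row)
        facts.append(f"{relation}(id{i}, {args}).")
    return "\n".join(facts)

# hypothetical tabular dataset: (petal_length, class)
rows = [(1.4, "Setosa"), (4.7, "Versicolor")]
print(to_prolog_facts(rows, "iris"))
# iris(id0, 1.4, setosa).
# iris(id1, 4.7, versicolor).
```

A production translator would also quote atoms containing spaces or special characters and declare the relation's arity; the sketch only conveys the shape of the mapping.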
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.3.">Sorts of black box</head><p>In the general case, we require the architectures adhering to our model to be as agnostic as possible w.r.t. the particular sort of black box to be adopted. In fact, the choice of the most adequate sort of black box is strongly affected by (i) the nature of the data at hand, (ii) the availability of some symbolic extraction technique making the black box less opaque, and (iii) the possibility of constraining the black box with logic formulae.</p><p>Strictly speaking, the choice of the black box should be deferred to the implementation phase. Here we describe a number of criteria to be taken into account when making this choice. We recall, however, that the other guidelines described so far are agnostic w.r.t. the sub-symbolic model to be used.</p><p>As far as the nature of the data is concerned, traditional ML techniques <ref type="bibr" target="#b17">[18]</ref> -e.g., decision trees, generalised linear models, etc. -are usually exploited on structured datasets, whereas deep learning techniques are better suited to deal with unstructured data. However, this is not due to an inadequacy of neural networks for structured data, but rather to the greater simplicity of the learning algorithms used by traditional ML techniques. Structured data carries a relatively smaller complexity to deal with, making simpler ML algorithms a more suitable choice for their comprehensibility and usability. As far as opaqueness is concerned, virtually any ML technique is affected by it to some extent: yet, neural networks remain the most critical from this point of view. Nevertheless, the vast majority of rule induction procedures are either black-box agnostic -a.k.a. pedagogical <ref type="bibr" target="#b18">[19]</ref> -or neural-network-specific <ref type="bibr" target="#b2">[3]</ref>. 
In terms of support for constraining, it should be possible to guide the learning activity through some regulariser terms derived from the constraints to be enforced.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Technological architecture</head><p>This section provides a technological description of an IS prototype adhering to our model. It is based on the concrete architecture detailed in Figure <ref type="figure" target="#fig_2">2</ref>, which specialises the abstract model from Figure <ref type="figure" target="#fig_0">1</ref> through a number of technological commitments.</p><p>First, we present our choices in relation to the points examined in Subsection 3.3. As for the type of logic, within the scope of this paper, we adopt first-order logic (FOL) as our formalism of choice, and Prolog as our reference concrete language. FOL is likely to offer the best trade-off between flexibility and expressiveness of the representation language. Furthermore, the choice of FOL enables the exploitation of many existing approaches supporting both constraining and induction. Finally, the choice of the Prolog syntax enables the direct exploitation of the induced theories within existing automatic reasoners-e.g. tuProlog, SWI-Prolog, etc.</p><p>As far as the sort of data is concerned, in this paper we only describe a prototype based on structured data. In fact, the feature extraction module can be omitted without hindering the generality of our approach. Moreover, the greater simplicity of the data helps to avoid the problems related to the possible information loss due to its transformation. In the future, however, we plan to extend our prototype to support arbitrary datasets including any sort of data. This requires the creation of a module for feature extraction.</p><p>In terms of the sort of black box used, we focus on a prototype based on neural networks, as they are the most critical from the opacity perspective. 
The remaining technological choices follow accordingly.</p><p>The prototype exploits two main technologies: DL2 <ref type="bibr" target="#b8">[9]</ref> for the constraining part, and NTP <ref type="bibr">[6][20]</ref> for the induction part. DL2 is one of the models leveraging symbolic rule vectorisation as a means to constrain the target neural network. We choose DL2 as the most mature and user-friendly technology supporting neural-network constraining through logic rules.</p><p>The choice of NTP as the induction engine is more straightforward. Indeed, NTP is among the few ready-to-use technologies offering both deductive and inductive logic capabilities in a differentiable way. On the one hand, differentiability is what makes induction more computationally efficient w.r.t. similar technologies-and this is why we choose it. On the other hand, NTP's deductive capabilities are not mature enough. In fact, training a model that correctly approximates a logic theory in a reasonable amount of time is very challenging-especially when the complexity of the theory grows. While this could be acceptable for the induction process, it is still very limiting for deductive reasoning. Moreover, traditional logic engines come with a very large ecosystem of supporting tools -IDEs, debuggers, libraries -that have the potential to hugely improve the effectiveness of the reasoning phase. For this reason, we adopt tuProlog (2P) <ref type="bibr" target="#b20">[21]</ref> -a Java-based logic engine built for use in ubiquitous contexts -as the main tool for the manipulation of logic theories, as well as for automated reasoning. This choice is motivated by its reliability, modularity, and flexibility.</p><p>While the entire system is built around the aforementioned technologies, Python is used as the glue language keeping modules together. 
In particular, the sub-symbolic module is implemented via the PyTorch learning framework <ref type="bibr" target="#b21">[22]</ref>, thus ensuring an easy integration with DL2-which is natively built to work with this framework. The modules responsible for the extraction of the Prolog theory from the input dataset (i.e. the Translator block in Figure <ref type="figure" target="#fig_2">2</ref>) are written in Python as well. The NTP integration takes place through the Prolog logic language. In fact, NTP easily allows the use of Prolog theories as input for the induction process. The result of the induction phase is also delivered in a logical form, allowing for easy consultation and analysis through the 2P technology.</p></div>
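The role of the Translator block described above can be rendered as a minimal sketch: each record of the structured dataset, together with the corresponding network prediction, becomes one Prolog fact, and the collection of facts forms the theory fed to the induction engine. The function names and the clause shape below are illustrative assumptions of ours, not the prototype's actual API.

```python
# Hypothetical sketch of the Translator block: structured records plus the
# network's predictions become a Prolog theory (one fact per record).
# Names and clause shapes are illustrative, not the prototype's real API.

def to_prolog_fact(features, predicted_class):
    """Render one record as `predictedClass(v1, ..., vn).`"""
    args = ", ".join(str(v) for v in features)
    return f"{predicted_class}({args})."

def dataset_to_theory(records, predictions):
    """One Prolog fact per (record, prediction) pair, newline-separated."""
    return "\n".join(to_prolog_fact(f, p)
                     for f, p in zip(records, predictions))

theory = dataset_to_theory([[1, 0, 1], [0, 1, 0]],
                           ["democrat", "republican"])
# theory ==
# democrat(1, 0, 1).
# republican(0, 1, 0).
```

A theory in this shape can be consulted directly by a Prolog engine such as 2P, which is what makes the logical side of the pipeline interoperable.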
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Assessment</head><p>In this section we assess our model from the explainability perspective, leveraging the prototype implementation described in Section 4. In particular, we train the sub-symbolic module on a real-world ML problem, and we test whether and to what extent our architecture actually provides (i) the capability of building a logical representation of sub-symbolic knowledge acquired from data via induction, (ii) the capability of altering or fixing the system behaviour via conditioning, and, ultimately, (iii) the inspectability and debuggability of the system as a whole. In the following subsections we set up an ad-hoc experiment aimed at performing these tests. A detailed presentation of the experiments and their results follows.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Experiment Design</head><p>The proposed experiment aims at comprehensively testing the operation of the prototype against a real classification problem. In particular, we consider a binary classification task on structured data. Roughly speaking, the experiment works by artificially constructing a buggy neural network for the classification task, and then showing how our architecture makes it possible to reveal the bug. More precisely, the experiment is structured as follows:</p><p>1. a neural network is trained until reaching maximum validation-set accuracy for the classification task;</p><p>2. by combining the training data and the predictions of the network, a coherent Prolog theory is extracted;</p><p>3. the induction module is used to extract the latent relations from the theory, until at least one relation which properly divides the data is found;</p><p>4. through constraining, we inject an error into the network, in such a way that it misclassifies some instances;</p><p>5. by repeating Item 2 and Item 3, we show that the approximated logic theory reveals the injected error.</p><p>This workflow lets us verify the behaviour of our prototype and of all its components. In particular, Item 3 and Item 5 aim at demonstrating that a logic theory optimally approximating a neural network is actually attainable. As for the ability to debug and correct a network, the whole procedure aims at demonstrating its feasibility. The correct theory extracted in the initial part of the experiment serves as a reference against which the theory extracted from the malfunctioning classifier can be compared. The injection of the fake constraint, as well as its recognition in the theory extracted at the conclusion of the experiment, shows the feasibility of the network correction process. Two fundamental points are worth taking into account for the experiment to be meaningful. 
The first point is the ability to accurately assess the correctness of the recovered logic theory. In fact, in order to verify that the theory extracted from the neural network is actually the correct one, it is necessary either (i) to have an optimal knowledge of the domain, or (ii) to use easily analysable and verifiable datasets. In the first case -that is, being an expert of the analysed domain -verification is straightforward, as the correctness of the logical relations can be checked directly. The only alternative is to use an easily verifiable dataset as the base for the experiment: it should then be possible even for a domain novice to understand the rationale behind the data. For instance, a simple way to verify the correctness of the rules is to use an alternative ML tool -one that can guarantee a high level of transparency -to analyse the data. In the case of a classifier, the best choice could be a decision tree (DT). In fact, DT training produces a symbolic model that can be efficiently verified by the user. Hence, the DT output can be used as a reference in the evaluation of the induction results.</p><p>The second point is the exploitation of constraining as a means to inject bugs into a network in a controlled way. In fact, in order to verify the prototype capability of revealing and correcting bugs in the black-box module, it is first necessary to construct a buggy network. However, if the bug simply stems from poor training, it may not be clear where it lies. Hence, to evaluate the actual capabilities of the prototype, the bugs to be spotted must be known a priori, so as to have a reference.</p></div>
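The "transparent reference" idea can be illustrated with a toy sketch: the snippet below searches for the single feature that best splits the two classes, i.e. the root split a one-level decision tree would make. It is an illustrative stand-in, not the C4.5 procedure actually used in the experiment.

```python
# Toy one-level decision tree ("decision stump"): find the binary feature
# whose value best separates the two classes. This is only a sketch of
# the idea of a transparent reference model, not the C4.5 algorithm.

def best_split(records, labels):
    """Index of the feature minimising misclassifications of a 1-level split."""
    n_features = len(records[0])

    def errors(i):
        ones = [l for r, l in zip(records, labels) if r[i] == 1]
        zeros = [l for r, l in zip(records, labels) if r[i] == 0]

        def majority_errors(group):
            # predict the majority class of the branch; count the rest
            if not group:
                return 0
            best = max(set(group), key=group.count)
            return sum(1 for l in group if l != best)

        return majority_errors(ones) + majority_errors(zeros)

    return min(range(n_features), key=errors)

# Toy votes: feature 0 (a fees-freeze-like vote) perfectly separates parties
records = [[1, 0], [1, 1], [0, 0], [0, 1]]
labels = ["republican", "republican", "democrat", "democrat"]
best_split(records, labels)  # -> 0
```

A human can read such a split directly, which is exactly what makes it usable as a yardstick for the induced Prolog rules.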
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Experiment Setup</head><p>We base our experiment on the Congressional Voting Records data set<ref type="foot" target="#foot_0">1</ref> from the UCI Repository <ref type="bibr" target="#b22">[23]</ref>. It consists of a table registering the voting intentions -namely, favourable, contrary or unknown -of 435 members of the Congress of the USA on 15 main topics, as well as their political orientation-namely, either Democrat or Republican. The goal of the classification task is to classify a member of the Congress as either Democrat or Republican depending on his/her intentions on those 15 topics.</p><p>Given the relatively small number of instances in the dataset, along with the small number of attributes, an intuitive understanding of the classification problem can be assumed. However, to further simplify the analysis of the results, we use a DT trained over the data as a reference.</p><p>A neural network capable of distinguishing between Democrats and Republicans based on voting intentions is trained and assessed as described above. The experiment code is available at the GitHub repository <ref type="foot" target="#foot_1">2</ref> . We now proceed to discuss the results obtained in each step of the experiment.</p></div>
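Before training, each record has to be mapped to numeric features for the network. A minimal sketch of such a pre-processing step is shown below; the specific numeric encoding of the three vote outcomes is an assumption of ours for illustration, not necessarily the one adopted in the repository above.

```python
# Hypothetical encoding of a Congressional Voting record: each vote
# ("y" favourable, "n" contrary, "?" unknown) becomes a number, and the
# party label becomes the classification target. The 0/0.5/1 mapping is
# an illustrative assumption, not the experiment's documented choice.

VOTE = {"y": 1.0, "n": 0.0, "?": 0.5}

def encode_record(raw):
    """raw = (party, vote_1, ..., vote_n), as in the UCI CSV rows."""
    party, *votes = raw
    return [VOTE[v] for v in votes], party

features, label = encode_record(("democrat", "n", "y", "?"))
# features == [0.0, 1.0, 0.5], label == "democrat"
```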
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Results</head><p>The training of the aforementioned neural network is performed via ordinary stochastic gradient descent until reaching a 95% accuracy score on the validation set (containing a randomly selected 20% of the whole data). The training process is depicted in Figure <ref type="figure" target="#fig_3">3a</ref>. Through the induction process on the data generated by the network it was possible to recover the following relationships: To verify their validity we can examine the DT generated using the original data as source (Figure <ref type="figure" target="#fig_3">3b</ref>)-the C4.5 algorithm has been used. As expected, the inductive process managed to recover the relationship that most discriminates between Democrats and Republicans: the tendency of the Democrats to be against the freezing of physician fees.</p><p>The verification of the correction process is based on the reversal of the relationship retrieved above. Formally, the conditioning of the network occurred on this rule: republican(X1, ..., XN) :- opposePhysicianFeesFreeze(X1, ..., XN).</p><p>In natural language, this Prolog rule expresses that, given a set of votes (X1, ..., XN), if these share the opposition to the freezing of physician fees, then the voter is likely from the Republican Party. This constraint translates into a very simple result: all the votes should belong to Republicans. Indeed, if initially only the Democrats were against the freezing, by forcing the network to consider them as Republicans, we reach a situation where the whole set of voters is Republican.</p><p>More in detail, constraining rules are codified through the DSL offered by the DL2 implementation. For example, the above rule can be expressed in the form: This rule is then automatically converted into a numerical function. 
Its evaluation contributes to the final loss function adopted by the target neural network.</p><p>For the experiment, a completely new network is trained considering the new constraint. The results in Table <ref type="table" target="#tab_0">1</ref> show the effect of the constraint on the network. The Democrat/Republican imbalance -which is made evident by Table <ref type="table" target="#tab_0">1</ref> -reflects in the result of the induction process on the data generated by the constrained network: The induced knowledge base only contains rules concerning the Republican wing, thus confirming the footprint of the relation imposed during the conditioning process.</p><p>Summarising, the results of the assessment are positive. It was first possible to obtain, in a logic form, the relationships implied in the operation of the classifier. Through these rules it has been possible (i) to identify the most discriminant features in the dataset, and (ii) to enable the correction of the black box using the user knowledge. The constraining part has also proved effective. Indeed, the imposition of the user-crafted rule has led to a coherent change in the black-box behaviour-enabling its correction. The results demonstrate how the presented guidelines lead to an IS giving explanations about its functioning, thus allowing user intervention on its behaviour.</p></div>
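The conversion of a rule into a numerical function can be illustrated with a small sketch in the spirit of DL2: each atomic comparison maps to a non-negative loss that is zero exactly when the comparison holds, and an implication (rewritten as "not premise or conclusion") maps disjunction to a product, so the loss vanishes as soon as either disjunct is satisfied. This is a simplified rendition of the idea, not DL2's actual implementation.

```python
# DL2-style translation of logic constraints into losses (simplified).
XI = 1.0  # fixed penalty for a violated (in)equality, akin to DL2's xi

def eq_loss(a, b):
    # loss of "a = b": zero iff the equality holds
    return abs(a - b)

def neq_loss(a, b):
    # loss of "a != b": a fixed penalty when the values coincide
    return XI if a == b else 0.0

def lt_loss(a, b):
    # loss of "a < b": the margin by which the comparison fails
    return max(a - b, 0.0)

def implication_loss(neg_premise_loss, conclusion_loss):
    # "premise => conclusion" == "not premise or conclusion"; mapping the
    # disjunction to a product zeroes the loss when either side holds
    return neg_premise_loss * conclusion_loss

def rule_penalty(fees_freeze, dem_score, rep_score):
    # "if FeesFreeze = 0 then score(Dem) < score(Rep)", as in the
    # dl2.Implication(dl2.EQ(...), dl2.LT(...)) constraint above
    return implication_loss(neq_loss(fees_freeze, 0),
                            lt_loss(dem_score, rep_score))

rule_penalty(0, 0.9, 0.1)  # premise holds, conclusion violated -> positive
rule_penalty(1, 0.9, 0.1)  # premise does not hold -> 0.0
rule_penalty(0, 0.1, 0.9)  # conclusion already satisfied -> 0.0
```

Adding such a penalty term to the ordinary classification loss is what pushes the network's gradients towards satisfying the injected rule.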
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions</head><p>The solutions proposed so far in the literature to the opaqueness issue -one of the main problems of today's AI technologies -are disparate. In this work, we showed how symbolic logic can be a crucial element in this panorama.</p><p>On the trail of NSC models, we presented a series of guidelines aimed at correctly integrating a ML-based predictor -i.e., a black box -with a logic-based subsystem. In particular, our guidelines support the creation of ISs exposing clear insights about their own functioning, thus enabling end users to intervene on the IS behaviour via a logical interface. We then tested our guidelines against a prototype IS, in order to study if and to what extent our approach is feasible and useful. Notably, the prototype assessment confirms that our approach is feasible by exploiting technologies already available in the research scene. Nevertheless, the prototype has been tested only on a single scenario. In order to confirm the efficacy of our approach, we need to perform a more exhaustive range of experiments. Moreover, we plan to extend the prototype assessment with more complex use cases. For example, as anticipated in Section 4, we intend to enhance the prototype with support for unstructured data. This extension would considerably improve the applicability of the studied approach, allowing its assessment also in the more complex area of image classification.</p><p>Through the above experimental investigation -should the results be positive -we aim at introducing a rigorously formalised version of the proposed model-presented in this paper in a more intuitive and preliminary shape. Consequently, we should also better investigate and verify the preliminary guidelines provided. 
We aim at obtaining an accurate and comprehensive guide allowing developers to efficiently integrate opaque AI systems and logic.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>D 1</head><label>1</label><figDesc>attain high predictive performances by leveraging ML and data-driven AI D 2 provide human-intelligible outcomes / suggestions / recommendations D 3 acquire knowledge from both data and from high-level specifications D 4 make their knowledge base inspectable by human experts D 5 let human experts override / modify their knowledge base</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Model schema.</figDesc><graphic coords="6,183.05,84.19,229.19,209.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Technologies schema.</figDesc><graphic coords="9,193.47,84.19,208.35,240.97" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Network default training (a) and decision tree built on test data (b).</figDesc><graphic coords="11,299.72,84.19,206.26,156.33" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>democrat(X1, ..., XN) :- opposePhysicianFeeFreeze(X1, ..., XN), supportMxMissile(X1, ..., XN), supportBudgetResolution(X1, ..., XN). democrat(X1, ..., XN) :- opposePhysicianFeeFreeze(X1, ..., XN), supportBudgetResolution(X1, ..., XN). republican(X1, ..., XN) :- supportPhysicianFeeFreeze(X1, ..., XN), opposeAntiSatelliteTestBan(X1, ..., XN). republican(X1, ..., XN) :- supportPhysicianFeeFreeze(X1, ..., XN), opposeAntiSatelliteTestBan(X1, ..., XN), opposeBudgetResolution(X1, ..., XN), opposeSynfuelsCutback(X1, ..., XN).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head></head><label></label><figDesc>dl2.Implication( dl2.EQ(x[FeesFreeze], 0), dl2.LT(y[Dem], y[Rep]))</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head></head><label></label><figDesc>republican(X1, ..., XN) :- supportMxMissile(X1, ..., XN), opposePhysicianFeeFreeze(X1, ..., XN). republican(X1, ..., XN) :- supportPhysicianFeeFreeze(X1, ..., XN), opposeBudgetResolution(X1, ..., XN), opposeMxMissile(X1, ..., XN). republican(X1, ..., XN) :- opposePhysicianFeeFreeze(X1, ..., XN), supportBudgetResolution(X1, ..., XN), supportAntiSatelliteTestBan(X1, ..., XN), supportMxMissile(X1, ..., XN).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Training with fake constraint results.</figDesc><table><row><cell></cell><cell cols="4">precision recall f1-score support</cell></row><row><cell>republican</cell><cell>0.42</cell><cell>1.00</cell><cell>0.59</cell><cell>35</cell></row><row><cell>democrat</cell><cell>1.00</cell><cell>0.06</cell><cell>0.11</cell><cell>32</cell></row><row><cell>accuracy</cell><cell></cell><cell></cell><cell>0.44</cell><cell>87</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://github.com/Gilbocc/NSC4ExplainableAI</note>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>(A. Omicini) https://about.me/gciatto (G. Ciatto); http://robertacalegari.apice.unibo.it (R. Calegari); http://andreaomicini.apice.unibo.it (A. Omicini) 0000-0003-0230-8212 (G. Pisano); 0000-0002-1841-8996 (G. Ciatto); 0000-0003-3794-2942 (R. Calegari);</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Bughin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hazan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ramaswamy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Allas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dahlstrom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Henke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Trench</surname></persName>
		</author>
		<ptr target="https://www.mckinsey.com/~/media/mckinsey/industries/advancedelectronics/ourinsights/howartificialintelligencecandeliverrealvaluetocompanies/mgi-artificial-intelligence-discussion-paper.ashx" />
		<title level="m">Artificial intelligence: the next digital frontier?</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
		<respStmt>
			<orgName>McKinsey Global Institute</orgName>
		</respStmt>
	</monogr>
	<note>Discussion paper</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Arrieta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">D</forename><surname>Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barbado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gil-Lopez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Molina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Benjamins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chatila</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.inffus.2019.12.012</idno>
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="82" to="115" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">On the integration of symbolic and sub-symbolic techniques for XAI: A survey</title>
		<author>
			<persName><forename type="first">R</forename><surname>Calegari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ciatto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</author>
		<idno type="DOI">10.3233/IA-190036</idno>
	</analytic>
	<monogr>
		<title level="j">Intelligenza Artificiale</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="7" to="32" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Neuralsymbolic computing: An effective methodology for principled integration of machine learning and reasoning</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Avila Garcez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Lamb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Serafini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Spranger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename><surname>Tran</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/1905.06088" />
	</analytic>
	<monogr>
		<title level="j">FLAP</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="611" to="632" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Ultra-strong machine learning: comprehensibility of programs learned with ILP</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Muggleton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Schmid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zeller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tamaddoni-Nezhad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Besold</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10994-018-5707-3</idno>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">107</biblScope>
			<biblScope unit="page" from="1119" to="1140" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">End-to-end differentiable proving</title>
		<author>
			<persName><forename type="first">T</forename><surname>Rocktäschel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Riedel</surname></persName>
		</author>
		<ptr target="http://papers.nips.cc/paper/6969-end-to-end-differentiable-proving" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017</title>
				<editor>
			<persName><forename type="first">I</forename><surname>Guyon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">U</forename><surname>Luxburg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bengio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Fergus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">V N</forename><surname>Vishwanathan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="3788" to="3800" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Differentiable programming and its applications to dynamical systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Hernández</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Amigó</surname></persName>
		</author>
		<idno>CoRR abs/1912.0</idno>
		<ptr target="http://arxiv.org/abs/1912.08168" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Logic tensor networks: Deep learning and logical reasoning from data and knowledge</title>
		<author>
			<persName><forename type="first">L</forename><surname>Serafini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Avila Garcez</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-1768/NESY16_paper3.pdf" />
	</analytic>
	<monogr>
		<title level="m">11th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy&apos;16) colocated with the Joint Multi-Conference on Human-Level Artificial Intelligence</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Lamb</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Serafini</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Tabor</surname></persName>
		</editor>
		<meeting><address><addrLine>HLAI</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016. 2016</date>
			<biblScope unit="volume">1768</biblScope>
			<biblScope unit="page" from="23" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">DL2: training and querying neural networks with logic</title>
		<author>
			<persName><forename type="first">M</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Balunovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Drachsler-Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gehr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Vechev</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v97/fischer19a.html" />
	</analytic>
	<monogr>
		<title level="m">36th International Conference on Machine Learning</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Chaudhuri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Salakhutdinov</surname></persName>
		</editor>
		<imprint>
			<publisher>ICML</publisher>
			<date type="published" when="2019">2019. 2019</date>
			<biblScope unit="volume">97</biblScope>
			<biblScope unit="page" from="1931" to="1941" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A semantic loss function for deep learning with symbolic knowledge</title>
		<author>
			<persName><forename type="first">J</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Friedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">V</forename><surname>Broeck</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v80/xu18h.html" />
	</analytic>
	<monogr>
		<title level="m">35th International Conference on Machine Learning (ICML</title>
				<editor>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="5498" to="5507" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Augmenting neural networks with first-order logic</title>
		<author>
			<persName><forename type="first">T</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Srikumar</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/p19-1028</idno>
	</analytic>
	<monogr>
		<title level="m">57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), Association for Computational Linguistics (ACL)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="292" to="302" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Inductive logic programming</title>
		<author>
			<persName><forename type="first">S</forename><surname>Muggleton</surname></persName>
		</author>
		<idno type="DOI">10.1007/BF03037089</idno>
	</analytic>
	<monogr>
		<title level="j">New Generation Computing</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="295" to="318" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Turning 30: New ideas in inductive logic programming</title>
		<author>
			<persName><forename type="first">A</forename><surname>Cropper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dumancic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Muggleton</surname></persName>
		</author>
		<idno type="DOI">10.24963/ijcai.2020/673</idno>
	</analytic>
	<monogr>
		<title level="m">29th International Joint Conference on Artificial Intelligence (IJCAI 2020), IJCAI</title>
				<editor>
			<persName><forename type="first">C</forename><surname>Bessiere</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="4833" to="4839" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">What does explainable AI really mean? A new conceptualization of perspectives</title>
		<author>
			<persName><forename type="first">D</forename><surname>Doran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schulz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-2071/CExAIIA_2017_paper_2.pdf" />
	</analytic>
	<monogr>
		<title level="m">1st International Workshop on Comprehensibility and Explanation in AI and ML 2017 co-located with 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">O</forename><surname>Kutz</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">2071</biblScope>
			<biblScope unit="page" from="15" to="22" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Highlighting bias with explainable neural-symbolic visual reasoning</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-L</forename><surname>Laurent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chatila</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Díaz-Rodríguez</surname></persName>
		</author>
		<idno>CoRR abs/1909.09065</idno>
		<ptr target="http://arxiv.org/abs/1909.09065" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Agent-based explanations in AI: Towards an abstract framework</title>
		<author>
			<persName><forename type="first">G</forename><surname>Ciatto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Schumacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Calvaresi</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-51924-7_1</idno>
	</analytic>
	<monogr>
		<title level="m">Explainable, Transparent Autonomous Agents and Multi-Agent Systems</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">D</forename><surname>Calvaresi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Najjar</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Winikoff</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Främling</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham; Auckland, New Zealand</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020-05-09">May 9-13, 2020</date>
			<biblScope unit="page" from="3" to="20" />
		</imprint>
	</monogr>
	<note>Second International Workshop, EXTRAAMAS 2020. Revised Selected Papers</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">An abstract framework for agent-based explanations in AI</title>
		<author>
			<persName><forename type="first">G</forename><surname>Ciatto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Calvaresi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Schumacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020)</title>
		<imprint>
			<publisher>International Foundation for Autonomous Agents and Multiagent Systems</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1816" to="1818" />
		</imprint>
	</monogr>
	<note>Extended Abstract</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Top 10 algorithms in data mining</title>
		<author>
			<persName><forename type="first">X</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Quinlan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ghosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Motoda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">J</forename><surname>McLachlan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">F M</forename><surname>Ng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Steinbach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Hand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Steinberg</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10115-007-0114-2</idno>
	</analytic>
	<monogr>
		<title level="j">Knowledge and Information Systems</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="1" to="37" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Survey and critique of techniques for extracting rules from trained artificial neural networks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Andrews</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Diederich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Tickle</surname></persName>
		</author>
		<idno type="DOI">10.1016/0950-7051(96)81920-4</idno>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="373" to="389" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Neural theorem provers do not learn rules without exploration</title>
		<author>
			<persName><forename type="first">M</forename><surname>De Jong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sha</surname></persName>
		</author>
		<idno>CoRR abs/1906.06805</idno>
		<ptr target="http://arxiv.org/abs/1906.06805" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">tuProlog: A light-weight Prolog for internet applications and infrastructures</title>
		<author>
			<persName><forename type="first">E</forename><surname>Denti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ricci</surname></persName>
		</author>
		<idno type="DOI">10.1007/3-540-45241-9_13</idno>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<biblScope unit="volume">1990</biblScope>
			<biblScope unit="page" from="184" to="198" />
			<date type="published" when="2001">2001</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">PyTorch: An imperative style, high-performance deep learning library</title>
		<author>
			<persName><forename type="first">A</forename><surname>Paszke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Massa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lerer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bradbury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Killeen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Gimelshein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Antiga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Desmaison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Köpf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Devito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tejani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chilamkurthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Steiner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chintala</surname></persName>
		</author>
		<ptr target="http://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019</title>
				<editor>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Larochelle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Beygelzimer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>d'Alché-Buc</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><forename type="middle">B</forename><surname>Fox</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<meeting><address><addrLine>NeurIPS</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="8024" to="8035" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">UCI Machine Learning Repository</title>
		<author>
			<persName><forename type="first">D</forename><surname>Dua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Graff</surname></persName>
		</author>
		<ptr target="http://archive.ics.uci.edu/ml" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
