<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">AI Risk and Reasoning in Neurosymbolic AI</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Artur</forename><surname>d'Avila Garcez</surname></persName>
							<email>a.garcez@city.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="institution">City, University of London</orgName>
								<address>
									<settlement>London</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">AI Risk and Reasoning in Neurosymbolic AI</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">02974B7E32CE0CBB0C395E4A3A054D9E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:34+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Current advances in Artificial Intelligence (AI) and Machine Learning (ML) have achieved unprecedented impact across research communities and industry. Nevertheless, serious concerns around trust, safety, interpretability and accountability in AI have been raised by influential thinkers, many of whom have identified the need for well-founded knowledge representation and reasoning to be integrated with ML systems. Neurosymbolic AI has been an active area of research for many years seeking to do just that, bringing together robust learning in neural networks with reasoning and explainability via symbolic representation and description. In this talk I will review the research in neurosymbolic AI and computation, and discuss how it can help shed light on the increasingly prominent role of safety, trust, interpretability and accountability in AI.</p><p>AI has become the focus of large-scale research endeavours and has transformed businesses, prompting an important debate about the impact of AI on education and society. It has been argued that building a rich AI system, one that is semantically sound, explainable and ultimately trustworthy, will require a sound reasoning layer in combination with deep learning. Parallels have been drawn between Daniel Kahneman's research on human reasoning and decision making and so-called AI Systems 1 and 2. I will revisit early theoretical results of fundamental relevance to shaping the latest research, such as the proof that recurrent neural networks can compute the semantics of logic programs. I will also seek to identify bottlenecks and the most promising technical directions for the sound representation of learning and reasoning in neural networks. I will conclude by discussing the key ingredients of sustainable AI going forward, identifying directions and challenges for the next decade of research in the field.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body/>
		<back>
			<div type="references">

				<listBibl/>
			</div>
		</back>
	</text>
</TEI>
