<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CILC</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>AI Risk and Reasoning in Neurosymbolic AI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Artur d'Avila Garcez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>City, University of London</institution>, <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>39</volume>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>Current advances in Artificial Intelligence (AI) and Machine Learning (ML) have achieved unprecedented impact across research communities and industry. Nevertheless, serious concerns about trust, safety, interpretability and accountability in AI have been raised by influential thinkers, many of whom have identified the need for well-founded knowledge representation and reasoning to be integrated with ML systems. Neurosymbolic AI has been an active area of research for many years that seeks to do just that, combining robust learning in neural networks with reasoning and explainability via symbolic representation and description. In this talk I will review research in neurosymbolic AI and computation, and discuss how it can help shed light on the increasingly prominent role of safety, trust, interpretability and accountability in AI.</p>
        <p>AI has become the focus of large-scale research endeavours and has transformed businesses, prompting an important debate about the impact of AI on education and society. It has been argued that building a rich AI system, one that is semantically sound, explainable and ultimately trustworthy, will require a sound reasoning layer in combination with deep learning. Parallels have been drawn between Daniel Kahneman's research on human reasoning and decision making and so-called AI Systems 1 and 2. I will revisit early theoretical results of fundamental relevance to shaping the latest research, such as the proof that recurrent neural networks can compute the semantics of logic programs. I will also seek to identify bottlenecks and the most promising technical directions for the sound representation of learning and reasoning in neural networks. I will conclude by discussing the key ingredients of sustainable AI going forward, identifying directions and challenges for the next decade of research in the field.</p>
      </abstract>
    </article-meta>
  </front>
  <body />
  <back>
    <ref-list />
  </back>
</article>