<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>GlanceNets: Interpretable, Leak-proof Concept-based Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Emanuele Marconato</string-name>
          <email>emanuele.marconato@unitn.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Passerini</string-name>
          <email>andrea.passerini@unitn.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Teso</string-name>
          <email>stefano.teso@unitn.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CIMeC, University of Trento</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>DII, University of Pisa</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>DISI, University of Trento</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>The 17th International Workshop on Neural-Symbolic Learning and Reasoning</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this extended abstract, we briefly outline GlanceNets [1], a new class of deep learning classifiers that acquire high-level concepts from data and use them both to compute predictions and to generate ante-hoc explanations of those predictions. In contrast with other concept-based networks, GlanceNets ensure that the learned concepts, and the explanations built on them, are human-interpretable, even in out-of-distribution scenarios. The core ideas at the heart of GlanceNets extend naturally to other Neuro-Symbolic architectures involving reasoning during inference.</p>
      </abstract>
      <kwd-group>
        <kwd>Concept-based Models</kwd>
        <kwd>Concept Learning</kwd>
        <kwd>Representation Learning</kwd>
        <kwd>Explainable AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body />
  <back>
    <ref-list />
  </back>
</article>