<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vahidin Hasić</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Sarajevo</institution>
          ,
          <addr-line>71210 Sarajevo</addr-line>
          ,
          <country country="BA">Bosnia and Herzegovina</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>While Deep Neural Networks (DNNs) excel in image classification, their black-box nature necessitates the development of Explainable AI (XAI) methods. Existing XAI techniques often face limitations in balancing explainability, fidelity, and efficiency. My doctoral research addresses these limitations through an evolving series of investigations. Initially, I focused on improving gradient-based explanations. This research led me to explore concept-based explanations. Currently, I am investigating sample-based explanations to attribute the importance of training samples. These seemingly disparate lines of research are connected by a common thread: the pursuit of XAI methods that are faithful to the model, understandable to humans, and computationally efficient for real-time applications.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable AI</kwd>
        <kwd>Image Classification</kwd>
        <kwd>Convolutional Neural Networks</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Deep neural networks (DNNs) have revolutionized how we approach complex tasks, achieving exceptional
performance in areas such as image classification. Their success stems from their ability to learn complex
patterns from large amounts of data. However, this performance comes at the cost of interpretability.
DNNs are often described as black boxes due to their opaque internal workings, making it difficult for
humans to understand the reasoning behind their predictions [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Consequently, the research on Explainable AI (XAI) has gained significant momentum, with the aim of
making DNNs more transparent and human-understandable [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. XAI techniques seek to provide insight
into DNN decision-making processes, allowing users to understand their output and increase trust in
the system. However, existing XAI methods often face trade-offs between explainability, faithfulness,
and efficiency [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        A common approach for XAI computer vision techniques is to attribute importance scores to pixels
or image patches [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. However, such pixel-level explanations can be overwhelming and difficult for
humans to interpret, as they lack semantic meaning and do not capture higher-level concepts relevant
to the task [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. Furthermore, the faithfulness of these methods can be questionable, as they may
lack sensitivity to the model and the data generating process [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Additionally, some methods, such as
SHAP [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], can be computationally too expensive for practical application.
      </p>
      <p>This research aims to develop novel explainability methods that are faithful to the model,
understandable to humans, and computationally efficient for real-time applications. The main research
questions investigated center on how to achieve these three properties simultaneously.</p>
      <fig id="fig1">
        <caption><p>Figure 1: Framework of the proposed ASE method, from model prediction through SAM2 segmentation and binary concept masks to explanation generation via concept insertion.</p></caption>
      </fig>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        A common approach for XAI computer vision techniques is to attribute importance scores to pixels
or image patches [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. However, such pixel-level explanations can be overwhelming and difficult for
humans to interpret, as they lack semantic meaning and do not capture higher-level concepts relevant
to the task [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. Concept-based and prototype-based explanations are promising alternatives [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
Concept-based explanations are more closely aligned with human reasoning and how humans explain
decisions [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]; they help identify biases and improve classification performance [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], and are more stable
against perturbations and more robust against adversarial attacks than traditional XAI methods [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
However, existing concept-based methods suffer from limitations such as task specificity, reliance on
manual annotation of concepts, and limited automatic concept discovery. The Explain Any Concept
(EAC) method [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] addresses some of these challenges using the Segment Anything Model (SAM1) for
automated concept extraction. EAC assigns importance scores to SAM-generated image segments using
Monte Carlo SHAP, enabling concept-level explanations without manual annotation. However, EAC’s
reliance on a surrogate linear model to approximate the target DNN and the computational expense of
SHAP limit EAC’s practical applicability.
      </p>
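      <p>To make this attribution step concrete, the following is a minimal sketch of Monte Carlo Shapley estimation over binary segment masks, in the spirit of EAC’s Monte Carlo SHAP step; the <monospace>model</monospace> and <monospace>masks</monospace> interfaces are illustrative assumptions, not EAC’s actual implementation.</p>
      <preformat>
import numpy as np

def mc_shapley(model, image, masks, target, n_perm=50, baseline=0.0):
    """Monte Carlo Shapley values for image segments.

    model(batch) returns class probabilities; masks is a (K, H, W)
    array of binary segment masks; image is (H, W, C).
    """
    K = len(masks)
    phi = np.zeros(K)
    for _ in range(n_perm):
        order = np.random.permutation(K)          # random segment order
        present = np.zeros(masks[0].shape, dtype=bool)
        x = np.full_like(image, baseline)         # start from the baseline
        prev = model(x[None])[0, target]
        for k in order:
            present |= masks[k].astype(bool)
            x = np.where(present[..., None], image, baseline)
            cur = model(x[None])[0, target]
            phi[k] += cur - prev                  # marginal contribution of segment k
            prev = cur
    return phi / n_perm                           # average over permutations
      </preformat>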
      <p>
        Most XAI research is centered on determining the most influential input features, often referred
to as feature importance [
        <xref ref-type="bibr" rid="ref14 ref9">14, 9</xref>
        ]. An alternative approach to enhancing model transparency is quantifying
individual training instances’ influence on the model’s predictions, known as sample-based explanations
(SBE) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Current state-of-the-art methods for sample-based explanations are generally categorized
into retraining-based and gradient-based approaches [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Retraining-based methods operate on the
principle that a training sample’s importance can be quantified by measuring the impact of its removal
on the model’s performance after retraining [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Several notable works have developed methods
based on this approach [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ]. While this approach is simple and human-understandable, its primary
limitation is computational complexity. Gradient-based methods attribute training sample importance
by calculating gradients over model parameters and evaluating the similarity between the gradients [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. A fundamental limitation of gradient-based methods is the computational burden of
computing the inverse Hessian [
        <xref ref-type="bibr" rid="ref21 ref22">21, 22</xref>
        ].
      </p>
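      <p>As an illustration of the gradient-based family, the sketch below scores a training sample by the dot product between its loss gradient and a test sample’s loss gradient, in the spirit of gradient tracing [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]; the <monospace>model</monospace> and <monospace>loss_fn</monospace> objects are placeholders, and real implementations additionally sum such products over training checkpoints.</p>
      <preformat>
import torch

def grad_vector(model, loss_fn, x, y):
    """Flattened loss gradient w.r.t. model parameters for one sample."""
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.flatten() for g in grads])

def gradient_influence(model, loss_fn, x_train, y_train, x_test, y_test):
    """Influence of a training sample on a test prediction, estimated
    as the similarity (dot product) of their parameter gradients."""
    g_train = grad_vector(model, loss_fn, x_train, y_train)
    g_test = grad_vector(model, loss_fn, x_test, y_test)
    return torch.dot(g_train, g_test).item()
      </preformat>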
      <sec id="sec-2-1">
        <title>Superpixel correlation</title>
      </sec>
      <sec id="sec-2-2">
        <title>CorrSHAP explanations</title>
      </sec>
      <sec id="sec-2-3">
        <title>SHAP explanations Φ</title>
      </sec>
      <sec id="sec-2-4">
        <title>Correlation matrix C</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>Building upon EAC, Any Segment Explanations (ASE), an improved local, post-hoc, and model-agnostic
explanation method, was proposed. ASE overcomes EAC’s limitations of model approximation and
high computational cost while achieving superior model faithfulness. ASE employs the state-of-the-art
image segmentation algorithm Segment Anything Model 2 (SAM2) in combination with Segment
Anything Model 1 (SAM1) and a residual segment for concept extraction, enabling broader and more
relevant concept capture. ASE uses concept insertion and deletion techniques to determine concept
attributions, avoiding the need for surrogate models and the associated inaccuracies. The framework of
the proposed ASE method is shown in Figure 1. This paper is currently under review.</p>
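      <p>A minimal sketch of insertion- and deletion-based concept attribution follows. It is a simplified illustration of the general technique, not ASE’s actual implementation; the <monospace>model</monospace> and <monospace>concept_masks</monospace> interfaces and the averaging of the two signals are assumptions.</p>
      <preformat>
import numpy as np

def concept_attribution(model, image, concept_masks, target, baseline=0.0):
    """Score each concept by the probability gained when it alone is
    inserted into a baseline image and the probability lost when it
    alone is deleted from the original image."""
    p_full = model(image[None])[0, target]
    p_base = model(np.full_like(image, baseline)[None])[0, target]
    scores = []
    for m in concept_masks:                       # m: (H, W) binary mask
        m3 = m[..., None].astype(bool)            # broadcast over channels
        inserted = np.where(m3, image, baseline)  # concept alone on baseline
        deleted = np.where(m3, baseline, image)   # concept removed from image
        gain = model(inserted[None])[0, target] - p_base
        drop = p_full - model(deleted[None])[0, target]
        scores.append(0.5 * (gain + drop))        # assumed combination rule
    return np.asarray(scores)
      </preformat>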
      <p>While ASE showed good performance, it does not consider the interdependence of visual concepts.
To address this limitation, a novel method, Correlation SHAP (CorrSHAP), which leverages the correlations
between image concepts to accelerate SHAP attribution calculation, was proposed. CorrSHAP
transforms image superpixels into centralized vector representations and employs a modified
Pearson correlation approach to quantify superpixel relationships (Figure 2). By leveraging the concept
correlations, CorrSHAP dramatically reduces the number of feature subsets that need to be evaluated for
accurate SHAP value estimation, resulting in substantial computational savings. This paper has been
accepted to the XAI 2025 conference.</p>
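      <p>The correlation step can be sketched as follows; the per-superpixel descriptor (per-channel mean and standard deviation) is a hypothetical choice, and plain Pearson correlation stands in for the modified variant used by CorrSHAP.</p>
      <preformat>
import numpy as np

def superpixel_correlation(image, segment_labels):
    """Pairwise correlation between superpixels of an (H, W, 3) image,
    given an (H, W) integer map of superpixel labels."""
    ids = np.unique(segment_labels)
    feats = []
    for i in ids:
        px = image[segment_labels == i]           # (n_i, 3) member pixels
        feats.append(np.concatenate([px.mean(0), px.std(0)]))
    F = np.asarray(feats, dtype=float)
    F -= F.mean(axis=0, keepdims=True)            # centralize representations
    return np.corrcoef(F)                         # (K, K) Pearson matrix
      </preformat>
      <p>Superpixels whose correlation exceeds a chosen threshold can then share attribution estimates, which is what shrinks the space of feature subsets that SHAP sampling must cover.</p>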
      <p>Current state-of-the-art methods for estimating training data attribution are highly computationally
expensive and scale poorly. To address these limitations, a novel black-box approach
leveraging kernel functions was proposed. It achieves better model faithfulness while being much faster
than competing methods. This paper is under review for the ICCV conference.</p>
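      <p>Since that method is still under review, only the general idea can be illustrated: a generic kernel-based attribution scores a training sample by its kernel similarity to the test sample. Everything below (the RBF kernel choice, the feature inputs, the label-agreement sign rule) is an assumption for illustration, not the proposed method.</p>
      <preformat>
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Radial basis function kernel between two feature vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_attribution(train_feats, train_labels, test_feat, test_label,
                       gamma=1.0):
    """Score each training sample by its kernel similarity to the test
    sample, positive if the labels agree and negative otherwise."""
    sims = np.array([rbf_kernel(f, test_feat, gamma) for f in train_feats])
    signs = np.where(train_labels == test_label, 1.0, -1.0)
    return sims * signs
      </preformat>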
      <fig id="fig3">
        <caption><p>Figure 3: (a) Mean AUC insertion performance, (b) mean AUC deletion performance, and (c) execution time, each as a function of the correlation threshold.</p></caption>
      </fig>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>The experimental results demonstrate that CorrSHAP substantially reduces execution time while maintaining explanation faithfulness
(Table 2). This outcome indicates the effectiveness of our correlation method in accurately assigning
correlations to superpixels. Consequently, we can restrict the computation of superpixel attributions
for explanations to a smaller subset of correlated superpixels. A qualitative comparison of CorrSHAP with
Monte Carlo SHAP is shown in Figure 4.</p>
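      <p>The faithfulness scores reported here are insertion/deletion AUCs; a minimal sketch of the insertion variant follows (the deletion variant starts from the full image and removes segments instead). The <monospace>model</monospace> and <monospace>masks</monospace> interfaces are assumed as in the earlier sketches.</p>
      <preformat>
import numpy as np

def insertion_auc(model, image, masks, scores, target, baseline=0.0):
    """Insert segments from most to least important, tracking the
    target-class probability; faithfulness is the area under the curve."""
    order = np.argsort(scores)[::-1]              # most important first
    present = np.zeros(masks[0].shape, dtype=bool)
    probs = [model(np.full_like(image, baseline)[None])[0, target]]
    for k in order:
        present |= masks[k].astype(bool)
        x = np.where(present[..., None], image, baseline)
        probs.append(model(x[None])[0, target])
    return np.trapz(probs, dx=1.0 / len(order))   # normalized AUC
      </preformat>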
      <sec id="sec-4-1">
        <title>Original image</title>
      </sec>
      <sec id="sec-4-2">
        <title>MCSHAP</title>
      </sec>
      <sec id="sec-4-3">
        <title>CorrSHAP</title>
      </sec>
      <sec id="sec-4-4">
        <title>Model prediction: apiary</title>
      </sec>
      <sec id="sec-4-5">
        <title>Original image</title>
      </sec>
      <sec id="sec-4-6">
        <title>MCSHAP</title>
      </sec>
      <sec id="sec-4-7">
        <title>CorrSHAP</title>
      </sec>
      <sec id="sec-4-8">
        <title>Model prediction: spider</title>
      </sec>
      <sec id="sec-4-9">
        <title>Original image</title>
      </sec>
      <sec id="sec-4-10">
        <title>MCSHAP</title>
      </sec>
      <sec id="sec-4-11">
        <title>CorrSHAP</title>
      </sec>
      <sec id="sec-4-12">
        <title>Model prediction: elephant</title>
      </sec>
      <sec id="sec-4-13">
        <title>Original image</title>
      </sec>
      <sec id="sec-4-14">
        <title>MCSHAP</title>
      </sec>
      <sec id="sec-4-15">
        <title>CorrSHAP</title>
      </sec>
      <sec id="sec-4-16">
        <title>Model prediction: vestment</title>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Research Impact and Future Work</title>
      <p>The proposed XAI methods offer explanations that are faithful to the model, understandable to humans,
and computationally efficient, which makes them practical and applicable in real-world scenarios.
The proposed sample-based XAI method has broad applicability across various domains: it can detect
mislabeled data, identify data leakage, analyze memorization effects, and optimize training datasets, and
it is applicable to other fields such as control, active learning, and system identification.</p>
      <p>Future work will explore alternative correlation measures, enhance the robustness of the proposed
XAI methods, and investigate more deeply the interaction between kernel choice,
hyperparameters, and the underlying data distribution, with the goal of developing a more stable and
consistently high-performing sample-based explainability method.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The author has not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Geirhos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-H.</given-names>
            <surname>Jacobsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Michaelis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zemel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Brendel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bethge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Wichmann</surname>
          </string-name>
          ,
          <article-title>Shortcut learning in deep neural networks</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>2</volume>
          (
          <year>2020</year>
          )
          <fpage>665</fpage>
          -
          <lpage>673</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Abuhmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>El-Sappagh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Muhammad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Alonso-Moral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Del</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Díaz-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Herrera</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence (xai): What we know and what is left to attain trustworthy artificial intelligence</article-title>
          ,
          <source>Information fusion 99</source>
          (
          <year>2023</year>
          )
          <fpage>101805</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explanation in artificial intelligence: Insights from the social sciences</article-title>
          ,
          <source>Artificial intelligence 267</source>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Fong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vedaldi</surname>
          </string-name>
          ,
          <article-title>Interpretable explanations of black boxes by meaningful perturbation</article-title>
          ,
          <source>in: Proceedings of the IEEE international conference on computer vision</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>3429</fpage>
          -
          <lpage>3437</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khosla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lapedriza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Oliva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torralba</surname>
          </string-name>
          ,
          <article-title>Learning deep features for discriminative localization</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>2921</fpage>
          -
          <lpage>2929</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Fel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Picard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bethune</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Boissin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vigouroux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Colin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cadène</surname>
          </string-name>
          , T. Serre,
          <article-title>Craft: Concept recursive activation factorization for explainability</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>2711</fpage>
          -
          <lpage>2721</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Achtibat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dreyer</surname>
          </string-name>
          , I. Eisenbraun,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bosse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wiegand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Samek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lapuschkin</surname>
          </string-name>
          ,
          <article-title>From attribution maps to human-understandable explanations through concept relevance propagation</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>5</volume>
          (
          <year>2023</year>
          )
          <fpage>1006</fpage>
          -
          <lpage>1019</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Adebayo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gilmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muelly</surname>
          </string-name>
          , I. Goodfellow,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Sanity checks for saliency maps</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>31</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          , et al.,
          <article-title>A unified approach to interpreting model predictions</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          )
          <fpage>4765</fpage>
          -
          <lpage>4774</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nauta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schlötterer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Van Keulen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Seifert</surname>
          </string-name>
          ,
          <article-title>Pip-net: Patch-based intuitive prototypes for interpretable image classification</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>2744</fpage>
          -
          <lpage>2753</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Watkins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Russakovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monroy-Hernández</surname>
          </string-name>
          ,
          <article-title>"help me help the ai": Understanding how explainability can support human-ai interaction</article-title>
          ,
          <source>in: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Ciravegna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barbiero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lió</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maggini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Melacci</surname>
          </string-name>
          ,
          <article-title>Logic explained networks</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>314</volume>
          (
          <year>2023</year>
          )
          <fpage>103822</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sun</surname>
          </string-name>
          , P. Ma,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Explain any concept: Segment anything meets concept-based explanation</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>36</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-cam: Visual explanations from deep networks via gradient-based localization, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] C.-P. Tsai, C.-K. Yeh, P. Ravikumar, Sample based explanations via generalized representers, Advances in Neural Information Processing Systems 36 (2024).</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] Z. Hammoudeh, D. Lowd, Training data influence analysis and estimation: A survey, Machine Learning 113 (2024) 2351–2403.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] J. Lin, A. Zhang, M. Lécuyer, J. Li, A. Panda, S. Sen, Measuring the effect of training data on deep learning predictions via randomized experiments, in: International Conference on Machine Learning, PMLR, 2022, pp. 13468–13504.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] J. T. Wang, R. Jia, Data banzhaf: A robust data valuation framework for machine learning, in: International Conference on Artificial Intelligence and Statistics, PMLR, 2023, pp. 6388–6421.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] C. Zhang, D. Ippolito, K. Lee, M. Jagielski, F. Tramèr, N. Carlini, Counterfactual memorization in neural language models, Advances in Neural Information Processing Systems 36 (2023) 39321–39362.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] G. Pruthi, F. Liu, S. Kale, M. Sundararajan, Estimating training data influence by tracing gradient descent, Advances in Neural Information Processing Systems 33 (2020) 19920–19930.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] P. W. Koh, P. Liang, Understanding black-box predictions via influence functions, in: International Conference on Machine Learning, PMLR, 2017, pp. 1885–1894.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] A. Schioppa, P. Zablotskaia, D. Vilar, A. Sokolov, Scaling up influence functions, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2022, pp. 8179–8186.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] N. Ravi, V. Gabeur, Y.-T. Hu, R. Hu, C. Ryali, T. Ma, H. Khedr, R. Rädle, C. Rolland, L. Gustafson, et al., Sam 2: Segment anything in images and videos, arXiv preprint arXiv:2408.00714 (2024).</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al., Segment anything, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 4015–4026.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] A. Shrikumar, P. Greenside, A. Kundaje, Learning important features through propagating activation differences, in: International Conference on Machine Learning, PMLR, 2017, pp. 3145–3153.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] M. Sundararajan, A. Taly, Q. Yan, Axiomatic attribution for deep networks, in: International Conference on Machine Learning, PMLR, 2017, pp. 3319–3328.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] N. Kokhlikyan, V. Miglani, M. Martin, E. Wang, B. Alsallakh, J. Reynolds, A. Melnikov, N. Kliushkina, C. Araya, S. Yan, et al., Captum: A unified and generic model interpretability library for pytorch, arXiv preprint arXiv:2009.07896 (2020).</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] M. T. Ribeiro, S. Singh, C. Guestrin, "Why should I trust you?" Explaining the predictions of any classifier, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>