<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Szymon Bobek</string-name>
          <email>szymon.bobek@uj.edu.pl</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sławomir Nowaczyk</string-name>
          <email>slawomir.nowaczyk@hh.se</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joao Gama</string-name>
          <email>jgama@fep.up.pt</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sepideh Pashami</string-name>
          <email>sepideh.pashami@hh.se</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rita P. Ribeiro</string-name>
          <email>rpribeiro@fc.up.pt</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zahra Taghiyarrenani</string-name>
          <email>zahra.taghiyarrenani@hh.se</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bruno Veloso</string-name>
          <email>bveloso@fep.up.pt</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lala Rajaoarisoa</string-name>
          <email>lala.rajaoarisoa@imt-nord-europe.fr</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Maciej Szelążek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Grzegorz J. Nalepa</string-name>
          <email>grzegorz.j.nalepa@uj.edu.pl</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AGH UST</institution>
          ,
          <addr-line>Kraków</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Center for Applied Intelligent Systems Research, Halmstad University</institution>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>IMT Nord Europe, Univ. of Lille, Center for digital systems</institution>
          ,
          <addr-line>F-59000 Lille</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Jagiellonian Human-Centered Artificial Intelligence Laboratory (JAHCAI), Mark Kac Center for Complex Systems Research, and Institute of Applied Computer Science, Jagiellonian University</institution>
          ,
          <addr-line>31-007 Kraków</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Porto</institution>
          ,
          <addr-line>Porto, Portugal and INESC TEC, Porto</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Advances in artificial intelligence trigger transformations that make more and more companies enter the Industry 4.0 and 5.0 eras. In many cases, these transformations are gradual and performed in a bottom-up manner. This means that in the first step, the industrial hardware is upgraded to collect as much data as possible, without actual planning of how the information will be utilized. Furthermore, the data storage and processing infrastructure is prepared to keep large volumes of historical data accessible for further analysis. Only in the last step are methods for processing the data developed to improve or gain more insight into the industrial and business processes. Such a pipeline leaves many companies facing huge amounts of data with an incomplete understanding of how the existing knowledge is represented in the data, under which conditions that knowledge no longer holds, or what new phenomena are hidden inside the data. We argue that this gap needs to be addressed by the next generation of XAI methods, which should be expert-oriented and focused on knowledge generation tasks rather than model debugging. The paper is based on the findings of the EU CHIST-ERA project on Explainable Predictive Maintenance (XPM).</p>
      </abstract>
      <kwd-group>
<kwd>Explainability</kwd>
        <kwd>Predictive Maintenance</kwd>
        <kwd>Industry 5.0</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>1. Introduction</title>
      <p>
        neural networks. This includes a variety of model-agnostic methods such as LIME [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], SHAP [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ],
Anchor [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], LORE [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and LUX [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], but also model-specific approaches that take advantage
of gradient-based deviations from the reconstruction of the input features [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] or the internal
features of particular machine learning algorithms, such as GradCam [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], dedicated to deep neural
networks. In parallel to the aforementioned research, another trend is emerging that promotes
the construction of efficient glass-box [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], inherently interpretable models such as explainable
boosting machines [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and, most recently, prototype deep neural networks.
      </p>
      <p>
        Regardless of the underlying techniques used in the XAI methods, their goal is always to explain
the model’s decisions by delivering a description of a relationship between the model input and
the model output. Such a description can be presented in various forms [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]: feature importance,
feature impact, decision rules, prototypes, examples, etc. This builds end-user trust in the model
and allows for better model understanding, which can be used for model debugging, data debiasing,
etc., contributing to boosting the model performance.
      </p>
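        <p>The feature-importance form of explanation mentioned above can be illustrated with a minimal, library-free sketch of permutation importance (the toy model, data, and all names below are our own illustrative assumptions, not part of any cited method):</p>
        <preformat>
```python
import random

# Toy "model": a linear scorer where feature 0 dominates and feature 2 is irrelevant.
def model(x):
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def permutation_importance(model, X, y, n_features, seed=0):
    """Importance of feature j = increase in mean squared error
    after shuffling column j across the dataset."""
    rng = random.Random(seed)

    def mse(data):
        return sum((model(x) - t) ** 2 for x, t in zip(data, y)) / len(y)

    baseline = mse(X)
    importances = []
    for j in range(n_features):
        column = [x[j] for x in X]
        rng.shuffle(column)
        # Rebuild the dataset with column j permuted, everything else intact.
        X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, column)]
        importances.append(mse(X_perm) - baseline)
    return importances

rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]  # labels come from the model itself, so baseline error is 0

imp = permutation_importance(model, X, y, n_features=3)
# Feature 0 (weight 3) should score highest; feature 2 (weight 0) exactly zero.
```
        </preformat>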
      <p>
        However, at the same time, one should assume that high-performing models encapsulate
important knowledge, which can be used to better understand not only the model itself but also the
data and the process that generates it. This is especially important in areas such as industry, where
data is itself a black box: in most cases it comes unlabeled, noisy, incomplete, and, most
importantly, without any explicitly formulated background knowledge about the processes that
generated it. This is even more essential for Industry 5.0, where the collaboration between AI and
humans is one of the core assumptions, and the pitfalls and false hopes for current XAI have already
been reported [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. Therefore, shedding more light on the data and focusing on knowledge
generation regarding the underlying mechanism that is the source of the data can help improve
not only the model but also the whole process (i.e., business process, manufacturing process,
maintenance process) on which the next generation of models can be trained, as presented in
Fig. 1. Surprisingly, this motivation was never a foundation for the state-of-the-art XAI algorithms.
Therefore, in our opinion, the new XAI methods for Industry 5.0 require several major changes
in the design assumptions, including the following:
A1 – Presenting knowledge that is already available in the domain is closer to confirmation
than explanation. The new XAI methods should focus more on extracting knowledge from
predictive models than be limited to explanations that mainly serve the purpose of model
or dataset debugging.
      </p>
      <p>
        A2 – Explanation is an act of knowledge transfer. This imposes a need for research on methods
that will allow bidirectional transfer between humans and AI systems. The new generation of XAI
algorithms should inherently be designed to allow humans to formulate expectations and needs
to be addressed by the XAI algorithm and receive explanations on the desired level of abstraction.
A3 – New knowledge discovered in the explanation process may not be consistent with domain
knowledge. The new generation of XAI algorithms should generate explanations that can be
validated against domain knowledge and contested by a human [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Therefore, they should be
tightly coupled with argumentative frameworks [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>A4 – Explanations are useful if they are actionable [15]. New XAI algorithms should be designed
so that the knowledge they extract can be directly utilized by machine learning models, in the business
process, or for instant decision-making, closing the explanation loop.</p>
      <p>[Fig. 1. The four design assumptions positioned between performance, explainability, human, and knowledge: A1 – explanations as knowledge generation; A2 – explanations as an interactive knowledge-transfer mechanism between the human and the system; A3 – conformance checking between domain knowledge and explanations; A4 – utilisation of explanations for boosting the performance of AI systems and industrial processes.]</p>
    </sec>
    <sec id="sec-4">
      <title>2. Showcases and limitations of current XAI in industry</title>
      <p>The assumptions presented in the previous section arose from our experience in the
eXplainable Predictive Maintenance (XPM)1 project, where we work on integrating explanations
into Artificial Intelligence (AI) solutions within the area of Predictive Maintenance (PM) [16].
In the following section, we present several showcases of major challenges for practical XAI
applications in four selected cases: electric vehicles, metro trains, steel plants, and wind farms.</p>
      <p>In the steel industry, one of the questions is how to translate the decision-making process
performed by an ML algorithm in a way that can be understood by a domain expert who is
not necessarily an ML specialist. This is an essential aspect, neglected in many of the current
XAI solutions, as successful implementation of new solutions requires compliance with the
technologies and procedures used in the company (A4). Therefore, it is crucial to develop XAI
methods that will allow incorporating real-life knowledge into quality decision support tasks and
creating a semantic connection between the data and human specialists [17] (A2).</p>
      <p>Another issue concerning the proper utilization of domain knowledge in explanations involves
so-called physics-guided or physics-informed systems, popular in industries that rely on processes
with a strong theoretical background in mathematics or physics. In such a case, the AI system
can be enhanced with that knowledge, as shown in [18]. However, current XAI methods do not
allow us to take full advantage of the fact that domain knowledge has been used in the ML model,
although it could be used as part of the quality control of explanations in terms of compliance
with known theoretical foundations (A3).</p>
      <p>Addressing (A1), (A2), and (A3) is also important in the automotive industry. Transfer Learning
and, more specifically, Domain Adaptation, have been shown to effectively address dynamic and
diverse environments [19]. However, explanations of how domains differ and how AI/ML models
capture and align these variations are still lacking. An industry-specific Explainable Domain
Adaptation (XDA) approach is required to extract information on domain changes that may not
be immediately apparent to domain experts [20] (A1), (A3). Only by leveraging the explanations
and knowledge acquired through XDA, can experts make informed decisions and develop business
strategies (for example, maintenance plans or usage guidelines) that align with the requirements
and expectations of diverse customers and fleet operators (A1).
1See https://www.chistera.eu/projects/xpm.</p>
      <p>We observed analogous problems in explaining failures and anomalies. When estimating the
time until a system failure, the explanation should correspond to describing survival or hazard
functions. Survival analysis is a widely used method in various industries for that purpose; however,
it frequently falls short in providing explanations for its estimations. Therefore, it is crucial to
develop explanations that express the influential and distinctive factors that have shaped the
observed survival patterns [21] (A1), (A3).</p>
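      <p>To make the notion of a survival function concrete, the following is a minimal Kaplan-Meier sketch on hypothetical fleet data (our own illustration; not the explanation method of [21]):</p>
        <preformat>
```python
# Minimal Kaplan-Meier estimator of the survival function S(t) from
# (time, event) pairs, where event=1 is an observed failure and
# event=0 a right-censored unit (still running when observation ended).
def kaplan_meier(data):
    event_times = sorted({t for t, e in data if e == 1})
    curve = []
    s = 1.0
    for t in event_times:
        at_risk = sum(1 for ti, _ in data if ti >= t)      # units not yet failed/censored
        failures = sum(1 for ti, e in data if ti == t and e == 1)
        s *= 1.0 - failures / at_risk                       # multiply survival fractions
        curve.append((t, s))
    return curve

# Hypothetical fleet: failure times in operating hours; two units censored.
fleet = [(100, 1), (150, 1), (150, 0), (200, 1), (250, 0)]
curve = kaplan_meier(fleet)
# curve steps down at each observed failure time: 100, 150, 200.
```
        </preformat>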
      <p>In the case of online anomaly detection algorithms [22, 23], some rule-based methods [24, 25]
allow domain experts to identify an abnormal behavior of a specific sensor or module (A1).
Nonetheless, despite the capability to report anomalies in real-time, these rule-based models
do not identify the root cause of the failure (A3) or allow the transfer of domain knowledge to
the model (A2). In the specific case of the mobility industry, imprecise explanations can lead
maintenance teams to wrong decisions, consequently causing the faulty component not to
be repaired. The explanation layer must be designed with a human in the loop to improve
knowledge transfer between the domain expert and the model.</p>
      <p>
        In the wind farm industry, maintenance driven by an anomaly-explanation approach should
assist maintenance planners in making accurate diagnostic decisions [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The question is how
to generate explanations and provide specific inferences about the problem (its severity and
complexity) that can be validated against the domain knowledge (A3). In such a setting, the
explanation provided by the most popular XAI algorithms, such as SHAP, LIME, Anchor, etc., is
often not sufficient. On the one hand, the Shapley value for each feature is obtained from its
contribution to all possible subsets of the other features, so that for N features the exact
calculation of SHAP values is exponential in the number of features [26]. This makes the method
computationally demanding for decision-support purposes. On the other hand, the LIME method
is very sensitive to small perturbations in the input data, which can cause large changes in the
resulting explanations [27]. This implies that in some situations it is possible to generate
different explanations for very similar observations, which may confuse the target user (operator
or maintenance manager). Accordingly, to make the decision-making process efficient, we should
provide detailed and stable explanations based on domain expertise, as well as on the exploitation
of data and inspection reports [28]. Indeed, integrating this information should significantly
improve the monitoring and control of industrial processes (A4).
      </p>
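      <p>The exponential cost of exact Shapley values can be seen directly in a from-scratch computation that enumerates every coalition of the remaining features (the toy payoff function below is our own illustrative assumption, not wind-farm data):</p>
        <preformat>
```python
from itertools import combinations
from math import factorial

# Toy set-valued "model": the payoff of a coalition of present features.
# Features 0 and 1 contribute 1.0 each and jointly earn an interaction bonus;
# feature 2 contributes nothing.
def value(coalition):
    v = 0.0
    if 0 in coalition: v += 1.0
    if 1 in coalition: v += 1.0
    if 0 in coalition and 1 in coalition: v += 2.0  # interaction term
    return v

def shapley(n, value):
    """Exact Shapley values: for each feature i, average its marginal
    contribution over all 2^(n-1) subsets of the other features -- the
    enumeration that makes exact computation exponential in n."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

phi = shapley(3, value)
# Symmetric features 0 and 1 split the interaction bonus evenly (2.0 each);
# the irrelevant feature 2 receives exactly 0.
```
        </preformat>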
      <sec id="sec-4-1">
        <title>3. Summary</title>
        <p>To make Explainable Artificial Intelligence (XAI) more useful, several changes can be
implemented, such as moving toward context-aware, human-centric, understandable, and trustworthy
explanations. We argue that this cannot be delivered with current XAI methods unless the new
generation of XAI systems includes the assumptions A1-A4 as design requirements. For the
development of AI systems, that means a need for redesigning the ML/DM pipeline to make XAI an
integral part of it and for allocating more resources to research on efficient ways of human-AI interaction.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Acknowledgment</title>
        <p>The paper is funded by the XPM project, financed by the National Science Centre, Poland, under the
CHIST-ERA programme (NCN UMO-2020/02/Y/ST6/00070), by the Swedish Research Council
under grant CHIST-ERA-19-XAI-012, and by the EC project HumanE-AI: Toward AI Systems
That Augment and Empower Humans (grant #820437).</p>
        <p>[15] I. Linkov, S. Galaitsi, B. D. Trump, J. M. Keisler, A. Kott, Cybertrust: From explainable to actionable and interpretable artificial intelligence, Computer 53 (2020) 91-96. doi:10.1109/MC.2020.2993623.
[16] S. Pashami, S. Nowaczyk, Y. Fan, J. Jakubowski, N. Paiva, N. Davari, S. Bobek, S. Jamshidi, H. Sarmadi, A. Alabdallah, R. P. Ribeiro, B. Veloso, M. Sayed-Mouchaweh, L. Rajaoarisoa, G. J. Nalepa, J. Gama, Explainable predictive maintenance, 2023. arXiv:2306.05120.
[17] M. Szelążek, S. Bobek, G. J. Nalepa, Semantic data mining-based decision support for quality assessment in steel industry, Expert Systems n/a (2023) e13319. doi:10.1111/exsy.13319.
[18] J. Jakubowski, P. Stanisz, S. Bobek, G. J. Nalepa, Roll wear prediction in strip cold rolling with physics-informed autoencoder and counterfactual explanations, in: 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), 2022, pp. 1-10. doi:10.1109/DSAA54385.2022.10032357.
[19] Z. Taghiyarrenani, S. Nowaczyk, S. Pashami, M.-R. Bouguelia, Multi-domain adaptation for regression under conditional distribution shift, Expert Systems with Applications 224 (2023) 119907.
[20] A. Berenji, S. Nowaczyk, Z. Taghiyarrenani, Data-centric perspective on explainability versus performance trade-off, in: Advances in Intelligent Data Analysis XXI, Springer Nature Switzerland, Cham, 2023, pp. 42-54.
[21] A. Alabdallah, S. Pashami, T. Rögnvaldsson, M. Ohlsson, SurvSHAP: A proxy-based algorithm for explaining survival models with SHAP, in: 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), 2022, pp. 1-10. doi:10.1109/DSAA54385.2022.10032392.
[22] N. Davari, S. Pashami, B. Veloso, S. Nowaczyk, Y. Fan, P. M. Pereira, R. P. Ribeiro, J. Gama, A fault detection framework based on LSTM autoencoder: A case study for Volvo bus data set, in: Advances in Intelligent Data Analysis XX: 20th International Symposium on Intelligent Data Analysis, IDA 2022, Rennes, France, April 20-22, 2022, Proceedings, Springer, 2022, pp. 39-52.
[23] N. Davari, B. Veloso, R. P. Ribeiro, J. Gama, Fault forecasting using data-driven modeling: A case study for Metro do Porto data set, in: Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part II, Springer, 2023, pp. 400-409.
[24] R. P. Ribeiro, S. M. Mastelini, N. Davari, E. Aminian, B. Veloso, J. Gama, Online anomaly explanation: A case study on predictive maintenance, in: Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part II, Springer, 2023, pp. 383-399.
[25] G. Vilone, L. Longo, A quantitative evaluation of global, rule-based explanations of post-hoc, model agnostic methods, Frontiers in Artificial Intelligence 4 (2021).
[26] S. Lundberg, S.-I. Lee, A unified approach to interpreting model predictions, in: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett (Eds.), Advances in Neural Information Processing Systems 30, Curran Associates, Inc., 2017, pp. 4765-4774.
[27] D. Alvarez-Melis, T. S. Jaakkola, On the robustness of interpretability methods, CoRR abs/1806.08049 (2018). arXiv:1806.08049.
[28] M. Sayed-Mouchaweh, L. Rajaoarisoa, Explainable decision support tool for IoT predictive maintenance within the context of Industry 4.0, in: 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), 2022, pp. 1492-1497. doi:10.1109/ICMLA55696.2022.00234.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] M. T. Ribeiro, S. Singh, C. Guestrin, "Why should I trust you?": Explaining the predictions of any classifier, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, Association for Computing Machinery, New York, NY, USA, 2016, pp. 1135-1144. doi:10.1145/2939672.2939778.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] S. M. Lundberg, G. Erion, H. Chen, A. DeGrave, J. M. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, S.-I. Lee, From local explanations to global understanding with explainable AI for trees, Nature Machine Intelligence 2 (2020) 56-67. doi:10.1038/s42256-019-0138-9.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] M. T. Ribeiro, S. Singh, C. Guestrin, Anchors: High-precision model-agnostic explanations, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18, AAAI Press, New Orleans, Louisiana, USA, 2018, pp. 1527-1535.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] R. Guidotti, A. Monreale, S. Ruggieri, D. Pedreschi, F. Turini, F. Giannotti, Local Rule-Based Explanations of Black Box Decision Systems, 2018. arXiv:1805.10820. doi:10.48550/arXiv.1805.10820.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] S. Bobek, G. J. Nalepa, Introducing uncertainty into explainable AI methods, in: M. Paszynski, D. Kranzlmüller, V. V. Krzhizhanovskaya, J. J. Dongarra, P. M. A. Sloot (Eds.), Computational Science - ICCS 2021, Springer International Publishing, Cham, 2021, pp. 444-457.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] J. Randriarison, L. Rajaoarisoa, M. Sayed-Mouchaweh, Faults explanation based on a machine learning model for predictive maintenance purposes, in: Proceedings of the 7th edition in the series of the International Conference on Control, Automation and Diagnosis, 2023, p. 1.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] R. R. Selvaraju, A. Das, R. Vedantam, M. Cogswell, D. Parikh, D. Batra, Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization, CoRR abs/1610.02391 (2016). arXiv:1610.02391.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] C. Rudin, Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, Nature Machine Intelligence 1 (2019) 206-215. doi:10.1038/s42256-019-0048-x.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Y. Lou, R. Caruana, J. Gehrke, G. Hooker, Accurate intelligible models with pairwise interactions, in: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13, Association for Computing Machinery, New York, NY, USA, 2013, pp. 623-631. doi:10.1145/2487575.2487579.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bobek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Veloso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rajaoarisoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Nalepa</surname>
          </string-name>
          ,
          <article-title>Feature importances as a tool for root cause analysis in time-series events</article-title>
          ,
          <source>in: Proceedings of the International Conference on Computational Science (ICCS)</source>
          ,
          <year>2023</year>
          , p.
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Adadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Berrada</surname>
          </string-name>
          ,
          <article-title>Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)</article-title>
          ,
          <source>IEEE Access</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>52138</fpage>
          -
          <lpage>52160</lpage>
          . doi:10.1109/ACCESS.2018.2870052.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lahiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Dickerson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Pitfalls of Explainable ML: An Industry Perspective</article-title>
          ,
          <year>2021</year>
          . URL: http://arxiv.org/abs/2106.07758. doi:10.48550/arXiv.2106.07758, arXiv:2106.07758 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G.</given-names>
            <surname>Vilone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>A novel human-centred evaluation approach and an argument-based method for explainable artificial intelligence</article-title>
          , in:
          <string-name>
            <given-names>I.</given-names>
            <surname>Maglogiannis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Iliadis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Macintyre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cortez</surname>
          </string-name>
          (Eds.),
          <source>Artificial Intelligence Applications and Innovations</source>
          , Springer International Publishing, Cham,
          <year>2022</year>
          , pp.
          <fpage>447</fpage>
          -
          <lpage>460</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vassiliades</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bassiliades</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Patkos</surname>
          </string-name>
          ,
          <article-title>Argumentation and explainable artificial intelligence: a survey</article-title>
          ,
          <source>The Knowledge Engineering Review</source>
          <volume>36</volume>
          (
          <year>2021</year>
          )
          <fpage>e5</fpage>
          . doi:10.1017/S0269888921000011.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>