<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>FCAS Ethical AI Demonstrator</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Florian Osswald</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roman Bartolosch</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Torsten Fiolka</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Engelbert Hartmann</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bernhard Krach</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jan Feil</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martin Lederer</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Airbus Defence and Space GmbH</institution>
          ,
          <addr-line>Manching</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Data Machine Intelligence Solutions GmbH</institution>
          ,
          <addr-line>Munich</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Fraunhofer FKIE</institution>
          ,
          <addr-line>Wachtberg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>While artificial intelligence (AI) has become part of more and more areas of daily life - both private and business - this development has not yet progressed as far in the military sector. This is now changing with the development of new projects such as the Future Combat Air System (FCAS) - a highly ambitious European defense project planned as a replacement for systems such as the Eurofighter from 2040 onwards. To facilitate and accelerate discussions on the ethical implications of the use of AI in the military domain, we developed the FCAS Ethical AI Demonstrator. We chose Target Detection, Recognition, and Identification as a highly probable use case and implemented a simulation to showcase the ethical implications of the collaboration between the operator and an AI-assisted system in that application. To help the operator understand and assess the classifications of the automatic target recognition system, explanations of the AI results are computed with an Explainable AI (XAI) method and provided in the user interface. With this hands-on demonstrator, we are pleased to contribute to the discussions on the ethical implications of the use of AI in military applications.</p>
      </abstract>
      <kwd-group>
        <kwd>Ethical AI</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>Future Combat Air System (FCAS)</kwd>
        <kwd>Targeting Cycle</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial intelligence (AI) has become a technology that influences social and economic life
in many ways – ChatGPT shows this vividly [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The military domain is not excluded from
this trend [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5 ref6 ref7">2, 3, 4, 5, 6, 7</xref>
        ]. AI has the potential to help operators make decisions in situations
with ever-decreasing time and ever-increasing amounts of information available [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The amount of information
available to the operator is also extensive in the Future Combat Air System (FCAS), so the
support of an AI seems necessary. Set out to be the most ambitious European defense project
for the upcoming years, FCAS is a joint effort of European nations. From 2040 onward, FCAS
is planned to integrate gradually with current systems such as the Eurofighter, which it
will finally replace.
      </p>
      <p>
        AI will be utilized in FCAS to help operators focus on the information relevant to the
situation at hand. However, the use of AI in such a sensitive domain poses serious ethical
and legal questions. To investigate the responsible use of new technologies in FCAS and to
determine necessary guidelines for such a system, the FCAS Forum [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] was founded – an
interdisciplinary commission with experts from fields such as political science, history, and
theology, next to technical specialists that have already published on this matter [
        <xref ref-type="bibr" rid="ref10 ref3 ref4 ref6">4, 3, 6, 10</xref>
        ]. As
a basis for discussions about the responsible use of AI in the military domain we developed the
FCAS Ethical AI Demonstrator. It showcases an exemplary scenario of collaboration between
the operator and an AI.
      </p>
      <p>
        Using Explainable AI (XAI) methods is imperative here, since it is crucial for the operator to
understand and be able to assess the AI’s assistance. Extracting the information on which neural
networks base their decisions, thus making the decision process explainable or interpretable,
is the focus of the research field XAI. It was heavily influenced by the XAI program the
Defense Advanced Research Projects Agency (DARPA) launched in 2017 [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. In the
subsequent years, various approaches were proposed as surveyed in [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13, 14, 15, 16, 17, 18, 19, 20</xref>
        ].
      </p>
      <p>In the following, we first elaborate on the motivation of developing the FCAS Ethical AI
Demonstrator. This requires a rough introduction into the operational context of a military
situation. After that, we explain the actual scope and content of the demonstrator. We then
highlight the technical details including the relevance of XAI in it. Finally, the necessity of XAI
for developing ethical AI is discussed.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Context and Motivation</title>
      <p>The use of AI is expected to have an immense impact on the conduct of military operations
[21]. Performing operational activities in a dynamic environment requires a quick adaptation
of decisions. This is formalized in the OODA-Cycle [22], short for Observe, Orient, Decide,
and Act. It describes four stages of decision-making in fast changing environments. A key
question in relation to the use of AI in these stages of decision-making is the degree of
human involvement. In this context, a machine’s level of authority can be described by its
dependency on human actors in the execution of the OODA-Cycle activities, especially in
light of operational uncertainty [23]: Human-in-the-loop (the human makes decisions and acts),
Human-on-the-loop (the system makes decisions and acts; the human monitors and can intervene),
or Human-out-of-the-loop (the system makes decisions and acts; no human intervention is possible) [24].</p>
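<p>The three levels of authority above can be captured in a small data model. The following is an illustrative sketch only; the type and function names are our own and not part of any FCAS system.</p>

```python
from enum import Enum

class AuthorityLevel(Enum):
    """Degree of human involvement in the OODA-cycle, as described in the text."""
    HUMAN_IN_THE_LOOP = "human makes decisions and acts"
    HUMAN_ON_THE_LOOP = "system decides and acts; human monitors and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "system decides and acts; no human intervention possible"

def requires_human_decision(level: AuthorityLevel) -> bool:
    # Only human-in-the-loop keeps the Decide stage with the human operator;
    # the other two levels delegate the decision to the machine.
    return level is AuthorityLevel.HUMAN_IN_THE_LOOP
```

Such an explicit representation makes the allocation of decision authority auditable, which is one precondition for the discussion of responsibility below.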
      <p>
        The role of the human in the application of AI in military operations and the allocation of
responsibility for machine executed authorities appear central for the development of regulatory
frameworks and the discussion of ethical implications [
        <xref ref-type="bibr" rid="ref2 ref4">2, 4, 25</xref>
        ]. A starting point for a detailed
discussion is a concrete use case. For FCAS, an initial but still incomplete set of AI use
cases was identified and preliminarily assessed in [24]. Among others, this includes Mission Planning
and Execution (MPE), Target Detection, Recognition and Identification (DRI), and Cyber Security
and Resilience (CSR). The FCAS Ethical AI Demonstrator focuses on DRI covering the use of
AI technology to detect and identify potential targets with an Automatic Target Recognition
(ATR). The following section provides a detailed overview of its scope.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Scope and Content</title>
      <p>The FCAS Ethical AI Demonstrator showcases the collaboration between an operator and an AI
performing DRI. One exemplarily implemented scenario shows an unmanned aerial system
(UAS) flying ahead of the main forces to detect and identify hostile air defense systems (see
Figure 1). On the ground, military vehicles are intermixed with civilian infrastructure. The user
of the demonstrator, i.e. the operator of the UAS, has to decide upon the AI’s detections. They
must verify or reject a detected target or mark it for further investigation.</p>
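<p>The operator's three possible decisions per detection can be sketched as a minimal data structure. The names below (Detection, Verdict, decide) are hypothetical illustrations, not the demonstrator's actual code.</p>

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Verdict(Enum):
    """The three operator decisions described in the text."""
    VERIFIED = auto()      # confirm the detected target
    REJECTED = auto()      # dismiss the detection
    INVESTIGATE = auto()   # mark for further investigation

@dataclass
class Detection:
    track_id: int
    label: str                          # e.g. "hostile air defense system"
    confidence: float                   # ATR confidence score
    verdict: Optional[Verdict] = None   # set only by the human operator

def decide(det: Detection, verdict: Verdict) -> Detection:
    """Record the operator's decision: the AI proposes, the human decides."""
    det.verdict = verdict
    return det
```

The point of the sketch is that the verdict field is never written by the AI itself, mirroring the human-in-the-loop setting of the scenario.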
      <p>To keep the operator with meaningful control in the loop the target detection is presented
together with an explanation. For this, the well-established XAI method LIME (Local Interpretable
Model-Agnostic Explanations, [26]) is used to generate heatmaps visualizing the features of the
targets which were essential for the detection. The user of the demonstrator is thus set in a
possible real-life situation where they can experience the impact an AI-based assistant could
have on the process of target selection. This especially facilitates discussions about ethical
issues which might arise.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Technical Details</title>
      <p>The FCAS Ethical AI Demonstrator is implemented as a web-based application system.
Correspondingly, the synthetic video, ATR data and LIME explanation are combined and
controlled in a web-based user interface. In the following, we first briefly describe the synthetic
video. Then, we describe the used ATR and how its detections are explained with LIME. Finally,
the overall application architecture and user interface (UI) are described. As a data basis for
the demonstrator, we use a 4K video generated within the high fidelity Airbus pilot training
environment. It is based on a scriptable aircraft model carrying a controllable video pod
and observing a manually defined automated ground scenario. The necessary metadata for
geolocalization is embedded into the recording.</p>
      <p>The ATR AI model used on the synthetic video is based on the Airbus proprietary CeMoreDeep
architecture. It combines a feature extractor of a convolutional neural network with a highly
optimized support vector machine to detect and classify objects. Furthermore, the Airbus
proprietary software AI Engine RT stabilizes the generated tracks and transfers the predicted
position into latitude and longitude for geolocalization and visualization on a map. As this
ATR model is provided as a black box, a model agnostic XAI method is necessary to explain its
classifications. While several approaches like SHAP [ 27] and Ablation-CAM [28] are suitable,
we have chosen LIME [26], a well-established surrogate based XAI model, for this purpose.
LIME works by sampling input images and creating locally interpretable models around each
given image. By doing so, it identifies the critical areas of the image that the ATR model uses to
make its predictions. This allows users to better understand the decision-making process of the
ATR model and evaluate the quality of its classifications.</p>
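<p>The LIME procedure described above (sample perturbed inputs, fit a locally interpretable model, read off feature importances) can be illustrated with a self-contained sketch. This is neither the demonstrator's implementation nor the lime library's API; it is a minimal surrogate over superpixel masks, assuming a scalar-valued predict function and a precomputed segmentation.</p>

```python
import numpy as np

def lime_like_importance(image, predict_fn, segments,
                         num_samples=500, kernel_width=0.25, seed=0):
    """Minimal LIME-style sketch: perturb superpixels, fit a weighted linear
    surrogate, and return one importance weight per segment. `segments` is an
    integer mask of the same shape as `image` assigning pixels to superpixels."""
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    n = len(seg_ids)
    # Binary indicators: which superpixels are kept in each perturbed sample.
    Z = rng.integers(0, 2, size=(num_samples, n))
    Z[0] = 1  # include the unperturbed image itself
    preds = np.empty(num_samples)
    for i, z in enumerate(Z):
        perturbed = image.copy()
        for j, keep in zip(seg_ids, z):
            if not keep:
                perturbed[segments == j] = 0.0  # black out the masked superpixel
        preds[i] = predict_fn(perturbed)
    # Weight samples by proximity to the original image (fraction of kept segments).
    dist = 1.0 - Z.mean(axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares: surrogate coefficients = per-segment importance.
    A = np.hstack([Z, np.ones((num_samples, 1))]) * np.sqrt(w)[:, None]
    b = preds * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(seg_ids.tolist(), coef[:n]))
```

Segments with large positive weights are exactly the "critical areas" rendered as a heatmap in the demonstrator's UI; the method never inspects the model internals, which is what makes it applicable to a black-box ATR.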
      <p>The demonstrator is implemented as a distributed architecture using web-based technologies.
After logging into the web UI, the user can play back the video; ATR and LIME data are loaded
from a web server and displayed accordingly. The UI guides the user through the experience
and additionally collects their reactions.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Explainable and Ethical AI</title>
      <p>Jobin et al. [29] analysed the guidelines and principles that were issued to constitute ethical AI.
Five dominant principles were identified: transparency, justice and fairness, non-maleficence,
responsibility, and privacy [29, 30]. XAI results, here the visualization of relevant image parts,
help to make the decision-making process of the AI more transparent. While ethics provides
principles that should be met by an AI in order for it to be considered ethical, XAI can be an
enabler to meet the requirements and uncover other ethical issues that may arise in a particular
context. With the FCAS Ethical AI Demonstrator, we contribute to the discussions on the ethical
use of AI in a specific context, the military domain.</p>
      <p>[16] W. Samek, G. Montavon, S. Lapuschkin, C. J. Anders, K.-R. Müller, Explaining deep neural networks and beyond: A review of methods and applications, Proceedings of the IEEE 109 (2021) 247–278.
[17] D. Minh, H. X. Wang, Y. F. Li, T. N. Nguyen, Explainable artificial intelligence: A comprehensive review, Artif. Intell. Rev. 55 (2022) 3503–3568. URL: https://doi.org/10.1007/s10462-021-10088-y. doi:10.1007/s10462-021-10088-y.
[18] C. Molnar, Interpretable Machine Learning, 2 ed., 2022. URL: https://christophm.github.io/interpretable-ml-book.
[19] A. Holzinger, A. Saranti, C. Molnar, P. Biecek, W. Samek, Explainable AI Methods - A Brief Overview, 2022, pp. 13–38. doi:10.1007/978-3-031-04083-2_2.
[20] U. Schmid, B. Wrede, What is missing in XAI so far?: An interdisciplinary perspective, KI - Künstliche Intelligenz 36 (2022). doi:10.1007/s13218-022-00786-2.
[21] P. Svenmarck, L. Luotsinen, M. Nilsson, J. Schubert, Possibilities and challenges for artificial intelligence in military applications, in: Proceedings of the NATO Big Data and Artificial Intelligence for Military Decision Making Specialists' Meeting, 2018, pp. 1–16.
[22] R. Coram, Boyd: The fighter pilot who changed the art of war, Hachette + ORM, 2002.
[23] M. Firlej, A. Taeihagh, Regulating human control over autonomous systems, Regulation &amp; Governance 15 (2021) 1071–1091.
[24] M. Azzano, S. Boria, S. Brunessaux, B. Carron, A. Cacqueray, S. Gloeden, F. Keisinger, B. Krach, S. Mohrdieck, The responsible use of artificial intelligence in FCAS—an initial assessment, White Paper, 2021. URL: https://www.fcasforum.eu/articles/responsible-use-of-artificial-intelligence-in-fcas.
[25] F. E. Morgan, B. Boudreaux, A. J. Lohn, M. Ashby, C. Curriden, K. Klima, D. Grossman, Military applications of artificial intelligence: Ethical concerns in an uncertain world, Technical Report, RAND Corporation, Santa Monica, CA, United States, 2020.
[26] M. T. Ribeiro, S. Singh, C. Guestrin, "Why should I trust you?": Explaining the predictions of any classifier, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
[27] S. M. Lundberg, S.-I. Lee, A unified approach to interpreting model predictions, Advances in Neural Information Processing Systems 30 (2017).
[28] H. G. Ramaswamy, et al., Ablation-CAM: Visual explanations for deep convolutional network via gradient-free localization, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020, pp. 983–991.
[29] A. Jobin, M. Ienca, E. Vayena, The global landscape of AI ethics guidelines, Nature Machine Intelligence 1 (2019) 389–399.
[30] H. Vainio-Pekka, M. O.-o. Agbese, M. Jantunen, V. Vakkuri, T. Mikkonen, R. Rousi, P. Abrahamsson, The role of explainable AI in the research field of AI ethics, ACM Trans. Interact. Intell. Syst. (2023). URL: https://doi.org/10.1145/3599974. doi:10.1145/3599974. Just Accepted.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] OpenAI, GPT-4
          <source>Technical Report, OpenAI</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Verdiesen</surname>
          </string-name>
          , F. S. de Sio, V. Dignum,
          <article-title>Accountability and control over autonomous weapon systems: A framework for comprehensive human oversight</article-title>
          ,
          <source>Minds and Machines</source>
          <volume>31</volume>
          (
          <year>2021</year>
          )
          <fpage>137</fpage>
          -
          <lpage>163</lpage>
          . URL: https://doi.org/10.1007/s11023-020-09532-9. doi:10.1007/s11023-020-09532-9.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>W.</given-names>
            <surname>Koch</surname>
          </string-name>
          ,
          <article-title>On digital ethics for artificial intelligence and information fusion in the defense domain</article-title>
          ,
          <source>IEEE Aerospace and Electronic Systems Magazine</source>
          <volume>36</volume>
          (
          <year>2021</year>
          )
          <fpage>94</fpage>
          -
          <lpage>111</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>W.</given-names>
            <surname>Koch</surname>
          </string-name>
          ,
          <article-title>On ethically aligned information fusion for defence and security systems</article-title>
          ,
          <source>in: 2020 IEEE 23rd International Conference on Information Fusion (FUSION)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>W.</given-names>
            <surname>Koch</surname>
          </string-name>
          ,
          <article-title>What artificial intelligence offers to the air C2 domain?</article-title>
          , NATO Allied Command Transformation (ACT),
          <year>2022</year>
          . URL: https://issuu.com/spp_plp/docs/what_artificial_intelligence_ofers_to_the_air_c2_?fr=sNzFiMzQ4MjEzNTc.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>U.</given-names>
            <surname>Franke</surname>
          </string-name>
          ,
          <source>Harnessing artificial intelligence</source>
          (
          <year>2019</year>
          ). URL: https://www.fcas-forum.eu/publications/Harnessing-artificial-intelligence.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Meerveld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lindelauf</surname>
          </string-name>
          , E. Postma,
          <string-name>
            <given-names>M.</given-names>
            <surname>Postma</surname>
          </string-name>
          ,
          <article-title>The irresponsibility of not using AI in the military</article-title>
          ,
          <source>Ethics and Information Technology</source>
          <volume>25</volume>
          (
          <year>2023</year>
          )
          <fpage>14</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Koch</surname>
          </string-name>
          ,
          <article-title>Perspectives on AI-driven systems for multiple sensor data fusion, tm -</article-title>
          <source>Technisches Messen</source>
          <volume>90</volume>
          (
          <year>2023</year>
          )
          <fpage>166</fpage>
          -
          <lpage>176</lpage>
          . URL: https://doi.org/10.1515/teme-2022-0094. doi:10.1515/teme-2022-0094.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>W.</given-names>
            <surname>Koch</surname>
          </string-name>
          , FCAS Forum. Mission, 2023-04-17. URL: https://www.fcas-forum.eu/en/mission/.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Rosert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sauer</surname>
          </string-name>
          ,
          <article-title>How (not) to stop the killer robots: A comparative analysis of humanitarian disarmament campaign strategies</article-title>
          ,
          <source>Contemporary Security Policy</source>
          <volume>42</volume>
          (
          <year>2021</year>
          )
          <fpage>4</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Adadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Berrada</surname>
          </string-name>
          ,
          <article-title>Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)</article-title>
          ,
          <source>IEEE Access 6</source>
          (
          <year>2018</year>
          )
          <fpage>52138</fpage>
          -
          <lpage>52160</lpage>
          . doi:10.1109/ACCESS.2018.2870052.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gunning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Vorm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Turek</surname>
          </string-name>
          ,
          <article-title>DARPA's explainable AI (XAI) program: A retrospective</article-title>
          ,
          <source>Applied AI Letters</source>
          <volume>2</volume>
          (
          <year>2021</year>
          )
          e61. URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/ail2.61. doi:10.1002/ail2.61.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Barredo Arrieta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Díaz-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Del</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bennetot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tabik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barbado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Garcia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gil-Lopez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Molina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Benjamins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chatila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Herrera</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI</article-title>
          ,
          <source>Information Fusion</source>
          <volume>58</volume>
          (
          <year>2020</year>
          )
          <fpage>82</fpage>
          -
          <lpage>115</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1566253519308103. doi:10.1016/j.inffus.2019.12.012.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <article-title>A survey of methods for explaining black box models</article-title>
          , CoRR abs/1802.01933 (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/1802.01933. arXiv:1802.01933.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D. V.</given-names>
            <surname>Carvalho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Cardoso</surname>
          </string-name>
          ,
          <article-title>Machine learning interpretability: A survey on methods and metrics</article-title>
          ,
          <source>Electronics</source>
          <volume>8</volume>
          (
          <year>2019</year>
          ). URL: https://www.mdpi.com/2079-9292/8/8/832. doi:10.3390/electronics8080832.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>