<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Argumentation-based Explainable Machine Learning ArgEML: α-Version Technical Details</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nicoletta Prentzas</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Cyprus</institution>
          ,
          <addr-line>1 University Avenue, 2109 Aglantzia</addr-line>
          ,
          <country country="CY">Cyprus</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The paper presents the technical details of the ArgEML system α-version, which implements a general argumentation-based framework and methodology for Explainable Machine Learning. ArgEML is based on a novel approach that integrates sub-symbolic methods with logical methods of argumentation to provide explainable solutions to learning problems.</p>
      </abstract>
      <kwd-group>
<kwd>Explainable Machine Learning</kwd>
        <kwd>Argumentation in Machine Learning</kwd>
        <kwd>Explainable Conflict Resolution</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. ArgEML Framework</title>
<p>ArgEML is motivated by several works in the literature that explore the potential of the strong
connection between argumentation and learning in the context of explainability. Some of these works
have studied how to learn argumentation frameworks from data, either abstract frameworks [1],
[2], [3], [4], [5] or structured frameworks [6], [7], [8], [9], [10], [11]. Other interesting works can
be found in [12], [13], [14], [15], [16], [17], [18], [19].</p>
<p>The ArgEML learning methodology is a case of symbolic supervised learning that can also be
applied in a hybrid mode on top of other symbolic or non-symbolic learners that generate
an initial learning theory. The methodology is outlined in Figure 1 and briefly explained in
Section 1.1.</p>
      <sec id="sec-1-1">
        <title>1.1. Methodology overview</title>
<p>• Step 1: decides the language (relevant features / predictors) of the learning problem, in a
similar way to the data-processing step in a standard machine learning pipeline.
• Step 2: identifies the basic contexts of the problem domain by selecting a compact set of
arguments with high coverage to initialize the theory.</p>
        <p>Both steps (1) and (2) can be executed automatically or in a hybrid mode by calling on an
existing subsymbolic or symbolic learner.</p>
<p>• Step 3: involves a repeated learning process that produces an argumentation theory as the
final output of the learner. At each iteration step two main operators are considered: one that
mitigates errors in the definite predictions of (some part of) the current theory, and one that
resolves conflicts underlying the ambiguity of the current theory. The step is guided by a
learning assessment (metric) that measures the quality of a theory as a trade-off between
accuracy and ambiguity.</p>
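<p>The iterative process of Step 3 can be sketched as a simple hill-climbing loop. The following is a minimal Python sketch under stated assumptions: the operators and metric passed in are hypothetical stand-ins, whereas the real ArgEML operators revise a Gorgias argumentation theory.</p>

```python
# Minimal sketch of the Step 3 loop: repeatedly apply the two revision
# operators (error mitigation, conflict resolution) and keep a candidate
# theory only when the learning assessment metric improves.
# The operators and the metric here are hypothetical stand-ins.
def iterative_learning(theory, data, operators, metric, max_steps=10):
    best = metric(theory, data)
    for _ in range(max_steps):
        improved = False
        for op in operators:
            candidate = op(theory, data)
            score = metric(candidate, data)
            if score > best:          # guided by the learning assessment
                theory, best, improved = candidate, score, True
        if not improved:              # no operator helped: stop early
            break
    return theory, best
```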
<p>The resulting explainable model is an argumentation theory that supports the conclusions
(labels) of a target variable (in the classification-problem case). To generate a prediction for an input
case, the theory is queried against all possible conclusions. If exactly one conclusion can be
derived then the prediction is considered definite; otherwise, the derivable conclusions form a dilemma
within the theory. Moreover, a definite prediction can be correct or wrong, that is, definite correct
or definite wrong. The learning assessment metric LA, a generalization of the standard
classification accuracy, includes a weighted element wd that reflects the weakness of the dilemmas of the
theory; e.g., for a binary classification learning problem this factor can be chosen to be one-half.</p>
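<p>The exact formula for LA is not reproduced here; the following sketch encodes one plausible reading consistent with the description above (an assumption, not the paper's definition): definite correct predictions count fully, dilemmas count with the weight wd, and definite wrong predictions count zero.</p>

```python
def learning_assessment(outcomes, wd=0.5):
    """One plausible reading (not the paper's exact formula) of the LA
    metric described above: definite correct predictions count fully,
    dilemmas count with weight wd (one-half for binary classification),
    definite wrong predictions count zero.
    outcomes: list of "correct" / "wrong" / "dilemma" labels."""
    score = sum(1.0 if o == "correct" else wd if o == "dilemma" else 0.0
                for o in outcomes)
    return score / len(outcomes)

# With no dilemmas, LA reduces to standard classification accuracy.
print(learning_assessment(["correct", "correct", "wrong", "correct"]))  # 0.75
```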
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. ArgEML system: α-version</title>
<p>The system components and main functions are discussed in Sections 2.1 and 2.2 respectively,
whereas in Section 2.3 we explain the evaluation (system verification) process followed. Details
of the ArgEML theory and learning method can be found in [20]. Figure 2 shows two screenshots
of the system: an ArgEML run on the left and an ArgEML output on the right.</p>
      <sec id="sec-2-1">
        <title>2.1. System components</title>
<p>The ArgEML system is a Java application that integrates with Gorgias [21], a structured
argumentation framework, for the development and evaluation of the argumentation theories it
generates. In the automatic mode of operation, the application accepts as input a dataset
(examples + feature set), while in the hybrid mode of operation, the system also accepts as input
the results of an external ML model’s execution on the input dataset. The current implementation
can process the results of the inTrees [22] library. The application interacts with the SWI-Prolog
component (a versatile implementation of the Prolog language, https://www.swi-prolog.org/) for the
evaluation of the learned Gorgias argumentation theories. This interaction is achieved via the
JPL API (a library that provides a bidirectional interface between Java and Prolog, https://jpl7.org/).</p>
      </sec>
      <sec id="sec-2-2">
<title>2.2. Main functions</title>
<p>The system accepts as input a dataset in the form of a CSV file (the feature set is automatically derived
from the file), a set of decision rules in a predefined format as a CSV file, and a set of parameters
that control the learning process. In the automatic mode of operation, the learning process starts
by exploring the input feature set to construct the initial set of arguments. In the hybrid mode,
additional knowledge is provided as input to the system, in the form of association rules between
input features and the target feature.</p>
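<p>As an illustration of deriving the feature set automatically from a CSV input, consider the following sketch (hypothetical, not the ArgEML implementation; the column names and the last-column-as-target convention are assumptions for this example):</p>

```python
import csv
import io

# Hypothetical sketch: the feature set is derived from the CSV header,
# as described above; we assume the last column is the target feature.
sample = "outlook,windy,decision\nsunny,no,go_to_work\nrainy,yes,stay_home\n"
rows = list(csv.reader(io.StringIO(sample)))
header = rows[0]
features, target = header[:-1], header[-1]
examples = [dict(zip(header, r)) for r in rows[1:]]
print(features, target)   # ['outlook', 'windy'] decision
```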
<p>The output of the system is a Gorgias argumentation theory that we can use like any other ML
model to generate predictions for new inputs, together with the corresponding explanations. The execution
of the system is highly parametric, allowing the end user to fine-tune the execution of the process.
The basic parameters are shown in Table 1.</p>
<p>Table 1 (value type of each basic parameter, followed by its description):
• args-np / args-wp / mixed: strategy to define the type of initial arguments, as general arguments, arguments with premises, or both.
• percentage: the target value for the Definite errors metric.
• percentage: the target value for the Ambiguity metric.
• percentage: the percentage above which a class of data is considered a majority class.
• percentage: the range up to which a class distribution is considered balanced.
• integer: the maximum number of iterative learning steps.
• percentages: the percentages for splitting the data into train and test sets.
• integer: the maximum number of conditions for rules selected by the hybrid process of step 2.
• decimal &lt; 1: the acceptable performance loss during the iterative learning process.
args-np: arguments without premises. args-wp: arguments with premises.</p>
<p>• Parameter fine-tuning: the user can experiment with various parameter values to
understand under which configuration the system performs best for their problem.
• Explanations (system output): explanations of a prediction are provided in a natural form,
containing also a contrastive element against other possible predictions. An example
explanation is shown in Table 2. In this example the system learns, from an artificial dataset
with 10 binary features, an argumentation theory that supports scenarios for “staying at home”
or “going to work”.</p>
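<p>A contrastive explanation of this kind can be sketched as follows. The mini-theory below is hypothetical, in the spirit of the “staying at home” / “going to work” example (the features, rules, and the most-specific-argument-wins policy are illustrative assumptions, not the actual Gorgias theory):</p>

```python
# Hypothetical mini-theory: each rule is (conclusion, premises). The most
# specific applicable argument wins, and the explanation is contrastive
# against the alternatives it defeats. Illustrative only, not ArgEML code.
rules = [
    ("go_to_work", {"workday": 1}),
    ("stay_home",  {"workday": 1, "sick": 1}),
]

def predict_with_explanation(case):
    applicable = [(label, cond) for label, cond in rules
                  if all(case.get(f) == v for f, v in cond.items())]
    label, cond = max(applicable, key=lambda lc: len(lc[1]))
    defeated = sorted({l for l, _ in applicable if l != label})
    return label, "supported by %s, rather than %s" % (cond, defeated)

print(predict_with_explanation({"workday": 1, "sick": 1}))
```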
<p>The system can also use the argumentation-based explanations to partition the problem space
into different sub-groups; examples are shown in Table 3.</p>
        <p>The system can use these sub-groups to provide a graded confidence for new predictions,
depending on the group into which a new case falls. Also, the identification of the dilemma groups
can guide us to look for new data to help resolve them.</p>
      </sec>
      <sec id="sec-2-3">
<title>2.3. Evaluation of the α-version System</title>
<p>Currently, the ArgEML system supports classification problems on datasets with categorical
features. The ArgEML system is under continuous evaluation on different learning problems,
through which we get feedback that helps us tune and improve the approach. We present the
results of our experimentation on three datasets: (1) an artificial dataset, (2) a standard dataset
from an ML repository, and (3) a real-life image dataset. We compare the results with Random
Forest (RF) models in Table 4.</p>
<p>Table 4 (per dataset: parameters (a); then CA for the RF model and DA / LA for ArgEML, on the train set (80%) and the test set (20%)):
Artificial Dataset (120), {args-wp, 0%, 0%, n/a}: train 1 (CA), 1 (DA), 1 (LA); test n/a, n/a, n/a.
IRIS (150), {args-np, 0%, 0%, n/a}: train 0.96, 0.96, 0.96; test 0.90, 0.93, 0.93.
ACSRS (200), {hybrid, 5%, 10%, 2}: train 0.90, 0.94, 0.84; test 0.78, 0.77, 0.71.
(a): {initialize-theory, definite-errors-threshold, ambiguity-threshold, rules-complexity}.</p>
        <p>All experiments were run with majority-class=60%, balanced-distribution=20%, iterative-learning-steps=10. The experiment on
the artificial dataset was run with 100% of the data in the train set. RF: Random Forest. CA: Classification Accuracy. LA: Learning Assessment.
DA: Definite Accuracy.</p>
<p>The comparison shown in Table 4 is between the metric of Definite Accuracy, defined as
(definite correct predictions) / (definite predictions), for the ArgEML theories, and the Classification
Accuracy of the RF models. We are also currently experimenting with running ArgEML in hybrid
mode on top of standard explainability systems, such as LIME [23], SHAP [24] and GLocalX [25].</p>
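<p>The Definite Accuracy metric defined above can be computed directly; a small sketch (the outcome labels are an assumption for illustration):</p>

```python
def definite_accuracy(outcomes):
    # Definite Accuracy as defined above: (definite correct predictions)
    # divided by (definite predictions); dilemma cases are excluded from
    # the denominator. outcomes: "correct" / "wrong" / "dilemma" labels.
    definite = [o for o in outcomes if o in ("correct", "wrong")]
    return sum(o == "correct" for o in definite) / len(definite)

print(definite_accuracy(["correct", "dilemma", "correct", "wrong"]))  # 2/3
```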
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Contribution to xAI community</title>
<p>The related material and the codebase of the system, together with the example datasets used in
the demo, are available on GitHub (github.com/nicolepr/argeml). The release of the ArgEML α-version
will provide the research community with another xAI tool for the learning, experimentation and
development of explainable solutions for decision support. We look forward to collaborating with
the community to improve ArgEML and to work on new ideas. An important case of this is to
examine how ArgEML can be used to provide a post-hoc explainability layer for opaque black-box
learned models.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>A.</given-names>
            <surname>Niskanen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Wallner</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Järvisalo</surname>
          </string-name>
          , “
          <article-title>Synthesizing argumentation frameworks from examples,”</article-title>
<source>J. Artif. Intell. Res.</source>
          , vol.
          <volume>66</volume>
          ,
          <year>2019</year>
          , doi: 10.1613/jair.1.11758.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
<source>Int. Conf. Knowl. Represent. Reason. (KR)</source>
          , pp.
          <fpage>549</fpage>
          -
          <lpage>552</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>H.</given-names>
            <surname>Ayoobi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Verbrugge</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Verheij</surname>
          </string-name>
          , “
          <article-title>Argumentation-Based Online Incremental Learning,”</article-title>
          <source>IEEE Trans. Autom. Sci. Eng</source>
          ., vol.
          <volume>19</volume>
          , no.
          <issue>4</issue>
          ,
          <year>2022</year>
, doi: 10.1109/TASE.2021.3120837.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Potyka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bazo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Spieler</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Staab</surname>
          </string-name>
          , “
          <article-title>Learning Gradual Argumentation Frameworks using Meta-heuristics,”</article-title>
          <source>in CEUR Workshop Proceedings</source>
          ,
          <year>2022</year>
          , vol.
          <volume>3208</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Potyka</surname>
          </string-name>
          , “
          <article-title>Interpreting Neural Networks as Quantitative Argumentation Frameworks,”</article-title>
          <source>in 35th AAAI Conference on Artificial Intelligence</source>
          ,
          <source>AAAI</source>
          <year>2021</year>
          ,
          <year>2021</year>
          , vol.
          <volume>7</volume>
, doi: 10.1609/aaai.v35i7.16801.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dimopoulos</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          , “
          <article-title>Learning non-monotonic logic programs: Learning exceptions</article-title>
          ,
          <source>” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</source>
          ,
          <year>1995</year>
          , vol.
          <volume>912</volume>
          , pp.
          <fpage>122</fpage>
          -
          <lpage>137</lpage>
, doi: 10.1007/3-540-59286-5_53.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Wardeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Coenen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T. B.</given-names>
            <surname>Capon</surname>
          </string-name>
          , “
<article-title>PISA: A framework for multiagent classification using argumentation,”</article-title>
          <source>Data Knowl. Eng.</source>
          , vol.
          <volume>75</volume>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>57</lpage>
          ,
          <year>2012</year>
, doi: 10.1016/j.datak.2012.03.001.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          , “
          <article-title>Cognitive reasoning and learning mechanisms,”</article-title>
          <source>in CEUR Workshop Proceedings</source>
          ,
          <year>2017</year>
          , vol.
          <year>1895</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <year>2022</year>
          , [Online]. Available: http://hdl.handle.net/10044/1/98940.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Prentzas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nicolaides</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kyriacou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Pattichis</surname>
          </string-name>
          , “
          <article-title>Integrating machine learning with symbolic reasoning to build an explainable ai model for stroke prediction</article-title>
          ,”
          <source>in Proceedings - 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering</source>
          ,
          <string-name>
            <surname>BIBE</surname>
          </string-name>
          <year>2019</year>
          , Oct.
          <year>2019</year>
          , pp.
          <fpage>817</fpage>
          -
          <lpage>821</lpage>
, doi: 10.1109/BIBE.2019.00152.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Prentzas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gavrielidou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Neophytou</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          , “
          <article-title>Argumentation-based Explainable Machine Learning (ArgEML): a Real-life Use Case on Gynecological Cancer,”</article-title>
          <source>in CEUR Workshop Proceedings</source>
          ,
          <year>2022</year>
          , vol.
          <volume>3208</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>R.</given-names>
            <surname>Riveret</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A. D. A.</given-names>
            <surname>Garcez</surname>
          </string-name>
          , “
          <article-title>Neural-symbolic probabilistic argumentation machines</article-title>
          ,
          <source>” in 17th International Conference on Principles of Knowledge Representation and Reasoning</source>
          ,
          <source>KR</source>
          <year>2020</year>
          ,
          <year>2020</year>
          , vol.
          <volume>2</volume>
          , doi: 10.24963/kr.2020/90.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>E.</given-names>
            <surname>Tsamoura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hospedales</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
, “
          <article-title>Neural-Symbolic Integration: A Compositional Perspective</article-title>
          ,” in
          <source>35th AAAI Conference on Artificial Intelligence</source>
          ,
          <source>AAAI</source>
          <year>2021</year>
          ,
          <year>2021</year>
          , vol.
          <volume>6A</volume>
, doi: 10.1609/aaai.v35i6.16639.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Sendi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Abchiche-Mimouni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Zehraoui</surname>
          </string-name>
          , “
          <article-title>A new transparent ensemble method based on deep learning</article-title>
          ,” in Procedia Computer Science,
          <year>2019</year>
          , vol.
          <volume>159</volume>
, doi: 10.1016/j.procs.2019.09.182.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>Rizzo</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          , “
          <article-title>An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems</article-title>
          ,
          <source>” Expert Syst. Appl.</source>
          , vol.
          <volume>147</volume>
          ,
          <year>2020</year>
, doi: 10.1016/j.eswa.2020.113220.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rizzo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Dondio</surname>
          </string-name>
          , “
<article-title>Examining the modelling capabilities of defeasible argumentation and non-monotonic fuzzy reasoning,”</article-title>
          <source>Knowledge-Based Syst.</source>
          , vol.
          <volume>211</volume>
          ,
          <year>2021</year>
, doi: 10.1016/j.knosys.2020.106514.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>Rizzo</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          , “
          <article-title>A qualitative investigation of the degree of explainability of defeasible argumentation and non-monotonic fuzzy reasoning,”</article-title>
          <source>in CEUR Workshop Proceedings</source>
          ,
          <year>2018</year>
          , vol.
          <volume>2259</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>Rizzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Majnaric</surname>
          </string-name>
          , and L. Longo, “
          <article-title>A Comparative Study of Defeasible Argumentation and Non-monotonic Fuzzy Reasoning for Elderly Survival Prediction Using Biomarkers,”</article-title>
          <source>in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</source>
          ,
          <year>2018</year>
          , vol.
          <volume>11298</volume>
LNAI, doi: 10.1007/978-3-030-03840-3_15.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>Rizzo</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          , “
          <article-title>Comparing and extending the use of defeasible argumentation with quantitative data in real-world contexts</article-title>
          ,” Inf. Fusion, vol.
          <volume>89</volume>
          ,
          <year>2023</year>
, doi: 10.1016/j.inffus.2022.08.025.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Prentzas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Pattichis</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          , “Explainable Machine Learning via Argumentation,”
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Moraitis</surname>
          </string-name>
          , and
          <string-name>
            <surname>N. I. Spanoudakis</surname>
          </string-name>
          , “GORGIAS: Applying argumentation,” Argument Comput., vol.
          <volume>10</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>81</lpage>
          ,
          <year>2019</year>
          , doi: 10.3233/AAC-181006.
        </mixed-citation>
      </ref>
<ref id="ref22">
        <mixed-citation>
          <string-name><given-names>H.</given-names> <surname>Deng</surname></string-name>
          , “
          <article-title>Interpreting tree ensembles with inTrees,”</article-title>
          <source>Int. J. Data Sci. Anal.</source>
          , vol. <volume>7</volume>, no. <issue>4</issue>, pp. <fpage>277</fpage>-<lpage>287</lpage>, <year>2019</year>, doi: 10.1007/s41060-018-0144-8.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name><given-names>M. T.</given-names> <surname>Ribeiro</surname></string-name>
          ,
          <string-name><given-names>S.</given-names> <surname>Singh</surname></string-name>
          , and
          <string-name><given-names>C.</given-names> <surname>Guestrin</surname></string-name>
          , “
          <article-title>'Why should I trust you?' Explaining the predictions of any classifier,”</article-title>
          <source>in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          , Aug. <year>2016</year>, pp. <fpage>1135</fpage>-<lpage>1144</lpage>, doi: 10.1145/2939672.2939778.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name><given-names>S. M.</given-names> <surname>Lundberg</surname></string-name>
          and
          <string-name><given-names>S. I.</given-names> <surname>Lee</surname></string-name>
          , “
          <article-title>A unified approach to interpreting model predictions,”</article-title>
          <source>in Advances in Neural Information Processing Systems</source>
          , <year>2017</year>, pp. <fpage>4766</fpage>-<lpage>4775</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name><given-names>M.</given-names> <surname>Setzu</surname></string-name>
          ,
          <string-name><given-names>R.</given-names> <surname>Guidotti</surname></string-name>
          ,
          <string-name><given-names>A.</given-names> <surname>Monreale</surname></string-name>
          ,
          <string-name><given-names>F.</given-names> <surname>Turini</surname></string-name>
          ,
          <string-name><given-names>D.</given-names> <surname>Pedreschi</surname></string-name>
          , and
          <string-name><given-names>F.</given-names> <surname>Giannotti</surname></string-name>
          , “
          <article-title>GLocalX - From Local to Global Explanations of Black Box AI Models,”</article-title>
          <source>Artif. Intell.</source>
          , vol. <volume>294</volume>, <year>2021</year>, doi: 10.1016/j.artint.2021.103457.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>