<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Uncovering the Decision-making Process of Cost-sensitive Tree-based Classifiers Using an Adaptation of TreeSHAP</article-title>
      </title-group>
      <author-notes>
        <fn fn-type="equal-contrib"><p>These authors contributed equally.</p></fn>
        <corresp>marija.kopanja@biosense.rs (M. Kopanja); sanja.brdar@biosense.rs (S. Brdar); stefan.hacko@biosense.rs (S. Hačko)</corresp>
      </author-notes>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marija Kopanja</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sanja Brdar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefan Hačko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>BioSense Institute</institution>
          ,
          <addr-line>Novi Sad</addr-line>
          ,
          <country country="RS">Serbia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Sciences, University of Novi Sad</institution>
          ,
          <country country="RS">Serbia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>The cost-sensitive decision tree (CSDT) method is a modification of the decision tree (DT) algorithm that incorporates misclassification costs into the learning process. Cost-sensitive tree-based classifiers are well suited for tackling class imbalance. In general, tree-based classifiers are characterized by a convenient graphical representation of the model, which can be used to explain the classifier's decision-making process. However, the depth of the tree can be a limiting factor in comprehending the model. An additional obstacle to comprehending the decision-making process of CSDT classifiers is the implementation of the method. The TreeSHAP method, a variation of the SHAP methodology for the exact calculation of SHAP values for tree-based models, can facilitate the explanation of (deep) tree-based models. However, the current implementation of the TreeSHAP method is limited to only a few tree-based models, excluding cost-sensitive tree-based classifiers. The aim of this paper is to introduce a cost-sensitive tree explanation method based on the TreeSHAP method and to analyze the insights it provides into the decision-making process of CSDT classifiers compared to DT classifiers.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable artificial intelligence</kwd>
        <kwd>Tree SHAP</kwd>
        <kwd>Cost-sensitive learning</kwd>
        <kwd>Tree-based classifiers</kwd>
        <kwd>Cost-sensitive decision tree</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Cost-sensitive (CS) learning is a subgroup of machine learning (ML) classification algorithms
that are able to cope with samples with different misclassification costs [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
The misclassification cost of a sample is a function of the actual and predicted class, represented
as a two-dimensional cost matrix. Two types of cost matrices can be distinguished:
class-dependent and example-dependent cost matrices [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The former, which relies on a much stronger assumption,
can be used in the context of imbalanced classification [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Namely, class imbalance in a
two-class classification framework describes the problem where one class, the minority class,
is represented by considerably fewer samples than the other. The CS approach is important due to
the vast amount of intrinsically imbalanced data in many domains, including fraud detection
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], credit rating [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], medicine [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], [16], and others [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
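      <p>Such a cost matrix can be illustrated with a small two-class sketch; the concrete cost values below are made up for illustration and are not taken from the paper:</p>

```python
import numpy as np

# Class-dependent cost matrix C[actual, predicted] for a two-class problem.
# Illustrative values only: misclassifying the minority (positive) class
# is made five times as costly as a false positive.
C = np.array([[0.0, 1.0],    # actual = 0: correct prediction, false positive
              [5.0, 0.0]])   # actual = 1: false negative, correct prediction

def misclassification_cost(y_true, y_pred):
    """Look up the cost of predicting y_pred for a sample with true class y_true."""
    return C[y_true, y_pred]
```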
      <p>Tree-based classifiers have an appealing graphical representation of the model, which facilitates
understanding of the decision-making process. However, the depth of the tree can be a limiting
factor in comprehending which features contribute to the classification of each individual
sample. An additional obstacle for CSDT is the implementation of the method. For tree-based
models there is a model-specific xAI method, TreeSHAP, which computes SHAP values
exactly [17], [18], [19]. The objective of our research is to introduce a CS version of
TreeSHAP that facilitates the explanation of (deep) cost-sensitive tree-based models, as the
current implementation is not compatible with the CSDT algorithm.</p>
      <p>In the literature, there is a limited number of articles dealing solely with interpretable ML
models for imbalanced data, of which CS classification is a special case [20], [21]. To the
best of the present authors’ knowledge, there are no articles in the literature considering
explanation methods for CSDT.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>
        Given a cost matrix, either class-dependent or example-dependent, the CSDT method uses a
greedy binary splitting procedure for feature space stratification, with a cost-sensitive
splitting criterion expressed as the expected reduction in misclassification cost. The cost-based
impurity measure is the minimal cost of labeling a given node with a particular class. More details
about the method can be found in [
about the method can be found in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
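      <p>A minimal sketch of this cost-based impurity, assuming a two-class setting with per-sample false-positive and false-negative costs and zero cost for correct predictions (function and argument names are ours, not those of the CSDT implementation):</p>

```python
import numpy as np

def node_impurity(y, fp_cost, fn_cost):
    """Cost-based impurity of a node: the minimal total cost of labeling
    every sample in the node with a single class, and that least costly label.

    y       : 0/1 class labels of the samples falling into the node
    fp_cost : per-sample cost of predicting 1 when the true class is 0
    fn_cost : per-sample cost of predicting 0 when the true class is 1
    """
    y = np.asarray(y)
    cost_label_0 = fn_cost[y == 1].sum()  # label node 0: misclassify all positives
    cost_label_1 = fp_cost[y == 0].sum()  # label node 1: misclassify all negatives
    if cost_label_0 <= cost_label_1:
        return cost_label_0, 0            # least costly label is 0
    return cost_label_1, 1                # least costly label is 1
```

The candidate split chosen by the greedy procedure is then the one whose children's impurities sum to the largest expected cost reduction relative to the parent.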
      <p>Different methods have been proposed for calculating SHAP values [22]. One model-specific
method is TreeSHAP [17], which computes exact SHAP values in polynomial time for
tree-based models. The method is helpful for understanding the global tree model structure
through many local explanations. This is especially important for deep tree-based models, as
the depth of the tree is a limiting factor in comprehending the model’s decision process. For
CSDT classifiers, another limiting factor is the impractical graphical representation of the
model: extracting decision rules for individual samples is hard even for shallow CSDT models.
Moreover, applying the TreeSHAP algorithm has so far been impossible due to its incompatibility
with the CSDT algorithm.</p>
      <p>TreeSHAP uses the structure of a tree-based model to compute SHAP values exactly. CSDT
uses a cost matrix in the tree-building process to decide whether a sample will be classified
as positive or negative, depending on the number of samples from each class in the node and
the cost matrix. Consequently, the cost matrix needs to be taken into account by the TreeSHAP
algorithm.</p>
      <p>To create the CSTreeSHAP method as a subclass of TreeSHAP, we developed several recursive
functions for extracting the required information from the CS tree model, such as the sequential
IDs of left and right child nodes. The cost matrix is not used explicitly, due to its
sample-dependent nature for the dataset used in our research; instead, attributes of the existing CS tree
object are adapted, such as the number of samples of each class per node and
the corresponding part of the cost matrix. This is a crucial part, since the prediction in each
terminal node is determined by the number of samples in the node and their costs in the
cost matrix. Recall that each terminal node in the CSDT model is allocated to the least costly class.
The available implementation of TreeSHAP relies on a special tree structure of a tree-based model,
which CSDT does not have, at least not in the required form. However, the CSDT tree model can
be described, and therefore recreated, using the mentioned information. Extraction of this
information for classic DT models is straightforward due to its implementation in the scikit-learn
library [23]. The code of our CSTreeSHAP implementation will be made available on GitHub with
the next, extended version of the paper.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Experiments</title>
      <p>The aim of our experimental work was to analyze feature importance retrieved from the
local explanations obtained with the CSTreeSHAP and TreeSHAP methods for CSDT and DT
models, respectively. Insight into the decision-making process of different ML
models makes it possible to reach evidence-based decisions about which model is preferable.</p>
      <p>
        We used a well-known CS dataset [24] from the credit scoring domain, where the minority
class of risky clients (i.e., clients more likely to default on a loan) has a higher
misclassification cost [25], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [26], [27], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. More information about the dataset and the cost
matrix can be found in the costcla library (a Python module for cost-sensitive machine learning, https://pypi.org/project/costcla/) and in the article [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], respectively.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Experimental setup</title>
        <p>
          For partitioning the dataset we used a 10-fold cross-validation procedure. Measures used for
evaluation include precision and recall per class, the F1 score, and the relative cost reduction
measure (RCR) from studies [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], [28], [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], which is the most relevant measure in the CS framework. Due to
space limitations, results for these measures are not reported.
        </p>
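        <p>The RCR measure can be sketched as follows, assuming per-sample misclassification costs and taking the cheapest trivial single-class predictor as the cost baseline; this is our reading of the savings-style measure used in the cited studies, and the function names are ours:</p>

```python
import numpy as np

def total_cost(y_true, y_pred, fp_cost, fn_cost):
    """Total misclassification cost, assuming correct predictions cost 0."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fp = (y_pred == 1) & (y_true == 0)   # false positives
    fn = (y_pred == 0) & (y_true == 1)   # false negatives
    return fp_cost[fp].sum() + fn_cost[fn].sum()

def relative_cost_reduction(y_true, y_pred, fp_cost, fn_cost):
    """Cost reduction relative to the cheapest trivial classifier that
    assigns a single class to every sample: 1 means zero cost, 0 means
    no improvement over the trivial baseline, negative means worse."""
    base = min(total_cost(y_true, np.zeros_like(y_true), fp_cost, fn_cost),
               total_cost(y_true, np.ones_like(y_true), fp_cost, fn_cost))
    return 1.0 - total_cost(y_true, y_pred, fp_cost, fn_cost) / base
```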
      <p>In the explanation phase, the SHAP values obtained per sample are averaged for each feature
over the whole dataset and represented in a global feature importance plot. The tree depth is
varied from 2 to 10, i.e. from shallow to deeper tree models, as the objective was to
investigate whether DT and CSDT models trained on the same data make decisions based on
similar indicators and which model would be preferred by domain experts as end users.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Results</title>
        <p>
          The cross-validated results for varying depths of a tree model are analyzed. The results for CSDT
models coincide with those reported in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Given RCR as a performance measure, the model of
choice would be the CSDT model, which achieves the highest RCR. There is consistently good
performance of CSDT in terms of RCR at all depths compared to DT. Global feature importance
plot using summary plots (Fig.1) shows the top-rated features in descending order according to
their importance, i.e. mean absolute SHAP values.
1A Python module for cost-sensitive machine learning. https://pypi.org/project/costcla/
        </p>
      <p>Due to space limitations, not all plots are presented. For this dataset, the results are shown
in Fig. 1 for depth 3, as there is only a slight increase in RCR for CSDT until the tree depth
reaches 5. Notably, the top features for DT and CSDT coincide, with slight perturbations in
their order. However, the CSDT model is characterized by a larger number of splitting features
than DT at the same depth, meaning it is harder to comprehend the most important
features behind the model's decisions, especially due to the lack of a code implementation that
could facilitate visualization of the CSDT tree model represented as a dictionary object.</p>
        <p>Figure 1: (a) Global feature importance for the DT model. (b) Global feature importance for the CSDT model.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Concluding remarks and future work</title>
      <p>The presented article introduces the CSTreeSHAP method as an xAI tool for better understanding
the driving forces behind the decision-making process of CS classifiers, with a focus on CSDT
models. Besides being well suited for CS problems, the potential of the tool and of CSDT
classifiers lies in the large amount of imbalanced data to which the methodology could
be applied. The usefulness of the explanation method is demonstrated for different datasets
at different tree depths. The ability to compare explanations of different tree models
at varying depths enables well-grounded model selection. Directions for future work include
extending the application of the CSTreeSHAP method to ensemble methods with cost-sensitive
tree models as base learners and to other (ensemble) cost-sensitive models.</p>
      <p>[16] X. Wan, J. Liu, W. Cheung, T. Tong, Learning to improve medical decision making from
imbalanced data without a priori cost, BMC Medical Informatics and Decision Making 14
(2014) 1–9.
[17] S. Lundberg, G. Erion, H. Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb,
N. Bansal, S. Lee, From local explanations to global understanding with explainable AI for
trees, Nature Machine Intelligence 2 (2020) 2522–5839.
[18] S. Hart, Shapley Value, in: J. Eatwell, M. Milgate, P. Newman (Eds.), Game Theory, Palgrave
Macmillan UK, London, 1989, pp. 210–216.
[19] R. Mitchell, E. Frank, G. Holmes, GPUTreeShap: Massively parallel exact calculation of SHAP
scores for tree ensembles, 2020. doi:10.48550/ARXIV.2010.13972.
[20] D. Dablain, C. Bellinger, W. Aha, N. Chawla, B. Krawczyk, Understanding Imbalanced
Data: XAI &amp; Interpretable ML Framework, ResearchGate Preprint (2022). doi:10.13140/RG.2.2.14645.96489.
[21] Y. Gao, Y. Zhu, Y. Zhao, Dealing with imbalanced data for interpretable defect prediction,
Information and Software Technology 151 (2022). doi:10.1016/j.infsof.2022.107016.
[22] S. Lundberg, S. Lee, A unified approach to interpreting model predictions, in: Advances in
Neural Information Processing Systems 30, Curran Associates, Inc., 2017, pp. 4765–4774.
[23] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel,
P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher,
M. Perrot, E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine
Learning Research 12 (2011) 2825–2830.
[24] W. C. Credit Fusion, Give me some credit, 2011. URL: https://kaggle.com/competitions/GiveMeSomeCredit.
[25] D. Boughaci, A. Alkhawaldeh, A new variable selection method applied to credit scoring,
Algorithmic Finance 7 (2018) 43–52. doi:10.3233/AF-180227.
[26] S. Liu, R. Wang, Y. Han, Research on personal credit evaluation based on machine learning
algorithm, in: 2021 6th International Symposium on Computer and Information Processing
Technology (ISCIPT), 2021, pp. 48–52. doi:10.1109/ISCIPT53667.2021.00016.
[27] A. Markov, Z. Seleznyova, V. Lapshin, Credit scoring methods: Latest trends and points to
consider, The Journal of Finance and Data Science 8 (2022) 180–201. doi:10.1016/j.jfds.2022.07.002.
[28] B. A. Correa, A. Stojanovic, D. Aouada, B. Ottersten, Improving credit card fraud detection
with calibrated probabilities, 2014. doi:10.1137/1.9781611973440.78.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] A. Ali, S. M. Shamsuddin, A. Ralescu, Classification with class imbalance problem: A review, 7 (2015) 176–204.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] B. A. Draper, C. E. Brodley, P. E. Utgoff, Goal-directed classification using linear machine decision trees, IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (1994) 888–893. doi:10.1109/34.310684.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] P. Kumar, R. Bhatnagar, K. Gaur, A. Bhatnagar, Classification of imbalanced data: Review of methods and applications, IOP Conference Series: Materials Science and Engineering 1099 (2021) 012077. doi:10.1088/1757-899X/1099/1/012077.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] I. Mienye, Y. Sun, Performance analysis of cost-sensitive learning methods with application to imbalanced medical data, Informatics in Medicine Unlocked 25 (2021). doi:10.1016/j.imu.2021.100690.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] S. Yanmin, A. Wong, S. Mohamed, Classification of imbalanced data: a review, International Journal of Pattern Recognition and Artificial Intelligence 23 (2011). doi:10.1142/S0218001409007326.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] C. Elkan, The foundations of cost-sensitive learning, in: Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2, 2001, pp. 973–978.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] M. Lazaro, A. Figueiras-Vidal, A Bayes risk minimization machine for example-dependent cost classification, IEEE Transactions on Cybernetics 51 (2021) 3524–3534. doi:10.1109/TCYB.2019.2913572.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] J. Mediavilla-Relaño, M. Lázaro, A. R. Figueiras-Vidal, Imbalance example-dependent cost classification: A Bayesian based method, Expert Systems with Applications 213 (2023) 118909. doi:10.1016/j.eswa.2022.118909.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] B. A. Correa, A. Stojanovic, D. Aouada, B. Ottersten, Cost sensitive credit card fraud detection using Bayes minimum risk, in: 2013 12th International Conference on Machine Learning and Applications, 2013, pp. 333–338. doi:10.1109/ICMLA.2013.68.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] S. Makki, Z. Assaghir, Y. Taher, R. Haque, M. Hacid, H. Zeineddine, An experimental study with imbalanced classification approaches for credit card fraud detection, IEEE Access 7 (2019) 93010–93022.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] L. Zhang, D. Zhang, Evolutionary cost-sensitive extreme learning machine, IEEE Transactions on Neural Networks and Learning Systems 28 (2016) 3045–3060.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] B. A. Correa, D. Aouada, B. Ottersten, Example-dependent cost-sensitive logistic regression for credit scoring, in: 2014 13th International Conference on Machine Learning and Applications, 2014, pp. 263–269. doi:10.1109/ICMLA.2014.48.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] B. A. Correa, D. Aouada, B. Ottersten, Example-dependent cost-sensitive decision trees, Expert Systems with Applications 42 (2015) 6609–6619. doi:10.1016/j.eswa.2015.04.042.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] N. Khalili, M. Rastegar, Optimal cost-sensitive credit scoring using a new hybrid performance metric, Expert Systems with Applications 213 (2023) 119232. doi:10.1016/j.eswa.2022.119232.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] D. Ibomoiye, S. Yanxia, Performance analysis of cost-sensitive learning methods with application to imbalanced medical data, Informatics in Medicine Unlocked 25 (2021) 100690. doi:10.1016/j.imu.2021.100690.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>