<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Fairness Beyond Binary Decisions: A Case Study on German Credit</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Deborah D Kanubala</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Isabel Valera</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kavya Gupta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Saarland School of Informatics</institution>
          ,
          <institution>MPI for Software Systems</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Data-driven approaches are increasingly used to (partially) automate decision-making in credit scoring by predicting whether an applicant is “creditworthy or not” based on a set of features about the applicant, such as age and income, along with what we refer to here as treatment decisions, e.g., loan amount and duration. Existing data-driven approaches for automating and evaluating the accuracy and fairness of such credit decisions ignore that treatment decisions (here, loan terms) are part of the decision and thus may be subject to discrimination. This discrimination can propagate to the final outcome (repaid or not) of positive decisions (granted loans). In this extended abstract, we rely on causal reasoning and a broadly studied fair machine-learning dataset, German Credit, to i) show that current fair data-driven approaches neglect discrimination in treatment decisions (i.e., loan terms) and its downstream consequences on the decision outcome (i.e., ability to repay); and ii) argue for the need to move beyond binary decisions in fair data-driven decision-making in consequential settings such as credit scoring.</p>
      </abstract>
      <kwd-group>
        <kwd>algorithmic fairness</kwd>
        <kwd>credit scoring</kwd>
        <kwd>discrimination</kwd>
        <kwd>path-specific counterfactual fairness</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Motivation</title>
      <p>
        In many areas such as hiring [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ], law [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4, 5, 6, 7</xref>
        ], and finance [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref8 ref9">8, 9, 10, 11, 12</xref>
        ], data-driven
solutions are used in consequential decisions by predicting outcomes from historical data [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ].
The main assumption of these data-driven approaches for auditing or automating
decision-making processes is access to historical data D = {s_i, x̃_i, y_i}_{i=1}^N. In the context of loan
approval [
        <xref ref-type="bibr" rid="ref15 ref16 ref17">15, 16, 17</xref>
        ], the available dataset is often assumed to contain a representative sample1
of the random variables corresponding to i) the sensitive attribute of applicants, S; ii) the observed
outcome after a positive decision, Y, which is used as a ground-truth label indicating the
“creditworthiness” of applicants [
        <xref ref-type="bibr" rid="ref19 ref20 ref21 ref22">19, 20, 21, 22</xref>
        ]; and iii) features X̃ = {X, T} that account for
both the applicant characteristics X, such as income, educational level, etc., along with the
treatment decisions T, which in our case correspond to the loan terms, such as duration and loan
amount, under which a historical positive decision (i.e., a granted loan) was given.

Figure 1: (a) Traditional formulation; (b) Proposed reformulation.
      </p>
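      <p>To make this data setting concrete, the following minimal sketch (not part of the original study) organises a preprocessed German Credit table into the sensitive attribute S, applicant features X, treatment decisions T (loan terms), and outcome Y. The file name and column names are assumptions about the preprocessing, not artifacts of the paper.</p>
      <preformat>
# Minimal sketch: organising historical data D = {s_i, x_i, t_i, y_i} for the
# German Credit study. File name and column names below are assumptions.
import pandas as pd

df = pd.read_csv("german_credit.csv")  # hypothetical preprocessed Statlog German Credit export

S = df["personal_status"].str.startswith("male").astype(int)    # sensitive attribute s (assumed encoding)
T = df[["duration", "credit_amount"]]                           # treatment decisions t: loan terms
Y = (df["class"] == "good").astype(int)                         # outcome y: loan repaid (1) or not (0)
X = df.drop(columns=["personal_status", "duration", "credit_amount", "class"])  # applicant features x

print(f"n = {len(df)} applicants, {X.shape[1]} applicant features, {T.shape[1]} treatment variables")
      </preformat>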
      <p>
        This setting, however, neglects that past decisions are not binary but also involve the treatment
T, which may, in turn, have a causal effect on the observed outcome Y. Studies have shown that
treatment decisions T can be discriminatory; e.g., the authors of [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] found that while there were
no significant differences in loan approval rates between binary gender identities, there was a
substantial disparity in loan amounts between men and women. Similar studies [
        <xref ref-type="bibr" rid="ref24 ref25 ref26 ref27">24, 25, 26, 27</xref>
        ]
also highlight differences in loan terms between demographic groups. The results of these
studies indicate that decisions are not binary and that discrimination against demographic
groups may be neglected when only binary decisions are considered. That is, while binary decisions
may appear fair (e.g., in terms of acceptance rate [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] or true positive rate [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]) across groups,
some demographic groups may still be subject to discrimination in the treatment they receive,
i.e., the loan terms [
        <xref ref-type="bibr" rid="ref23 ref28 ref29 ref30">23, 28, 29, 30</xref>
        ].
      </p>
      <p>While these works explored discrimination in treatment decisions, to the best of our knowledge
none of them has considered a holistic analysis of the downstream effects of treatment decisions
on the outcome. In this work, we go one step further and propose a setting to systematically
answer the following questions about historical data of the form D = {s_i, x_i, t_i, y_i}_{i=1}^N, where
we have now made explicit the distinction between applicant features X and treatment T:
• Research Question (RQ) 1: Does discrimination exist in the assignment of treatment
T (i.e., loan terms) across demographic groups?
• Research Question (RQ) 2: If so, what are the downstream effects of such
treatment discrimination on the decision outcome Y (i.e., repayment probability)?
Implications: The common assumption that the observed outcome Y is sufficient for making lending decisions is
inadequate. In other words, discrimination in treatment decisions propagates to the
outcomes; e.g., demographics perceived as higher risk may be offered loan terms that negatively
affect their repayment probability, thus reinforcing the negative perception of their risk. In
summary, our study aims to provide a compelling case for rethinking the existing fair
machine-learning pipeline. Next, we detail how we answer the above questions using a
causality-based approach.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Discrimination in Treatment and its Effect on Outcome</title>
      <p>
        We employ a causal reasoning framework [
        <xref ref-type="bibr" rid="ref31">31, 32, 33</xref>
        ] for our work. More specifically, we contrast
the causal graph in Figure 1a, which shows the current fair decision-making framework, with
the one in Figure 1b, where we consider the treatment decisions as part of the decisions made by
the decision-maker. In our revisited data generation process in Figure 1b we make the following
assumptions:
1. The treatment decision T is a causal child of both S and (potentially) X, and we assume
the causal mechanism that generated T to be potentially discriminatory due to a direct
(solid red line in Figure 1b) or indirect (dashed green line) causal path. We refer to the direct causal
effect of S on T as Direct Treatment Discrimination (DTD), the indirect effect (mediated
by X) of S on T as Indirect Treatment Discrimination (ITD), and the combination of the
two as Treatment Discrimination (TD).
2. The treatment decision T is a causal parent of the outcome variable Y, and thus TD may
propagate to Y through the causal path(s) from S to Y mediated by T (dashed blue line in
Figure 1b). A minimal sketch of this structural causal model, under the linear-additive assumption used later in our case study, is given below.
      </p>
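      <p>The following sketch, building on the loading snippet above, instantiates these assumptions with additive linear causal mechanisms, X = f_X(S) + noise and T = f_T(S, X) + noise, and a logistic model for Y. It is an illustrative fit under those stated assumptions, not the authors' implementation.</p>
      <preformat>
# Minimal sketch of the assumed structural causal model of Figure 1b under
# additive linear mechanisms (illustrative, not the authors' implementation).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

S_arr = S.to_numpy().reshape(-1, 1)                    # sensitive attribute, assumed binary {0, 1}
X_arr = X.select_dtypes(include="number").to_numpy()   # numeric applicant features only, for simplicity
T_arr = T.to_numpy()                                   # loan terms: duration and credit amount

f_X = LinearRegression().fit(S_arr, X_arr)                                  # X = f_X(S) + U_X
f_T = LinearRegression().fit(np.column_stack([S_arr, X_arr]), T_arr)        # T = f_T(S, X) + U_T
f_Y = LogisticRegression(max_iter=1000).fit(
    np.column_stack([S_arr, X_arr, T_arr]), Y)                              # P(Y = 1) = sigma(f_Y(S, X, T))
      </preformat>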
      <p>Based on these assumptions, we answer RQ 1 and RQ 2 as follows.</p>
      <p>RQ 1 Discrimination in treatment: We assume that for each factual applicant in
the dataset, which we denote by (s_F, x_F, t_F, y_F), we can rely on the
abduction-action-prediction steps by Pearl [34] to estimate its sensitive counterfactual (s_CF, x_CF, t_CF, y_CF).
Then, we can define the (total) discrimination in treatment as the treatment difference
between the factual individual and its sensitive counterfactual, i.e.,
TD = t_F − t_CF,   (1)
where t_CF = f_T(s_CF, f_X(s_CF)).</p>
      <p>Here, f_T(s, x) and f_X(s) denote the causal functions2 that define the values of
the random variables T and X, respectively, given their causal parents according to the
causal graph in Figure 1b. Equation 1 quantifies the total discrimination. To disentangle
the sensitive attribute’s effect through different pathways, we follow the path-specific analysis in [32]
to rewrite TD as:</p>
      <p>TD = DTD + ITD,   (2)
where DTD = t_F − f_T(s_CF, f_X(s_F)) and ITD = f_T(s_CF, f_X(s_F)) − t_CF.</p>
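      <p>Under the linear-additive model sketched above, the abduction-action-prediction counterfactual and the decomposition in Equations 1 and 2 can be computed as follows. The sketch assumes a binary sensitive attribute and reuses S_arr, X_arr, T_arr and the fitted f_X, f_T from the previous snippet; it is illustrative, not the authors' code.</p>
      <preformat>
# Minimal sketch of Equations 1-2: abduct the noise terms, flip S, and
# recompute X and T. Assumes the additive linear model fitted above.
def counterfactual_treatment(s, x, t):
    """Return (t_cf, t_direct): the full sensitive counterfactual of T, and the
    partial counterfactual where only the direct path from S to T is intervened on."""
    s_cf = 1 - s                                             # flipped (binary) sensitive attribute
    u_x = x - f_X.predict(s)                                 # abducted noise of X
    u_t = t - f_T.predict(np.column_stack([s, x]))           # abducted noise of T
    x_cf = f_X.predict(s_cf) + u_x                           # counterfactual applicant features x_CF
    t_cf = f_T.predict(np.column_stack([s_cf, x_cf])) + u_t  # counterfactual treatment t_CF
    t_direct = f_T.predict(np.column_stack([s_cf, x])) + u_t # only the direct path flipped
    return t_cf, t_direct

t_cf, t_direct = counterfactual_treatment(S_arr, X_arr, T_arr)
TD  = T_arr - t_cf         # total treatment discrimination    (Eq. 1)
DTD = T_arr - t_direct     # direct treatment discrimination   (Eq. 2)
ITD = t_direct - t_cf      # indirect treatment discrimination (Eq. 2)
      </preformat>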
      <p>RQ 2 Effect of discrimination in treatment (TD) on outcome (Y): This is simply the
ratio of the repayment odds3 of a factual applicant (s_F, x_F, t_F) treated according to its true sensitive
attribute to the repayment odds it would have had under the treatment of its counterfactual, i.e., giving
the treatment decision t_CF of the counterfactual (e.g., male) to the factual (e.g., female) applicant. We refer to this as the
Treatment Discrimination Effect (TDE), which we compute as:
TDE = odds(p_F) / odds(p_CF) = exp[ f_Y(s_F, x_F, t_F) − f_Y(s_F, x_F, t_CF) ],   (3)
where the repayment probability for an individual (s, x, t) is given by p = P(Y = 1 | s, x, t) = σ(f_Y(s, x, t)),
with σ(·) denoting the logistic function. Importantly, we can
interpret the TDE values as follows:
a) If TDE ≤ 1, then the odds of the factual individual repaying the credit are equal to or lower than
under the counterfactual treatment, meaning no negative downstream effect of treatment on the outcome.
b) Otherwise, if TDE &gt; 1, the factual individual is more likely to repay the credit under its own treatment
than under the treatment of its sensitive counterfactual. In this case, we consider that the factual
individual has been subject to discrimination.</p>
      <p>2 We make implicit the dependence of the causal functions on the exogenous variables. In addition, we assume the
absence of hidden confounders or, equivalently, causal sufficiency. For more details on structural causal
models, refer to [35].</p>
      <p>3 Repayment odds refers to the likelihood that a borrower will successfully repay a loan [36] and is computed as
odds(p) = p/(1 − p).</p>
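      <p>A sketch of Equation 3 follows, using the logistic outcome model fitted earlier: the decision function of the logistic regression plays the role of f_Y, so the odds ratio is the exponential of the difference in log-odds. As before, this is an illustrative computation under the stated model assumptions.</p>
      <preformat>
# Minimal sketch of Equation 3: repayment-odds ratio between the factual
# treatment t_F and the counterfactual treatment t_CF, holding (s_F, x_F) fixed.
logits_factual = f_Y.decision_function(np.column_stack([S_arr, X_arr, T_arr]))   # f_Y(s_F, x_F, t_F)
logits_counter = f_Y.decision_function(np.column_stack([S_arr, X_arr, t_cf]))    # f_Y(s_F, x_F, t_CF)
TDE = np.exp(logits_factual - logits_counter)   # odds(p_F) / odds(p_CF); values above 1 flag discrimination
      </preformat>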
    </sec>
    <sec id="sec-3">
      <title>3. A Case Study Using German Credit</title>
      <p>
        We analyze the German Credit dataset [37], using loan amount and duration as treatment
decisions. Assuming additive (noise) linear causal functions, we learn the parameters of our
causal model. For RQ 1, we measure the discrimination in treatment along the different
pathways. Our results align with existing literature [
        <xref ref-type="bibr" rid="ref23 ref25 ref27">23, 25, 27</xref>
        ] and reveal that discrimination
exists in treatment. Table 1 shows that males receive on average an increase of 10% and 20%
in duration and credit amount, respectively. While this may allow extending the loan over a
longer period, it often also means paying more interest over the life of the loan and poses a higher
risk of defaulting [38]. Females, on the other hand, receive on average a significantly lower
credit amount and a shorter duration. Our second question, RQ 2, asks how discrimination
in treatment propagates to the outcome. Following our analysis, hypothetically treating a
female (factual) as one would have treated a male (counterfactual) on average decreases the
repayment odds by 9%. Conversely, hypothetically treating males (factual) like females
(counterfactual) increases the repayment odds by 10%. Thus, we conclude that even
though males receive preferential treatment, with a higher loan amount and a longer duration,
this treatment has a negative downstream effect on their ability to pay back the loan;
the disparity in treatment across groups thus puts male borrowers in a higher-risk situation.
      </p>
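      <p>As an illustration of how such group-level numbers can be produced from the per-applicant quantities computed in the earlier sketches (this aggregation is illustrative and does not reproduce the values reported in Table 1):</p>
      <preformat>
# Minimal sketch: average treatment discrimination and TDE per demographic
# group, using the per-applicant TD and TDE computed above (illustrative only).
summary = pd.DataFrame({
    "group": np.where(S_arr.ravel() == 1, "male", "female"),   # assumed encoding of S
    "TD_duration": TD[:, 0],        # factual minus counterfactual loan duration
    "TD_amount": TD[:, 1],          # factual minus counterfactual credit amount
    "TDE": TDE,                     # repayment-odds ratio (Eq. 3)
})
print(summary.groupby("group")[["TD_duration", "TD_amount", "TDE"]].mean())
      </preformat>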
    </sec>
    <sec id="sec-4">
      <title>4. Open questions</title>
      <p>We have shown from our analysis that there is discrimination in treatment and this propagates
to the predictive outcome. Our study provides a compelling argument for rethinking the entire
pipeline of ML fairness. Although we restricted our analysis to the German Credit dataset, the
implications extend to various other domains such as criminal justice, hiring, education, etc.
Furthermore, ensuring fairness in the observed outcome Y may be inadequate to mitigate bias, as
Y is still a function of a biased treatment. This could also lead to a never-ending feedback loop and, in the
worst case, worsen the financial situation of discriminated groups. These results prompt us
to question the current framework and raise several open questions: Is there a need to develop
new notions of fairness, considering that Y remains a composition of an unfair T? What does
designing a fair policy for T entail? What types of datasets are necessary to ensure fairness in
non-binary decision-making processes?</p>
    </sec>
    <sec id="sec-5">
      <title>5. Acknowledgements</title>
      <p>This work has been funded by the European Union (ERC-2021-STG, SAML, 101040177). Views
and opinions expressed are however those of the author(s) only and do not necessarily reflect
those of the European Union or the European Research Council Executive Agency. Neither the
European Union nor the granting authority can be held responsible for them.
</p>
      <p>[32] S. Chiappa, Path-specific counterfactual fairness, in: Proceedings of the AAAI Conference on Artificial Intelligence, 2019.</p>
      <p>[33] M. J. Kusner, J. Loftus, C. Russell, R. Silva, Counterfactual fairness, Advances in Neural Information Processing Systems (2017).</p>
      <p>[34] J. Pearl, Causality: Models, Reasoning and Inference, Cambridge University Press, Cambridge, UK, 2000.</p>
      <p>[35] J. Pearl, M. Glymour, N. P. Jewell, Causal Inference in Statistics: A Primer, John Wiley &amp; Sons, 2016.</p>
      <p>[36] M. Szumilas, Explaining odds ratios, Journal of the Canadian Academy of Child and Adolescent Psychiatry (2010).</p>
      <p>[37] H. Hofmann, Statlog (German credit data) data set, UCI Repository of Machine Learning Databases (1994).</p>
      <p>[38] Z. Guo, Y. Zhang, X. S. Zhao, Risks of long-term auto loans, Journal of Credit Risk (2022).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Faliagka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ramantas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tsakalidis</surname>
          </string-name>
          , G. Tzimas,
          <article-title>Application of machine learning algorithms to an online recruitment system</article-title>
          ,
          <source>in: International Conference on Internet and Web Applications and Services</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Silas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Udhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dahiphale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Parkale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lambhate</surname>
          </string-name>
          ,
          <article-title>Automation of candidate hiring system using machine learning</article-title>
          ,
          <source>International Journal of Innovative Science and Research Technology</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Mahmoud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Shawabkeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. A.</given-names>
            <surname>Salameh</surname>
          </string-name>
          , I. Al Amro,
          <article-title>Performance predicting in hiring process and performance appraisals using machine learning</article-title>
          ,
          <source>in: 10th international conference on Information and communication systems</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Angwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Larson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mattu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirchner</surname>
          </string-name>
          ,
          <article-title>Machine bias, in: Ethics of data and analytics</article-title>
          ,
          <source>Auerbach Publications</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>W.</given-names>
            <surname>Dieterich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mendoza</surname>
          </string-name>
          , T. Brennan,
          <article-title>Compas risk scales: Demonstrating accuracy equity and predictive parity, Northpointe Inc (</article-title>
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hamilton</surname>
          </string-name>
          ,
          <article-title>The sexist algorithm</article-title>
          ,
          <source>Behavioral sciences &amp; the law</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Berk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Heidari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jabbari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kearns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <article-title>Fairness in criminal justice risk assessments: The state of the art</article-title>
          ,
          <source>Sociological Methods &amp; Research</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Almheiri</surname>
          </string-name>
          ,
          <source>Automated loan approval system for banks</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lessmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Baesens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-V.</given-names>
            <surname>Seow</surname>
          </string-name>
          , L. C. Thomas,
          <article-title>Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research</article-title>
          ,
          <source>European Journal of Operational Research</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Tripathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Edla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bablani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Shukla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <article-title>Experimental analysis of machine learning methods for credit score classification</article-title>
          ,
          <source>Progress in Artificial Intelligence</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Moscato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Picariello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sperlí</surname>
          </string-name>
          ,
          <article-title>A benchmark of machine learning approaches for credit score prediction</article-title>
          ,
          <source>Expert Systems with Applications</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sirignano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sadhwani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Giesecke</surname>
          </string-name>
          ,
          <article-title>Deep learning for mortgage risk</article-title>
          ,
          <source>arXiv preprint:1607.02470</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Scantamburlo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Charlesworth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Cristianini</surname>
          </string-name>
          ,
          <article-title>Machine decisions and human consequences</article-title>
          , arXiv preprint:
          <year>1811</year>
          .
          <volume>06747</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>A. Coston,</surname>
          </string-name>
          <article-title>Principled Machine Learning for Societally Consequential Decision Making</article-title>
          ,
          <source>Ph.D. thesis</source>
          , Carnegie Mellon University Pittsburgh, PA,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Hurlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pérignon</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. Saurin,</surname>
          </string-name>
          <article-title>The fairness of credit scoring models</article-title>
          ,
          <source>arXiv preprint:2205.10200</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rajesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lakshmanarao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <article-title>An efficient machine learning classification model for credit approval</article-title>
          ,
          <source>in: Third International Conference on Artificial Intelligence and Smart Energy</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>K.</given-names>
            <surname>Bhatt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <article-title>Loan status prediction in the banking sector using machine learning</article-title>
          ,
          <source>in: International Conference on Computational Intelligence</source>
          ,
          <source>Communication Technology and Networking</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>H.</given-names>
            <surname>Lakkaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ludwig</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. Mullainathan,</surname>
          </string-name>
          <article-title>The selective labels problem: Evaluating algorithmic predictions in the presence of unobservables</article-title>
          ,
          <source>in: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>R.</given-names>
            <surname>Dobbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gilbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kohli</surname>
          </string-name>
          ,
          <article-title>A broader view on bias in automated decisionmaking: Reflecting on epistemology and dynamics</article-title>
          , arXiv preprint:
          <year>1807</year>
          .
          <volume>00553</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lakkaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ludwig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mullainathan</surname>
          </string-name>
          ,
          <article-title>Human decisions and machine predictions</article-title>
          ,
          <source>The quarterly journal of economics</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          , E. Potash,
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. D'Amour</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Lum</surname>
          </string-name>
          , Algorithmic fairness: Choices, assumptions, and definitions,
          <source>Annual Review of Statistics and Its Application</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>B.</given-names>
            <surname>Green</surname>
          </string-name>
          ,
          <string-name>
            <surname>L. Hu,</surname>
          </string-name>
          <article-title>The myth in the methodology: Towards a recontextualization of fairness in machine learning</article-title>
          ,
          <source>in: Proceedings of the machine learning: the debates workshop</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>I.</given-names>
            <surname>Agier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Szafarz</surname>
          </string-name>
          ,
          <article-title>Microfinance and gender: Is there a glass ceiling on loan size?</article-title>
          , World development (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Escalante</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Osinubi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dodson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Taylor</surname>
          </string-name>
          ,
          <article-title>Looking beyond farm loan approval decisions: loan pricing and nonpricing terms for socially disadvantaged farm borrowers</article-title>
          ,
          <source>Journal of Agricultural and Applied Economics</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Alesina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. E.</given-names>
            <surname>Mistrulli</surname>
          </string-name>
          ,
          <article-title>Do women pay more for credit? evidence from italy</article-title>
          ,
          <source>Journal of the European Economic Association</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>D.</given-names>
            <surname>Aristei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gallo</surname>
          </string-name>
          ,
          <article-title>Are female-led firms disadvantaged in accessing bank credit? evidence from transition economies</article-title>
          ,
          <source>International Journal of Emerging Markets</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Gender differences in car loan access: An empirical analysis</article-title>
          ,
          <source>in: Proceedings of the 12th International Conference on E-business, Management and Economics</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A.</given-names>
            <surname>Cozarenco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Szafarz</surname>
          </string-name>
          ,
          <article-title>Women's access to credit in france: how microfinance institutions import disparate treatment from banks</article-title>
          ,
          <source>Available at SSRN</source>
          <volume>2387573</volume>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>I.</given-names>
            <surname>Agier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Szafarz</surname>
          </string-name>
          ,
          <article-title>Credit to women entrepreneurs: The curse of the trustworthier sex</article-title>
          ,
          <source>Available at SSRN</source>
          <volume>1718574</volume>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fuster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Goldsmith-Pinkham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ramadorai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Walther</surname>
          </string-name>
          ,
          <article-title>Predictably unequal? The effects of machine learning on credit markets</article-title>
          ,
          <source>The Journal of Finance</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>D.</given-names>
            <surname>Plecko</surname>
          </string-name>
          , E. Bareinboim,
          <article-title>Causal fairness analysis</article-title>
          ,
          <source>arXiv preprint:2207.11385</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>