<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Luca Deck</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jan-Laurin Müller</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Conradin Braun</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Domenique Zipperling</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Niklas Kühl</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bayreuth</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>The topic of fairness in AI, as debated in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) communities, has sparked meaningful discussions in recent years. From a legal perspective, however, particularly from the perspective of European Union law, many open questions remain. Whereas algorithmic fairness aims to mitigate structural inequalities at the design level, European non-discrimination law is tailored to individual cases of discrimination after an AI model has been deployed. The AI Act might present a tremendous step towards bridging these two approaches by shifting non-discrimination responsibilities into the design stage of AI models. Based on an integrative reading of the AI Act, we comment on legal as well as technical enforcement problems and propose practical implications for bias detection and bias correction in order to specify and comply with concrete technical requirements.</p>
      </abstract>
      <kwd-group>
        <kwd>EU AI Act</kwd>
        <kwd>Algorithmic fairness</kwd>
        <kwd>Non-discrimination law</kwd>
        <kwd>Ethical AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        AI systems’ propensity to discriminate against legally protected groups has been demonstrated
across multiple social contexts, ranging from decision-support systems for criminal risk
assessment [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], recruiting [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and credit scoring [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], to applications in computer vision [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4, 5, 6, 7</xref>
        ] and
natural language processing [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]. In light of the rapid advancements of AI, the increasing
use of AI systems across multiple domains has triggered a broad and interdisciplinary debate
on the “ethics of algorithms” [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
        ]. Central to this debate are the FATE principles
(fairness, accountability, transparency, and ethics), with fairness encompassing the social goals of
non-discrimination, inclusion, and equality [
        <xref ref-type="bibr" rid="ref14">14, 15</xref>
        ].
      </p>
      <p>The discourse at the interface with legal scholarship, however, is only starting to gain traction
(e.g., [16, 17, 18, 19, 20, 21]). In this short paper, we make three contributions: First, we briefly
retrace the academic discourses on non-discrimination law and algorithmic fairness to highlight
their current misalignment. Second, we argue that the European Union’s AI Act might provide
a seminal link for merging these debates. Third, based on this integrative conception, we
sketch how the AI Act could provide a means to solve the enforcement problems of both
non-discrimination law and algorithmic fairness, and comment on upcoming challenges regulators
and developers will face when specifying and verifying technical requirements.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Non-discrimination law vs. algorithmic fairness</title>
      <p>Legal Context: Non-discrimination law and its shortcomings. From a legal perspective,
non-discrimination law appears, at first glance, suitable to address the potential harms of unfair
AI systems. However, legal scholars from both sides of the Atlantic have demonstrated
that U.S. [22] and EU [16] non-discrimination law alike may fall short in doing so. One of
the main deficiencies of traditional non-discrimination regimes in the context of algorithmic
discrimination is law enforcement. Enforcement has always been a central shortcoming of
non-discrimination law, especially in jurisdictions that primarily rely on individual litigation
(cf., [23, 24]). In such jurisdictions, individual victims face substantial problems when it comes
to recognizing, proving, and bringing instances of discrimination before the courts. AI systems
exacerbate these problems [25]. Due to the opacity of these systems, those affected by
algorithmic discrimination are often unable to recognize instances of (potential) discrimination [26].
Moreover, even when individuals suspect discrimination, restricted access to models or training
data severely impedes their ability to meet the requirements of the burden of proof imposed
on them by procedural law [16]. Furthermore, European non-discrimination law is tailored to
individual cases of discrimination, hampering its application to broad-scale goals like designing
fair AI systems. Non-discrimination regimes, therefore, face substantial challenges when it
comes to enforcing the principles of equality and non-discrimination.</p>
      <p>
        Technical context: algorithmic fairness and its shortcomings. On a technical level,
methods for algorithmic fairness from the field of computer science set out to fill this gap. By
developing a plethora of technical bias definitions and fairness metrics (cf. [27, 28, 29]) as well as
practical bias detection and bias mitigation techniques [
        <xref ref-type="bibr" rid="ref15 ref16 ref17 ref18">30, 31, 32, 33</xref>
        ], computer scientists try to
implement ethical and legal fairness considerations “by design” [
        <xref ref-type="bibr" rid="ref19">34</xref>
        ]. The shortcomings of these
technical fairness approaches, however, are twofold: First, formalization and quantification
will never provide answers to fundamentally normative challenges such as selecting the right
fairness metric for the right context or trading off conflicting objectives [
        <xref ref-type="bibr" rid="ref20 ref21">35, 36</xref>
        ]. Such challenges,
arising from conflicts between values, can be supported but not solved by formal methods
[
        <xref ref-type="bibr" rid="ref22">37</xref>
        ]. Second, due to its orientation towards a specific academic audience and reliance on
self-governance, discourse on algorithmic fairness faces its own “enforcement problems” [
        <xref ref-type="bibr" rid="ref23">38</xref>
        ].
The AI Act may alleviate the enforcement problems of non-discrimination law and of the
technical fairness discourse alike.
      </p>
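      <p>To illustrate, the following minimal sketch (our illustration, not taken from the cited surveys or toolkits) implements one classic pre-processing idea, reweighing: each (group, label) combination is weighted by P(group) · P(label) / P(group, label) so that group membership and outcome become statistically independent in the weighted training data. All data and parameters are made up for demonstration.</p>
      <preformat>
# A minimal sketch of the classic "reweighing" pre-processing technique
# (hypothetical data; the weights make group and label independent).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)
# Hypothetical historical labels correlated with group membership
label = (rng.random(n) + 0.15 * group).round().astype(int)

weights = np.empty(n)
for g in (0, 1):
    for y in (0, 1):
        mask = np.logical_and(group == g, label == y)
        # weight = P(group) * P(label) / P(group, label)
        weights[mask] = (group == g).mean() * (label == y).mean() / mask.mean()

# In the weighted sample, the positive rate is (approximately) equal per group:
for g in (0, 1):
    m = group == g
    print(g, round(np.average(label[m], weights=weights[m]), 3))
      </preformat>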
    </sec>
    <sec id="sec-3">
      <title>3. Implications of the AI Act</title>
      <p>
        Enforcement “by design”? According to Recital 4a, the AI Act explicitly aims to protect the
fundamental rights set out in Art. 2 of the Treaty on European Union. Among these rights
are equality and non-discrimination in particular. In order to prevent algorithmic discrimination,
the regulation establishes special requirements (Art. 6 et seq. AI Act) for high-risk systems in
the areas of education (Recital 35), employment (Recital 36), insurance and credit (Recital 37),
law enforcement (Recital 38), as well as migration (Recital 39). However, despite its explicit goal
to prevent discrimination, the regulation lacks a clear substantive standard for determining
when unequal treatment is inadmissible. According to Art. 10(2)f AI Act “[t]raining, validation
and testing data sets shall be subject to data governance and management practices appropriate
for the intended purpose of the AI system” and thus have to be examined for “possible biases
that are likely to [...] lead to discrimination prohibited under Union law”. The AI Act therefore
leaves the judgment call about what constitutes illegal discrimination to existing legislation.
However, traditional non-discrimination law’s requirements can only be implemented during
model development (as intended by the AI Act) if they are “translated” into technical fairness
requirements. To achieve this goal, scholars from all involved domains will have to collaborate. When
doing so, they must proceed in a conscious and contextualizing manner and take into account
the diverging perspectives of the AI Act and non-discrimination law. European non-discrimination
law is tailored to individual instances of discrimination after an AI model has been deployed—an
inherently retrospective approach. In contrast, the AI Act prospectively demands fairness
interventions by implementing non-discrimination requirements at the stage of model design.
Guidance by democratically justified institutions on how to implement such requirements might
bridge the gap toward alleviating both the legal and the technical enforcement problems.
Enabling “bias detection and correction”? Legal requirements for the development of AI
systems do not stem from the AI Act alone. Due to the tension between fairness and privacy
during the training and evaluation stage of AI, conflicts with data protection law may equally
arise. On the one hand, ignoring personal demographic data poses the same risk as the
widely rejected idea of fairness through unawareness because legally protected attributes like
race and gender usually correlate with innocuous proxy variables [
        <xref ref-type="bibr" rid="ref24 ref25">39, 40</xref>
        ]. If protected attributes
are unavailable during model training and evaluation, these subtle correlations cannot be
accounted for, nor can technical fairness metrics be tested and optimized. On the other hand,
Art. 9 GDPR places particularly high demands on the lawful processing of special categories
of personal data. Therefore, the same sensitive data that is protected by data protection law
is also essential to effectively avoid discriminatory outputs. The AI Act seeks to mitigate this
tension by broadening the scope of lawful data processing. Art. 10(5) AI Act states that “[t]o
the extent that it is strictly necessary for the purposes of ensuring bias detection and correction in
relation to the high-risk AI systems [...], the providers of such systems may exceptionally process
special categories of personal data referred to in Art. 9(1) [GDPR].” This is accompanied by
Recital 44c, which adds that “[i]n order to protect the right of others from the discrimination that
might result from the bias in AI systems [...] the providers should, exceptionally, [...] be able to
process also special categories of personal data, as a matter of substantial public interest within
the meaning of Art. 9(2)(g) [GDPR].” Therefore, discrimination and fairness considerations can
provide a justification for data processing during the training phase of high-risk AI systems.
However, balancing the public and private interests regarding non-discrimination and privacy
will inevitably lead to intricate trade-offs.
      </p>
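      <p>The tension described above can be made concrete with a small simulation (our hypothetical example, not drawn from the AI Act or case law): a decision rule that never sees the protected attribute still produces disparate selection rates through a correlated proxy, and without access to the protected attribute this disparity could neither be measured nor corrected.</p>
      <preformat>
# A minimal sketch of why "fairness through unawareness" fails
# (all data hypothetical).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
protected = rng.integers(0, 2, size=n)
# A proxy (e.g., postal code) that correlates with the protected attribute
proxy = (rng.random(n) + 0.3 * protected).round().astype(int)

# "Unaware" decision rule: uses only the proxy and noise,
# never the protected attribute itself
score = 0.4 * proxy + 0.6 * rng.random(n)
decision = np.greater_equal(score, 0.5)

for g in (0, 1):
    print(f"selection rate, protected group {g}: {decision[protected == g].mean():.2f}")
# Prints roughly 0.50 vs. 0.70: the disparity persists, but testing for it
# requires processing the protected attribute (cf. Art. 10(5) AI Act).
      </preformat>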
    </sec>
    <sec id="sec-4">
      <title>4. Practical challenges for compliance</title>
      <p>
        Defining bias: what are “appropriate” fairness metrics? The discussed implications of
the AI Act raise two important questions on how to put non-discrimination and fairness into
practice. First, the concept of technical fairness metrics raises the question of which one(s) may
be “appropriate for the intended purpose of the AI system” (Art. 10(2)f AI Act). Technical
fairness definitions have already been examined for their compatibility with moral norms [
        <xref ref-type="bibr" rid="ref26">41</xref>
        ]
and non-discrimination regimes [
        <xref ref-type="bibr" rid="ref27">17, 18, 19, 42, 21</xref>
        ] alike. However, legal concepts relying on
flexible ex-post standards and human intuition are in tension with the mathematical need for
precision and ex-ante standardization [
        <xref ref-type="bibr" rid="ref27">21, 42</xref>
        ]. Also, the interdisciplinary discourse needs to
acknowledge that fairness and non-discrimination might present inherently different concepts
targeted at different social contexts. Prior works have suggested that a single standard of fairness
can be achieved by “translating” legal non-discrimination requirements from the employment
context into technical fairness metrics [17, 19]. However, the heterogeneity of social contexts
(e.g., employment versus criminal sentencing) demands a corresponding flexibility in fairness
requirements [
        <xref ref-type="bibr" rid="ref28 ref29">43, 44</xref>
        ]. Instead of aiming for a one-size-fits-all solution, we therefore recommend
applying the landscape of available technical fairness metrics to different legal conceptions of
discrimination depending on the societal context.
      </p>
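      <p>As a minimal illustration of this recommendation (our made-up numbers, not a legal benchmark), the sketch below evaluates one hypothetical classifier against several common metrics at once. With unequal base rates, the classifier can satisfy equalized odds while violating statistical parity, underscoring that the “appropriate” metric under Art. 10(2)f is a context-dependent, normative choice.</p>
      <preformat>
# A minimal sketch: the same predictions scored against several
# fairness metrics (hypothetical data).
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
group = rng.integers(0, 2, size=n)
# Hypothetical ground truth with different base rates per group
y_true = (rng.random(n) + 0.1 * group).round().astype(int)
# Hypothetical classifier: equally accurate for both groups
flip = np.less(rng.random(n), 0.15)
y_pred = np.where(flip, 1 - y_true, y_true)

for g in (0, 1):
    m = group == g
    sel = y_pred[m].mean()                               # selection rate
    tpr = y_pred[np.logical_and(m, y_true == 1)].mean()  # true positive rate
    fpr = y_pred[np.logical_and(m, y_true == 0)].mean()  # false positive rate
    print(f"group {g}: selection {sel:.2f}, TPR {tpr:.2f}, FPR {fpr:.2f}")
# TPR and FPR match across groups (equalized odds holds), yet the
# selection rates differ (statistical parity is violated).
      </preformat>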
      <p>
        Detecting and correcting bias: when are biases “likely to lead to discrimination”? The
second challenge is to determine when possible biases are “likely to [...] lead to discrimination”.
Technical fairness metrics such as statistical parity or equalized odds offer an actionable approach
to measure and mitigate “bias” [
        <xref ref-type="bibr" rid="ref15 ref30 ref31">45, 30, 21, 46</xref>
        ]. However, it remains unanswered what kind of
evidence would signal sufficient efforts of bias detection and correction. Setting aside the debate
on metric selection, let us assume algorithmic hiring requires male and female applicants to
receive equal hiring rates (demographic parity). Statistical hypothesis testing provides a suitable
method to verify compliance with this requirement, in this case a simple two-proportion z-test.
To test the hypothesis of compliance with demographic parity, we are interested in the test’s
error rates, i.e., the likelihood of falsely detecting a violation (type 1 error) and of failing to
detect a violation (type 2 error). Notably, a larger disparity in hiring probabilities between groups and a larger
sample size decreases type 2 error. Unfortunately, the z-test is also sensitive to the acceptance
rate, particularly for small sample sizes. For example, for 1,000 male and 1,000 female applicants,
the type 2 error decreases by 0.8 percentage points if only 700 instead of 900 applicants are accepted, despite
identical group disparities (see Appendix A). This effect is especially strong for imbalanced
datasets. For 1,800 male and 200 female applicants, the type 2 error even decreases by 6 percentage points if
only 780 instead of 980 applicants are accepted, again despite identical group disparities (see
Appendix A). Our example highlights the need for guidance in selecting appropriate tests and
specifying standards for the error rates of tests utilized in bias detection.
      </p>
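      <p>A minimal sketch of this calculation (our code, using a standard normal power approximation rather than the exact procedure behind Appendix A, so the figures need not match to the decimal) is given below. It computes the type 2 error of a two-sided two-proportion z-test for the imbalanced example above.</p>
      <preformat>
# A minimal sketch: approximate type 2 error of a two-sided
# two-proportion z-test for demographic parity.
from scipy.stats import norm

def type2_error(n1, n2, p1, p2, alpha=0.05):
    """Type 2 error when the true acceptance rates are p1 (group 1), p2 (group 2)."""
    z_crit = norm.ppf(1 - alpha / 2)
    p_pool = (n1 * p1 + n2 * p2) / (n1 + n2)
    se0 = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5  # SE under H0
    se1 = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5    # SE under H1
    power = norm.cdf((abs(p1 - p2) - z_crit * se0) / se1)
    return 1 - power

# Identical disparity (0.1), different base acceptance rates,
# for 1,800 male and 200 female applicants:
print(type2_error(1800, 200, 0.5, 0.4))  # 980 of 2,000 accepted
print(type2_error(1800, 200, 0.4, 0.3))  # 780 of 2,000 accepted
      </preformat>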
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this short paper, we outlined how the AI Act could promote the convergence of the
legal non-discrimination discourse and the technical algorithmic fairness discourse. While we
sketch its potential implications for fairness requirements of future AI developments, specifying
and enforcing concrete legal requirements will be an intricate future task. In the absence of
legal precedents, both disciplines are in need of pioneering work at the intersection of
non-discrimination law and algorithmic fairness.</p>
    </sec>
    <sec id="sec-6">
      <title>A. Appendix</title>
      <p>The appendix visualizes the effects described in Section 4. Figure 1 refers to the effect of
a larger disparity in hiring probabilities on the probability of not detecting a violation (type 2
error). For example, for a sample size of 2,500, a change in the acceptance rate of group 2 from 0.75 to 0.7
results in a 17 percentage-point decrease (from 33% to 16%) in the type 2 error if group 1 has an acceptance
rate of 0.8. Furthermore, it demonstrates that increasing the sample size for the same disparity
also decreases the probability of a type 2 error: doubling the sample size from 2,500 to 5,000
samples decreases the type 2 error by 27 percentage points (from 33% to 6%). The first effect increases with
increasing sample size, while the second one decreases with increasing sample size. Figure 2
demonstrates the effect of the same disparity (0.1) but different acceptance rates. For 1,800 male
and 200 female applicants, the type 2 error decreases by 6 percentage points if only 780 (720 male, 60
female) instead of 980 (900 male and 80 female) applicants are accepted. This effect is amplified
by imbalanced data sets and small sample sizes.</p>
      <p>Figure 1: Type 2 error if the acceptance rate for group 1 is 0.8 (z-test). The y-axis shows the type 2 error, the x-axis the acceptance rate of group 2 (0.40 to 0.80), with one curve per overall sample size (250, 500, 1,000, 2,500, and 5,000).</p>
      <p>Figure 2: Type 2 error for equal disparity (0.1) but different base values of the acceptance rate (z-test). The x-axis shows the base values of the acceptance rate, from [0.0, 0.1] up to [0.8, 0.9].</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Angwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Larson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mattu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirchner</surname>
          </string-name>
          , Machine Bias:
          <article-title>There's software used across the country to predict future criminals. And it's biased against blacks</article-title>
          ., in: K. Martin (Ed.),
          <article-title>Ethics of data and analytics, An Auerbach Book</article-title>
          , CRC Press Taylor &amp; Francis Group,
          <source>Boca Raton and London and New York</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>254</fpage>
          -
          <lpage>264</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dastin</surname>
          </string-name>
          ,
          <article-title>Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women *</article-title>
          , in: K. Martin (Ed.),
          <article-title>Ethics of data and analytics, An Auerbach Book</article-title>
          , CRC Press Taylor &amp; Francis Group,
          <source>Boca Raton and London and New York</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>296</fpage>
          -
          <lpage>299</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fuster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Goldsmith-Pinkham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ramadorai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Walther</surname>
          </string-name>
          ,
          <article-title>Predictably unequal? The effects of machine learning on credit markets</article-title>
          ,
          <source>The Journal of Finance</source>
          <volume>77</volume>
          (
          <year>2022</year>
          )
          <fpage>5</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Shankar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Breck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Atwood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sculley</surname>
          </string-name>
          ,
          <article-title>No classification without representation: Assessing geodiversity issues in open data sets for the developing</article-title>
          world,
          <year>2017</year>
          . URL: https://arxiv.org/pdf/1711.08536.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Buolamwini</surname>
          </string-name>
          , T. Gebru,
          <article-title>Gender shades: Intersectional accuracy disparities in commercial gender classification</article-title>
          ,
          <source>in: Proceedings of the 1st Conference on Fairness, Accountability and Transparency</source>
          , volume
          <volume>81</volume>
          ,
          <string-name>
            <surname>PMLR</surname>
          </string-name>
          ,
          <year>2018</year>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yatskar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ordonez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <article-title>Men also like shopping: Reducing gender bias amplification using corpus-level constraints</article-title>
          ,
          <source>in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>2979</fpage>
          -
          <lpage>2989</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Burns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Hendricks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Saenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Darrell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rohrbach</surname>
          </string-name>
          , Women also snowboard:
          <source>Overcoming bias in captioning models</source>
          ,
          <year>2019</year>
          . URL: https://arxiv.org/pdf/1803.09797.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Bolukbasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Saligrama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalai</surname>
          </string-name>
          ,
          <article-title>Man is to computer programmer as woman is to homemaker? debiasing word embeddings</article-title>
          ,
          <source>in: Proceedings of the 30th International Conference on Neural Information Processing Systems</source>
          , NIPS'16, Curran Associates Inc., Red Hook, NY, USA,
          <year>2016</year>
          , pp.
          <fpage>4356</fpage>
          -
          <lpage>4364</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Caliskan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Narayanan</surname>
          </string-name>
          ,
          <article-title>Semantics derived automatically from language corpora contain human-like biases</article-title>
          ,
          <source>Science</source>
          <volume>356</volume>
          (
          <year>2017</year>
          )
          <fpage>183</fpage>
          -
          <lpage>186</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Garg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schiebinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>Word embeddings quantify 100 years of gender and ethnic stereotypes</article-title>
          ,
          <source>Proceedings of the National Academy of Sciences</source>
          <volume>115</volume>
          (
          <year>2018</year>
          )
          <fpage>E3635</fpage>
          -
          <lpage>E3644</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B. D.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Allo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Taddeo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <article-title>The ethics of algorithms: Mapping the debate</article-title>
          ,
          <source>Big Data &amp; Society</source>
          <volume>3</volume>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kearns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <source>The Ethical Algorithm - The Science of Socially Aware Algorithm Design</source>
          , Oxford University Press, New York,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Martens</surname>
          </string-name>
          ,
          <source>Data Science Ethics - Concepts</source>
          ,
          <source>Techniques, and Cautionary Tales</source>
          , Oxford University Press, New York,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cowls</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Beltrametti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chatila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chazerand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dignum</surname>
          </string-name>
          , C. Luetge, R. Madelin, U. Pagallo, F. Rossi, B. Schafer, P. Valcke, E. Vayena, AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, Minds and Machines 28 (2018) 689–707.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          , X. Cheng, M. Donini,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gelman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gollaprolu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Larroy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>McCarthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rees</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Siva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tsai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vasist</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. B.</given-names>
            <surname>Zafar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Haas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kenthapadi</surname>
          </string-name>
          , Amazon SageMaker Clarify:
          <article-title>Machine Learning Bias Detection and Explainability in the Cloud</article-title>
          ,
          <source>in: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD</source>
          <year>2021</year>
          ),
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>L.</given-names>
            <surname>Deck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schoefer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>De-Arteaga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kühl</surname>
          </string-name>
          ,
          <article-title>A critical survey on fairness benefits of Explainable AI</article-title>
          ,
          <source>in: ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '24)</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Harman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sarro</surname>
          </string-name>
          ,
          <article-title>Bias mitigation for machine learning classifiers: A comprehensive survey</article-title>
          ,
          <source>ACM Journal on Responsible Computing</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Pagano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Loureiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. V. N.</given-names>
            <surname>Lisboa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Peixoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A. S.</given-names>
            <surname>Guimarães</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. O. R.</given-names>
            <surname>Cruz</surname>
          </string-name>
          ,
          M. M. Araujo, L. L. Santos, M. A. S. Cruz, E. L. S. Oliveira, I. Winkler, E. G. S. Nascimento
          ,
          <article-title>Bias and unfairness in machine learning models: A systematic review on datasets, tools, fairness metrics, and identification and mitigation methods</article-title>
          ,
          <source>Big Data and Cognitive Computing</source>
          <volume>7</volume>
          (
          <year>2023</year>
          )
          <fpage>15</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [34]
          <string-name>
            <surname>I. Žliobaitė</surname>
          </string-name>
          ,
          <article-title>Measuring discrimination in algorithmic decision making</article-title>
          ,
          <source>Data Mining and Knowledge Discovery</source>
          <volume>31</volume>
          (
          <year>2017</year>
          )
          <fpage>1060</fpage>
          -
          <lpage>1089</lpage>
          . URL: https://doi.org/10.1007/s10618-017-0506-1.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>R.</given-names>
            <surname>Binns</surname>
          </string-name>
          ,
          <source>Fairness in Machine Learning: Lessons from Political Philosophy, Conference on Fairness, Accountability and Transparency</source>
          <volume>81</volume>
          (
          <year>2018</year>
          )
          <fpage>149</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Friedler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Scheidegger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Venkatasubramanian</surname>
          </string-name>
          ,
          <article-title>The (Im)possibility of fairness</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>64</volume>
          (
          <year>2021</year>
          )
          <fpage>136</fpage>
          -
          <lpage>143</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>A.</given-names>
            <surname>Narayanan</surname>
          </string-name>
          ,
          <article-title>The limits of the quantitative approach to discrimination</article-title>
          , 2022. URL: https://www.cs.princeton.edu/~arvindn/talks/baldwin-discrimination/baldwin-discrimination-transcript.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <article-title>Principles alone cannot guarantee ethical AI</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>1</volume>
          (
          <year>2019</year>
          )
          <fpage>501</fpage>
          -
          <lpage>507</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kusner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Loftus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Silva</surname>
          </string-name>
          , Counterfactual Fairness,
          <source>in: 31st Conference on Neural Information Processing Systems</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>C.</given-names>
            <surname>Dwork</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pitassi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Reingold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zemel</surname>
          </string-name>
          ,
          <article-title>Fairness through awareness</article-title>
          ,
          <source>in: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference on - ITCS '12</source>
          , ACM Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>D.</given-names>
            <surname>Hellman</surname>
          </string-name>
          , Measuring algorithmic fairness,
          <source>Virginia Law Review</source>
          <volume>106</volume>
          (
          <year>2020</year>
          )
          <fpage>811</fpage>
          -
          <lpage>866</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>L.</given-names>
            <surname>Koutsoviti Koumeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Legast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yousefi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vanhoof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Legay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schommer</surname>
          </string-name>
          ,
          <article-title>Compatibility of fairness metrics with eu non-discrimination laws: Demographic parity &amp; conditional demographic disparity</article-title>
          ,
          <year>2023</year>
          . URL: https://arxiv.org/pdf/2306.08394.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>S.</given-names>
            <surname>Corbett-Davies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Goel</surname>
          </string-name>
          ,
          <source>The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning</source>
          ,
          <year>2018</year>
          . URL: http://arxiv.org/pdf/1808.00023v2.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>R.</given-names>
            <surname>Binns</surname>
          </string-name>
          ,
          <article-title>On the apparent conflict between individual and group fairness</article-title>
          ,
          <source>in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</source>
          , ACM,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Narayanan</surname>
          </string-name>
          ,
          <article-title>Fairness and Machine Learning: Limitations and Opportunities</article-title>
          , fairmlbook.org,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Harman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sarro</surname>
          </string-name>
          ,
          <article-title>Fairness testing: A comprehensive survey and analysis of trends</article-title>
          ,
          <source>ACM Transactions on Software Engineering and Methodology</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref-b15">
        <mixed-citation>[15] European Commission, Directorate-General for Communications Networks, Content and Technology, High-Level Expert Group on Artificial Intelligence, Ethics guidelines for trustworthy AI, 2019. URL: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.</mixed-citation>
      </ref>
      <ref id="ref-b16">
        <mixed-citation>[16] P. Hacker, Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law, Common Market Law Review 55 (2018) 1143–1185.</mixed-citation>
      </ref>
      <ref id="ref-b17">
        <mixed-citation>[17] S. Wachter, B. Mittelstadt, C. Russell, Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI, Computer Law &amp; Security Review 41 (2021) 1–31.</mixed-citation>
      </ref>
      <ref id="ref-b18">
        <mixed-citation>[18] S. Wachter, B. Mittelstadt, C. Russell, Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law, West Virginia Law Review 123 (2021) 735–790.</mixed-citation>
      </ref>
      <ref id="ref-b19">
        <mixed-citation>[19] M. Hauer, J. H. Kevekordes, M. Amir, Legal perspective on possible fairness measures – a legal discussion using the example of hiring decisions, Computer Law &amp; Security Review 42 (2021) 1–20.</mixed-citation>
      </ref>
      <ref id="ref-b20">
        <mixed-citation>[20] M. Zehlike, P. Hacker, E. Wiedemann, Matching code and law: achieving algorithmic fairness with optimal transport, Data Mining and Knowledge Discovery 34 (2020) 163–200.</mixed-citation>
      </ref>
      <ref id="ref-b21">
        <mixed-citation>[21] H. Weerts, R. Xenidis, F. Tarissan, H. P. Olsen, M. Pechenizkiy, Algorithmic unfairness through the lens of EU non-discrimination law: Or why the law is not a decision tree, in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023.</mixed-citation>
      </ref>
      <ref id="ref-b22">
        <mixed-citation>[22] S. Barocas, A. D. Selbst, Big data’s disparate impact, California Law Review 104 (2016) 671–732.</mixed-citation>
      </ref>
      <ref id="ref-b23">
        <mixed-citation>[23] S. Fredman, Discrimination Law, 2nd ed., Oxford University Press, Oxford, 2011.</mixed-citation>
      </ref>
      <ref id="ref-b24">
        <mixed-citation>[24] S. Berghahn, V. Egenberger, M. Klapp, A. Klose, D. Liebscher, L. Supik, A. Tischbirek, Evaluation des Allgemeinen Gleichbehandlungsgesetzes, erstellt im Auftrag der Antidiskriminierungsstelle des Bundes vom Büro für Recht und Wissenschaft GbR mit wissenschaftlicher Begleitung von Prof. Dr. Christiane Brors, Nomos, Baden-Baden, 2016.</mixed-citation>
      </ref>
      <ref id="ref-b25">
        <mixed-citation>[25] I. Spiecker gen. Döhmann, E. Towfigh, Automatisch Benachteiligt. Das Allgemeine Gleichbehandlungsgesetz und der Schutz vor Diskriminierung durch algorithmische Entscheidungssysteme. Rechtsgutachten im Auftrag der Antidiskriminierungsstelle des Bundes, Antidiskriminierungsstelle des Bundes, Berlin, 2023.</mixed-citation>
      </ref>
      <ref id="ref-b26">
        <mixed-citation>[26] J. Burrell, How the machine ‘thinks’: Understanding opacity in machine learning algorithms, Big Data &amp; Society 3 (2016) 1–11.</mixed-citation>
      </ref>
      <ref id="ref-b27">
        <mixed-citation>[27] S. Verma, J. Rubin, Fairness definitions explained, in: Proceedings of the International Workshop on Software Fairness, ACM, 2018, pp. 1–7.</mixed-citation>
      </ref>
      <ref id="ref-b28">
        <mixed-citation>[28] A. Chouldechova, A. Roth, The Frontiers of Fairness in Machine Learning, Communications of the ACM 63 (2018).</mixed-citation>
      </ref>
      <ref id="ref-b29">
        <mixed-citation>[29] D. Pessach, E. Shmueli, A review on fairness in machine learning, ACM Computing Surveys 55 (2022). URL: https://doi.org/10.1145/3494672.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>