<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Algorithmic Unfairness Through the Lens of EU Non-Discrimination Law</article-title>
        <subtitle>Or Why the Law is Not a Decision Tree</subtitle>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hilde Weerts</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Raphaële Xenidis</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabien Tarissan</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henrik Palmer Olsen</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mykola Pechenizkiy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Eindhoven University of Technology</institution>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Sciences Po Law School</institution>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Copenhagen</institution>
          ,
          <country country="DK">Denmark</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>ENS Paris-Saclay</institution>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Concerns regarding unfairness and discrimination in the context of artificial intelligence (AI) systems have recently received increased attention from both legal and computer science scholars. Yet, the degree of overlap between notions of algorithmic bias and fairness on the one hand, and legal notions of discrimination and equality on the other, is often unclear, leading to misunderstandings between computer science and law. In this paper, we aim to illustrate to what extent European Union (EU) non-discrimination law coincides with notions of algorithmic fairness proposed in computer science literature and where they differ. Ultimately, we show that metaphors depicting the law as a decision tree are misleading. We suggest moving away from asking what should be equal, and towards asking why a particular distribution of burdens and benefits is right in a given context.</p>
      </abstract>
      <kwd-group>
        <kwd>discrimination</kwd>
        <kwd>fairness metrics</kwd>
        <kwd>technical interventions</kwd>
        <kwd>EU law</kwd>
        <kwd>legal compliance</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Concerns regarding unfairness and discrimination in the context of artificial intelligence (AI)
systems have recently received increased attention from both legal and computer science
scholars. Yet, the degree of overlap between notions of algorithmic bias and fairness on the one
hand, and legal notions of discrimination and equality on the other, is often unclear, leading to
misunderstandings between computer science and law.</p>
      <p>On the one hand, computer scientists have put forward various metrics and technical
interventions to measure and mitigate unfairness of AI systems. However, an AI practitioner hoping
for an explicit answer to the question: “what should be the value of my fairness metric for
my system to be compliant with the law?” is likely to be disappointed, as most of the time the
answer will amount to a variation of “it depends”.</p>
      <p>On the other hand, challenges of algorithmic unfairness are not always properly understood
by legal scholars. The technical translation of legal standards raises a range of difficult normative
questions that force lawyers to question the content of overarching legal principles such as equal
treatment and non-discrimination. Since courts are called on to interpret the normative content
of those polysemous legal norms contextually and on a case-by-case basis, a straightforward
technical translation of those norms is impossible.</p>
      <p>
        As a result, computer scientists struggle to understand how legal compliance with equality
law can be ensured, and legal experts and regulators struggle with figuring out how
discrimination law can properly address algorithmic bias and unfairness. Additionally, we observe a
tendency on both sides to overestimate the solutions and answers provided by each discipline.
The legal community tends to overestimate the effectiveness and applicability of technical
interventions [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In turn, computer scientists place perhaps too much confidence in the principle
of legal certainty, and the determinacy and specificity of legal norms.
      </p>
      <p>This raises several important questions. What types of bias and unfairness does the law
address when it prohibits discrimination? What role can fairness metrics play in establishing
legal compliance – if any? This paper aims to respond to computer scientists’ uncertainties
about what is legal when it comes to discrimination, and to lawyers’ questions regarding the
challenges and technical possibilities to realise equality rights and non-discrimination law
obligations. To this end, we consider European Union (EU) non-discrimination law and we
show to what extent non-discrimination law coincides with notions of algorithmic fairness
proposed in computer science literature and where they differ. In so doing, we target a broader
audience, bridging two distinct disciplines.</p>
      <p>
        Existing work in this direction has primarily targeted a legal audience [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ]. Most notably,
Wachter et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] set out how the contextual nature of EU non-discrimination law makes it
impossible to automate non-discrimination in the context of AI systems and propose a fairness
metric that aligns with the “gold standard” of the Court of Justice of the European Union (CJEU).
Additionally, several works focus on US anti-discrimination law [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
        ]. For example, Hellman
considers the compatibility of several fairness metrics under US anti-discrimination law and
touches upon the legitimacy of particular types of technical interventions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>The contributions of this paper are as follows. First, we analyse influential examples of
algorithmic unfairness through the lens of EU non-discrimination law, drawing parallels with
EU case law. Second, we set out the normative underpinnings of fairness metrics and technical
interventions and compare these to the legal reasoning of the Court of Justice of the EU.
Specifically, we show how normative assumptions often remain implicit in both disciplinary
approaches and explain the ensuing limitations of current AI practice and non-discrimination
law. We conclude with implications for AI practitioners and regulators.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Algorithmic Unfairness Through the Lens of Non-Discrimination Law</title>
      <p>
        We analyse three influential examples of algorithmic unfairness through the lens of EU
non-discrimination law, namely the Dutch childcare benefits scandal [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ], Amazon’s
recruitment algorithm [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and the Gender Shades study [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ], drawing parallels with EU case law.
The purpose is to establish a taxonomy of algorithmic discriminatory harms, assess when and
how those harms fall within the scope of EU equality law, and determine how they can be
redressed from a legal point of view. Relying on those examples, we show that, although EU
non-discrimination law is in principle suited to deal with types of algorithmic unfairness that
closely resemble human discrimination, it cannot be readily applied to all cases of disparate
predictive performance. Moreover, the unintelligibility of prediction-generating mechanisms
and lack of transparency regarding important design choices of AI systems make it difficult for
applicants to provide prima facie evidence.
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. The Problem of Emptiness</title>
      <p>
        We then set out the normative underpinnings of fairness metrics and technical interventions [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]
and compare these to the legal reasoning of the CJEU. Specifically, we show how normative
assumptions often remain implicit in both disciplinary approaches, and explain the ensuing
limitations of current AI practice and non-discrimination law. To do so, we map the requirements
of non-discrimination law to algorithmic fairness research relying on a case law analysis of
landmark decisions of the Court of Justice of the EU. We reveal the ‘emptiness’ [<xref ref-type="bibr" rid="ref15">15</xref>] of equality
norms and fairness approaches and argue that uncovering – and reflecting upon – the normative
baselines used as equality standards is key to ‘translating’ legal and technical approaches to
fairness and discrimination.
      </p>
    </sec>
    <sec id="sec-5">
      <title>4. Conclusion</title>
      <p>Understanding when particular interventions are appropriate is especially important considering
the difficulties applicants face in providing prima facie evidence in the context of opaque
algorithmic systems. While many fairness metrics have taken inspiration from non-discrimination law,
legal compliance cannot be reduced to a single threshold or fairness metric. In other words, EU
equality law is not a decision tree. Rather, fulfilling the requirements of non-discrimination law
demands reflecting explicitly on the normative goal of legal and technical fairness interventions.
We suggest that, in order to meaningfully answer the question that non-discrimination law
poses, we must move beyond merely asking what should be equal and, instead, ask ourselves
why a particular distribution of burdens and benefits is right in a given context.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This project has received funding from the European Union’s Horizon 2020 research and
innovation programme under the Marie Skłodowska-Curie grant agreement No 898937. We
also thank the organisers of the Lorentz workshop on Fairness in Algorithmic Decision Making:
A Domain-Specific Approach in March 2022 for bringing together a group of researchers with
diverse disciplinary backgrounds as well as the participants for their valuable insights.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Balayn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gürses</surname>
          </string-name>
          ,
          <article-title>Beyond debiasing: Regulating AI and its inequalities</article-title>
          ,
          <source>EDRi Report</source>
          ,
          <year>2021</year>
          . URL: https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Binns</surname>
          </string-name>
          ,
          <article-title>Algorithmic decision-making: A guide for lawyers</article-title>
          ,
          <source>Judicial Review</source>
          <volume>25</volume>
          (
          <year>2020</year>
          )
          <fpage>2</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <article-title>Bias preservation in machine learning: the legality of fairness metrics under EU non-discrimination law</article-title>
          ,
          <source>W. Va. L. Rev.</source>
          <volume>123</volume>
          (
          <year>2020</year>
          )
          <fpage>735</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <article-title>Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI</article-title>
          ,
          <source>Computer Law &amp; Security Review</source>
          <volume>41</volume>
          (
          <year>2021</year>
          )
          <fpage>105567</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Hellman</surname>
          </string-name>
          ,
          <article-title>Measuring algorithmic fairness</article-title>
          ,
          <source>Virginia Law Review</source>
          <volume>106</volume>
          (
          <year>2020</year>
          )
          <fpage>811</fpage>
          -
          <lpage>866</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P. T.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Race-aware algorithms: Fairness, nondiscrimination and affirmative action</article-title>
          ,
          <source>Cal. L. Rev.</source>
          <volume>110</volume>
          (
          <year>2022</year>
          )
          <fpage>1539</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Hines</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Dickerson</surname>
          </string-name>
          ,
          <article-title>Equalizing credit opportunity in algorithms: Aligning algorithmic fairness research with US fair lending regulation</article-title>
          ,
          <source>in: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>357</fpage>
          -
          <lpage>368</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Autoriteit Persoonsgegevens</surname>
          </string-name>
          ,
          <article-title>Besluit tot boeteoplegging</article-title>
          ,
          <year>2021</year>
          . URL: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boetebesluit_belastingdienst.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Autoriteit Persoonsgegevens</surname>
          </string-name>
          ,
          <article-title>Onderzoeksrapport Belastingdienst/Toeslagen - De verwerking van de nationaliteit van aanvragers van kinderopvangtoeslag</article-title>
          ,
          <year>2021</year>
          . URL: https://www.autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/onderzoek_belastingdienst_kinderopvangtoeslag.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>NOS Nieuws</surname>
          </string-name>
          ,
          <article-title>Ouders zwartgelakte dossiers: 'Ik weet nog steeds niet wat ik fout heb gedaan'</article-title>
          ,
          <source>NOS Nieuws</source>
          (
          <year>2019</year>
          ). URL: https://nos.nl/artikel/2314288-ouders-zwartgelakte-dossiers-ik-weet-nog-steeds-niet-wat-ik-fout-heb-gedaan.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dastin</surname>
          </string-name>
          ,
          <article-title>Amazon scraps secret ai recruiting tool that showed bias against women</article-title>
          ,
          <source>in: Ethics of Data and Analytics</source>
          , Auerbach Publications,
          <year>2018</year>
          , pp.
          <fpage>296</fpage>
          -
          <lpage>299</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Buolamwini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gebru</surname>
          </string-name>
          ,
          <article-title>Gender shades: Intersectional accuracy disparities in commercial gender classification</article-title>
          ,
          <source>in: Conference on Fairness, Accountability and Transparency, PMLR</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>College voor de Rechten van de Mens</surname>
          </string-name>
          ,
          <article-title>Tussenoordeel. De Stichting Vrije Universiteit krijgt de gelegenheid om te bewijzen dat de door haar ingezette antispieksoftware een studente met een donkere huidskleur niet heeft gediscrimineerd</article-title>
          ,
          <year>2022</year>
          . URL: https://oordelen.mensenrechten.nl/oordeel/2022-146, Oordeelnummer 2022-146.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H.</given-names>
            <surname>Weerts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Royakkers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pechenizkiy</surname>
          </string-name>
          ,
          <article-title>Does the end justify the means? On the moral justification of fairness-aware machine learning</article-title>
          ,
          <source>arXiv preprint arXiv:2202.08536</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Westen</surname>
          </string-name>
          ,
          <article-title>The empty idea of equality</article-title>
          ,
          <source>Harvard Law Review</source>
          (
          <year>1982</year>
          )
          <fpage>537</fpage>
          -
          <lpage>596</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>