<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>European Workshop on Algorithmic Fairness</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Fairness in Machine Learning as 'Algorithmic Positive Action'</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jan-Laurin Müller</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bayreuth</institution>
          ,
          <addr-line>Universitätsstraße 30, Bayreuth, 95447</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>0</volume>
      <fpage>7</fpage>
      <lpage>9</lpage>
      <abstract>
        <p>In recent years, the interdisciplinary discourse on 'fairness in machine learning' has produced a vast number of technical approaches to guarantee non-discrimination in algorithmic decision-making “by design”. These fairness criteria and definitions have already been examined for their compatibility with moral and legal norms alike. However, non-discrimination law may not only require computer scientists to de-bias their algorithms. It may also limit the implementation of such fairness-ensuring techniques: both EU Member States and private actors using de-biasing and fairness-enhancing methods could be liable for violating the principle of equal treatment by undertaking 'algorithmic positive action'. The article considers the legality of 'fairness in machine learning' from the perspective of EU positive action doctrine as well as U.S. affirmative action jurisprudence. It thereby makes three contributions to the debate about how best to enable algorithmic fairness: First, it demonstrates that private actors introducing technical fairness considerations to their models may be liable for executing “algorithmic positive action” under EU law. To do so, the paper puts the jurisprudence of the Court of Justice of the European Union in the context of approaches from the computer science literature. The paper's second contribution is to help bridge the gap between computer science and law in the field of fair machine learning. It thereby offers normative guidance for computer science scholars and practitioners trying to implement fairness considerations into their models. Finally, the paper compares EU “positive action” doctrine with U.S. “affirmative action” jurisprudence regarding algorithmic decision making. It demonstrates both similarities and differences across the Atlantic and shows that legal requirements for technological 'fairness by design' approaches are highly contextual and dependent on socio-cultural settings. Building on these insights, the paper argues for a genuine European perspective on 'fairness in machine learning'.</p>
      </abstract>
      <kwd-group>
        <kwd>Algorithmic Discrimination</kwd>
        <kwd>EU Non-Discrimination Law</kwd>
        <kwd>Affirmative Action</kwd>
        <kwd>Positive Action</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The increasing use of algorithmic decision-making systems based on machine learning
(ADM-systems) has triggered a broad and interdisciplinary debate on the ‘ethics of algorithms’ (Mittelstadt et
al. 2016; Tsamados et al. 2021; Martens 2022). At the core of this debate lies the call for fairness,
accountability and transparency in machine learning, with fairness comprising the values of privacy
protection and non-discrimination. The potential of ADM-systems to discriminate against legally
protected groups has been demonstrated across a wide range of social contexts, ranging from criminal
risk assessment
        <xref ref-type="bibr" rid="ref1">(Angwin et al. 2016)</xref>
        , employment
        <xref ref-type="bibr" rid="ref13">(Dastin 2018)</xref>
        , voice and image recognition
        <xref ref-type="bibr" rid="ref10">(Shankar
et al. 2017; Buolamwini/Gebru 2018)</xref>
        to text processing and translation
        <xref ref-type="bibr" rid="ref11 ref15 ref6 ref9">(Bolukbasi et al. 2016; Caliskan
et al. 2017; Garg et al. 2018)</xref>
        . Summarizing the debate, the American AI Now Institute famously stated:
“The question is no longer whether there are harms and biases in AI systems. That debate has been
settled: the evidence has mounted beyond doubt […]. The next task now is addressing these harms.”
        <xref ref-type="bibr" rid="ref16 ref6">(Whittaker et al. 2018, p. 42)</xref>
        . Legal scholars on both sides of the Atlantic have demonstrated that legal
non-discrimination regimes may fall short in doing so
        <xref ref-type="bibr" rid="ref16 ref3 ref5">(see Barocas/Selbst 2016 for the U.S. and Hacker
2018 for the EU)</xref>
        . Computer scientists are trying to fill this gap. By developing theoretical fairness
definitions and bias metrics
        <xref ref-type="bibr" rid="ref12 ref14">(cf. Hutchinson/Mitchell 2018; Verma/Rubin 2018; Dunkelau/Leuschel
2019; Chouldechova/Roth 2020; Pessach/Shmueli 2020)</xref>
        as well as practical bias-detection and
bias-diminishing techniques
        <xref ref-type="bibr" rid="ref6">(Bellamy et al. 2018; Saleiro et al. 2019)</xref>
        , they try to ensure that ADM-systems
match ethical and legal fairness considerations “by design”. These fairness criteria and definitions have
already been examined for their compatibility with moral norms (Hellman 2020) and applicable
non-discrimination regimes (Wachter et al. 2021a; Wachter et al. 2021b; Hauer et al. 2021).
      </p>
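      <p>
        To give a concrete flavor of such metrics, the following minimal Python sketch computes two
common group-fairness statistics for a binary classifier's decisions. The data, names and thresholds
are invented for illustration and are not the API of the toolkits cited above:
      </p>
      <preformat>
# Minimal group-fairness audit sketch; the decisions and groups below are
# hypothetical. Toolkits such as AIF360 or Aequitas provide far richer,
# validated implementations of these and many other metrics.

def positive_rate(decisions, groups, value):
    """Share of positive decisions among members of one protected group."""
    members = [d for d, g in zip(decisions, groups) if g == value]
    return sum(members) / len(members)

# Hypothetical hiring decisions (1 = shortlisted) and protected attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

p_a = positive_rate(decisions, groups, "a")  # 0.6
p_b = positive_rate(decisions, groups, "b")  # 0.4

# Statistical parity difference: 0.0 would mean equal positive rates.
spd = p_b - p_a                              # -0.2
# Disparate impact ratio: values below 0.8 are often flagged ("80% rule").
di = p_b / p_a                               # about 0.67
print(f"parity difference = {spd:.2f}, disparate impact = {di:.2f}")
      </preformat>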
      <p>
        However, non-discrimination law may not only require computer scientists to de-bias their
algorithms. It may also limit the implementation of such fairness-ensuring techniques: public and private
actors using de-biasing and fairness-enhancing methods could be liable for violating the principle of
equal treatment, both under EU positive action doctrine and under U.S. affirmative action
jurisprudence. Affirmative and positive action alike describe a variety of measures that go beyond
simply refraining from discrimination and instead serve to promote substantive equality for groups that
have suffered particular disadvantages in the past. Such measures range from mere outreach schemes (like
selective advertising for protected groups) all the way up to reverse discrimination, such as hiring quotas
(for a taxonomy see McCrudden 2011). According to EU anti-discrimination law, Member States or
private actors may implement such approaches to achieve ‘full equality of opportunity’ even though
they conflict with a purely formal conception of equal treatment. The article argues that this conflict
may arise in algorithmic decision making as well: two individuals who differ in a protected
attribute may receive different scores and therefore be treated unequally, even though the unadjusted
model would have treated them equally. This could be considered
“algorithmic reverse discrimination” and violate the principle of equal treatment. Unlike in the
U.S. context, where the issue is discussed as “algorithmic affirmative action”
        <xref ref-type="bibr" rid="ref7">(Bent 2020; Ho/Xiang
2020; Kim 2022)</xref>
        , in Europe the possibility of fairness in machine learning (FML) leading to
“algorithmic positive action” has not yet been considered. The article therefore aims to answer two
questions: (1.) Does EU non-discrimination law limit the use of technical de-biasing methods? (2.) If
so, to what extent?
      </p>
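      <p>
        Before turning to these questions, the conflict can be made concrete with a deliberately
simplified numerical sketch; the scores and the fixed "uplift" are hypothetical assumptions, not a
description of any particular fairness method:
      </p>
      <preformat>
# Hypothetical illustration of "algorithmic reverse discrimination": a
# group-level correction drives apart the scores of two candidates who are
# identical except for the protected attribute. All values are invented.

def base_score(qualification):
    """Unadjusted model: depends on qualification only."""
    return qualification

def adjusted_score(qualification, protected, uplift=5):
    """Adjusted model: a crude stand-in for a group-fairness intervention
    that adds a fixed uplift for members of the protected group."""
    return qualification + (uplift if protected else 0)

alice = {"qualification": 70, "protected": True}
bob = {"qualification": 70, "protected": False}

# The unadjusted model treats both candidates equally (70 vs. 70) ...
assert base_score(alice["qualification"]) == base_score(bob["qualification"])

# ... the adjusted model does not (75 vs. 70), although the two differ only
# in the protected attribute: the gap a court could read as reverse
# discrimination under a purely formal conception of equal treatment.
print(adjusted_score(**alice), adjusted_score(**bob))
      </preformat>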
    </sec>
    <sec id="sec-2">
      <title>2. The Legality of ‘Algorithmic Positive Action’</title>
      <p>By answering these questions, the article makes three contributions to the debate about how best to
enable algorithmic fairness.</p>
      <sec id="sec-2-1">
        <title>2.1. ‘Algorithmic Positive Action’ Under EU Law</title>
      <p>
        First, it demonstrates that private actors introducing technical fairness considerations to their models
may be liable for executing “algorithmic positive action” under EU law. To do so, the paper analyzes
the jurisprudence of the Court of Justice of the European Union (CJEU) and places it in the context of
approaches from the computer science literature that systematize technical fairness metrics. The paper
particularly refers to the distinction between group- and individual-fairness
        <xref ref-type="bibr" rid="ref4 ref8">(cf. Verma/Rubin 2018;
Barocas/Hardt/Narayanan 2019; Pessach/Shmueli 2020; critically Binns 2020)</xref>
        . It finds that
‘group-fairness metrics’ in particular may violate the non-discrimination principle. Unlike ‘individual-fairness
metrics’, they aim to equalize the distribution of certain statistical variables between groups and do not
focus solely on individual persons. Thus, they closely resemble quota systems, which have been highly
controversial in the CJEU’s jurisprudence. In Badeck (C-158/97 – Badeck), the Court famously held
that “a measure which is intended to give priority in promotion to women in sectors of the public service
where they are under-represented must be regarded as compatible with Community law if […] [1.]
women and men are equally qualified, and [2.] the candidatures are the subject of an objective
assessment which takes account of the specific personal situations of all candidates.” Two aspects are
decisive: First, the corrective measures must consider some notion of qualification (in employment),
creditworthiness (in banking) or risk (in insurance). This is challenging for certain fairness metrics,
above all for (pure) ‘statistical parity’. Second, an objective case-by-case examination needs to ensure
that the disadvantaged person’s interests are sufficiently taken into account. The CJEU accordingly
held strict quotas, which pursue a notion of equality of results, to violate the non-discrimination
principle (C-450/93 – Kalanke), while deeming flexible quotas, which pursue equality of chances, lawful
(C-409/95 – Marschall). Users of ADM-systems could therefore be required to couple algorithmic
fairness measures with a final evaluation leaving the ultimate decision of whether or not to adopt the
corrective measure to a human
        <xref ref-type="bibr" rid="ref16">(Hacker 2018, p. 1181)</xref>
        . This would be a great challenge for the project
of ensuring ‘fairness by design’. However, in Badeck the CJEU made an exception to its strict approach
and allowed rigid quotas for traineeship positions and job interviews (C-158/97 – Badeck): Employers
were permitted to allocate a fixed number of traineeship positions to women and invite a fixed number
of them to job interviews. The paper shows that these exceptions fit seamlessly into the Court’s
context-based notion of substantive equal opportunity because they were considered mere preconditions for
access to the labor market. By doing so, the article offers normative guidance for computer science
scholars and practitioners trying to implement fairness considerations into their models. Its second
contribution is therefore to help bridge the gap between computer science and law in the field of fair
machine learning.
      </p>
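      <p>
        To make this contrast tangible, the following sketch juxtaposes a rigid outcome quota with a
flexible, human-reviewed tie-break preference. Candidates, scores and helper names are hypothetical;
the sketch illustrates the direction of the case law, not its detailed requirements:
      </p>
      <preformat>
# Hypothetical sketch contrasting the quota designs discussed in the CJEU
# case law: a rigid outcome quota (problematic after Kalanke) versus a
# flexible, Marschall-style preference that only breaks ties between equally
# qualified candidates and leaves the final call to a human reviewer.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float      # the model's qualification estimate
    protected: bool   # member of the under-represented group

def rigid_quota(cands, slots, reserved):
    """Reserve a fixed number of slots for the protected group, regardless
    of relative qualification (equality of results)."""
    prot = sorted((c for c in cands if c.protected), key=lambda c: -c.score)
    rest = sorted((c for c in cands if not c.protected), key=lambda c: -c.score)
    return (prot[:reserved] + rest)[:slots]

def flexible_preference(cands, slots, human_review):
    """Prefer protected candidates only as a tie-break among equally
    qualified candidates (equality of chances), with the ultimate decision
    left to an objective individual assessment."""
    ranked = sorted(cands, key=lambda c: (-c.score, not c.protected))
    return [c for c in ranked[:slots] if human_review(c)]

cands = [Candidate("A", 90, False), Candidate("B", 90, True),
         Candidate("C", 80, False), Candidate("D", 70, True)]

print([c.name for c in rigid_quota(cands, slots=2, reserved=2)])        # ['B', 'D']
print([c.name for c in flexible_preference(cands, 2, lambda c: True)])  # ['B', 'A']
      </preformat>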
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2.2. Comparison with U.S. Affirmative Action Doctrine</title>
      <p>
        Finally, the paper compares EU “positive action” doctrine with U.S. “affirmative action”
jurisprudence regarding algorithmic decision making. In Ricci v. DeStefano (557 U.S. 557 – Ricci v.
DeStefano) the Supreme Court (SC) held that Title VII does not prohibit an employer from considering
racial disparities “before administering a test or practice” (cf. 539 U.S. 244 – Gratz v. Bollinger; 539
U.S. 306 – Grutter v. Bollinger). But once the test or practice has been established, the equal treatment
principle sharply disapproves of any altering of the result on grounds of race
        <xref ref-type="bibr" rid="ref3">(see Primus 2010, p.
13691374; Siegel 2015, p. 682-683; Bagenstos 2016, p. 1151)</xref>
        . These requirements resemble the technical
distinction between pre-processing, in-processing and post-processing methods
        <xref ref-type="bibr" rid="ref14">(cf.
Dunkelau/Leuschel 2019, Martens 2022)</xref>
        . This reading of the case law enables the article to demonstrate
that under U.S. non-discrimination law, post-processing methods will likely be subject to stricter
scrutiny than pre-processing and in-processing approaches. However, even pre- and in-processing
methods will largely be considered illegal. This is because in the U.S., decision-making processes (like
college admissions) are increasingly required not to consider protected attributes at all. This
development will probably continue with the Supreme Court’s decision in Students for Fair Admissions
v. President and Fellows of Harvard College and Students for Fair Admissions v. University of North
Carolina, which is expected to overrule existing precedents (for the arguments in the oral hearing see
Liptak 2022).
      </p>
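      <p>
        The following sketch illustrates where each class of intervention sits in a decision pipeline,
and thus why post-processing maps onto the result-altering step that Ricci treats most strictly. All
data and stand-in implementations are hypothetical:
      </p>
      <preformat>
# Hypothetical sketch of where a fairness intervention can sit in an ML
# pipeline. Under the reading of Ricci v. DeStefano given above, altering
# already-established results (post-processing) is the legally most exposed
# stage; interventions before a "test or practice" exists are treated less
# strictly.

def preprocess(rows):
    """Pre-processing: change the training data before any 'test' exists,
    here by naively duplicating examples from the protected group."""
    return rows + [r for r in rows if r["protected"]]

def train(rows, fairness_weight=0.0):
    """In-processing stand-in: pick a decision threshold whose objective
    trades accuracy against a group-gap penalty."""
    def loss(th):
        errors = sum((r["score"] &gt;= th) != r["label"] for r in rows)
        gap = abs(sum(r["score"] &gt;= th for r in rows if r["protected"])
                  - sum(r["score"] &gt;= th for r in rows if not r["protected"]))
        return errors + fairness_weight * gap
    return min(range(101), key=loss)

def postprocess(scores, protected, uplift=5):
    """Post-processing: adjust already-established scores per group, the
    kind of after-the-fact alteration Ricci scrutinizes."""
    return [s + (uplift if p else 0) for s, p in zip(scores, protected)]

rows = [{"score": 80, "label": True, "protected": False},
        {"score": 60, "label": True, "protected": True},
        {"score": 40, "label": False, "protected": True}]
print(train(preprocess(rows), fairness_weight=1.0))   # learned threshold
print(postprocess([80, 60, 40], [False, True, True])) # [80, 65, 45]
      </preformat>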
      <p>
        Despite similar historical starting points, EU positive action doctrine and U.S. affirmative action
jurisprudence have taken opposite routes. While EU non-discrimination law allows for a contextual and
substantive reading of equality (of opportunity), recent American jurisprudence
        <xref ref-type="bibr" rid="ref2 ref3">(cf. Areheart 2012;
Bagenstos 2016)</xref>
        has opted for a rather formal notion of equality. Considering protected attributes may be
a legitimate strategy in breaking down historical structures of oppression against protected groups under
EU but not under U.S. non-discrimination law. The article therefore demonstrates both similarities and
differences across the Atlantic and thereby shows that legal requirements for technological
“fairness by design” approaches are highly contextual and dependent on socio-cultural settings.
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. The Necessity of a European Perspective</title>
      <p>Building on these insights, the paper argues for a genuine European perspective on ‘fairness in
machine learning’. It provides a normative point of view on the workshop’s research agenda: Instead
of asking “what, if anything, is specifically European in the debate about fairness in machine learning?”
it tackles the question “why should there be a specifically European debate about fairness in machine
learning?”.</p>
    </sec>
    <sec id="sec-5">
      <title>4. Acknowledgements</title>
      <p>This Word template was created by Aleksandr Ometov, TAU, Finland. The template is made
available under a Creative Commons License Attribution-ShareAlike 4.0 International (CC BY-SA
4.0).</p>
    </sec>
    <sec id="sec-6">
      <title>4. References</title>
      <p>[17] M. Hauer, J. Kevekordes, M. A. Haeri, Legal perspective on possible fairness measures – A legal
discussion using the example of hiring decisions, Computer Law &amp; Security Review 42 (2021).
doi:10.1016/j.clsr.2021.105583.
[18] D. Hellman, Measuring Algorithmic Fairness, Virginia Law Review 106 (2020) 811-866. URL:
https://www.virginialawreview.org/wp-content/uploads/2020/06/Hellman_Book.pdf.
[19] D. Ho, A. Xiang, Affirmative Algorithms: The Legal Grounds for Fairness as Awareness, The
University of Chicago Law Review Online, 2020. URL:
https://lawreviewblog.uchicago.edu/2020/10/30/aa-ho-xiang/.
[20] B. Hutchinson, M. Mitchell, 50 Years of Test (Un)fairness: Lessons for Machine Learning, in:
Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* ’19, ACM Press,
New York, NY, 2019, pp. 49-58. doi:10.1145/3287560.3287600.
[21] A. Liptak, Supreme Court Seems Ready to Throw Out Race-Based College Admissions, 2022. URL:
https://www.nytimes.com/2022/10/31/us/supreme-court-harvard-unc-affirmative-action.html.
[22] D. Martens, Data Science Ethics: Concepts, Techniques, and Cautionary Tales, 1st. ed., Oxford
University Press, New York, NY, 2022.
[23] C. McCrudden, A Comparative Taxonomy of ‘Positive Action’ and ‘Affirmative Action’ Policies,
in: R. Schulze, Non-Discrimination in European Private Law, 1st. ed., Tübingen, Germany, 2011,
pp. 157-180.
[24] B. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, L. Floridi, The ethics of algorithms: Mapping the
debate, Big Data &amp; Society 3 (2016). doi:10.1177/2053951716679679.
[25] P. Kim, Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action, California</p>
      <p>Law Review 110 (2022) 1539-1596. doi: 10.15779/Z387P8TF1W.
[26] D. Pessach, E. Shmueli, A Review on Fairness in Machine Learning, ACM Computing Surveys 55
(2022). doi:10.1145/3494672.
[27] R. Primus, The Future of Disparate Impact, Michigan Law Review 108 (2010) 1341-1387. URL:
https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1517&amp;context=articles.
[28] P. Saleiro, B. Kuester, L. Hinkson, J. London, A. Stevens, A. Anisfeld, K. Rodolfa, R. Ghani,
Aequitas: A Bias and Fairness Audit Toolkit, arXiv:1811.05577 (2019).
doi:10.48550/arXiv.1811.05577.
[29] S. Shankar, Y. Halpern, E. Breck, J. Atwood, J. Wilson, D. Sculley, No Classification without
Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World,
arXiv:1711.08536 (2017). doi:10.48550/arXiv.1711.08536.
[30] R. Siegel, Race-Conscious but Race-Neutral: The Constitutionality of Disparate Impact in the
Roberts Court, Alabama Law Review 66 (2015) 653-689. URL:
https://openyls.law.yale.edu/bitstream/handle/20.500.13051/4540/66AlaLRev653.pdf?sequence=
2&amp;isAllowed=y.
[31] A. Tsamados, N. Aggarwal, J. Cowls, J. Morley, H. Roberts, M. Taddeo, L. Floridi, The ethics of
algorithms: key problems and solutions, AI &amp; SOCIETY (2022) 215-230.
doi:10.1007/s00146-021-01154-8.
[32] S. Wachter, B. Mittelstadt, C. Russell, Why fairness cannot be automated: Bridging the gap between
EU non-discrimination law and AI, Computer Law &amp; Security Review 41 (2021).
doi:10.1016/j.clsr.2021.105567.
[33] S. Verma, J. Rubin, Fairness Definitions Explained, in: IEEE/ACM International Workshop on
Software Fairness, FairWare ’18, ACM Press, New York, NY, 2018.
doi:10.1145/3194770.3194776.
[34] S. Wachter, B. Mittelstadt, C. Russell, Bias Preservation in Machine Learning: The Legality of
Fairness Metrics Under EU Non-Discrimination Law, West Virginia Law Review 123 (2021)
735-790. URL: https://researchrepository.wvu.edu/cgi/viewcontent.cgi?article=6331&amp;context=wvlr.
[35] M. Whittaker, K. Crawford, R. Dobbe, G. Fried, E. Kaziunas, V. Mathur, S. Myers West, R.
Richardson, J. Schultz, O. Schwartz, AI Now Report 2018, New York, NY, 2018. URL:
https://ainowinstitute.org/AI_Now_2018_Report.pdf.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Angwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Larson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mattu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirchner</surname>
          </string-name>
          ,
          <article-title>Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks</article-title>
          ,
          <year>2016</year>
          . URL: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Areheart</surname>
          </string-name>
          , The Anticlassification Turn in Employment Discrimination Law,
          <source>Alabama Law Review</source>
          <volume>63</volume>
          (
          <year>2012</year>
          )
          <fpage>955</fpage>
          -
          <lpage>1006</lpage>
          . URL: https://ir.law.utk.edu/cgi/viewcontent.cgi?article=1223&amp;context=utklaw_facpubs.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bagenstos</surname>
          </string-name>
          ,
          <article-title>Disparate Impact and the Role of Classification and Motivation in Equal Protection Law After Inclusive Communities</article-title>
          ,
          <source>Cornell Law Review</source>
          <volume>101</volume>
          (
          <year>2016</year>
          )
          <fpage>1115</fpage>
          -
          <lpage>1169</lpage>
          . URL: https://repository.law.umich.edu/cgi/viewcontent.cgi?article=2822&amp;context=articles.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Narayanan</surname>
          </string-name>
          ,
          <source>Fairness and Machine Learning - Limitations and Opportunities</source>
          ,
          <year>2019</year>
          . URL: https://fairmlbook.org.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Selbst</surname>
          </string-name>
          ,
          <article-title>Big Data's Disparate Impact</article-title>
          ,
          <source>California Law Review</source>
          <volume>104</volume>
          (
          <year>2016</year>
          )
          <fpage>671</fpage>
          -
          <lpage>732</lpage>
          . doi:10.15779/Z38BG31.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bellamy</surname>
          </string-name>
          et al.,
          <article-title>AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias</article-title>
          , arXiv:1810.01943 (
          <year>2018</year>
          ). doi:10.48550/arXiv.1810.01943.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bent</surname>
          </string-name>
          , Is Algorithmic Affirmative Action Legal?,
          <source>The Georgetown Law Journal</source>
          <volume>108</volume>
          (
          <year>2020</year>
          )
          <fpage>803</fpage>
          -
          <lpage>853</lpage>
          . URL: https://www.law.georgetown.edu/georgetown-law-journal/wp-content/uploads/sites/26/2020/04/Is-Algorithmic-Affirmative-Action-Legal.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Binns</surname>
          </string-name>
          ,
          <article-title>On the apparent conflict between individual and group fairness</article-title>
          ,
          <source>in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</source>
          , FAT* '20, ACM Press, New York, NY,
          <year>2020</year>
          . doi:10.1145/3351095.3372864.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Bolukbasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Saligrama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalai</surname>
          </string-name>
          ,
          <article-title>Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings</article-title>
          ,
          <source>in: Proceedings of the 30th International Conference on Neural Information Processing Systems</source>
          , NIPS'16, ACM Press, New York, NY,
          <year>2016</year>
          . doi:10.5555/3157382.3157584.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Buolamwini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gebru</surname>
          </string-name>
          ,
          <article-title>Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification</article-title>
          ,
          <source>in: Proceedings of the 1st Conference on Fairness, Accountability and Transparency</source>
          , FAT*'18, ACM Press, New York, NY,
          <year>2018</year>
          . URL: http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Caliskan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Narayanan</surname>
          </string-name>
          ,
          <article-title>Semantics derived automatically from language corpora contain human-like biases</article-title>
          ,
          <source>Science</source>
          <volume>356</volume>
          (
          <year>2017</year>
          )
          <fpage>183</fpage>
          -
          <lpage>186</lpage>
          . doi:
          <volume>10</volume>
          .1126/science.aal4230.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Chouldechova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <article-title>A Snapshot of the Frontiers of Fairness in Machine Learning</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>63</volume>
          (
          <year>2020</year>
          )
          <fpage>82</fpage>
          -
          <lpage>89</lpage>
          . doi:
          <volume>10</volume>
          .1145/3376898.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dastin</surname>
          </string-name>
          ,
          <article-title>Amazon scraps secret AI recruiting tool that showed bias against women</article-title>
          ,
          <year>2018</year>
          . URL: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dunkelau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Leuschel</surname>
          </string-name>
          ,
          <source>Fairness-Aware Machine Learning. An Extensive Overview</source>
          ,
          <year>2019</year>
          . URL: https://www.phil-fak.uni-duesseldorf.de/fileadmin/Redaktion/Institute/Sozialwissenschaften/Kommunikations_und_Medienwissenschaft/KMW_I/Working_Paper/Dunkelau___Leuschel__2019__FairnessAware_Machine_Learning.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>N.</given-names>
            <surname>Garg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schiebinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>Word embeddings quantify 100 years of gender and ethnic stereotypes</article-title>
          ,
          <source>Proceedings of the National Academy of Sciences of the United States of America</source>
          <volume>115</volume>
          (
          <year>2018</year>
          )
          <fpage>E3635</fpage>
          -
          <lpage>E3644</lpage>
          . doi:10.1073/pnas.1720347115.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Hacker</surname>
          </string-name>
          ,
          <article-title>Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law</article-title>
          ,
          <source>Common Market Law Review</source>
          <volume>55</volume>
          (
          <year>2018</year>
          )
          <fpage>1143</fpage>
          -
          <lpage>1185</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>