<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>European Workshop on Algorithmic Fairness, July</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Beyond Distributions: A Systematic Review on Relational Algorithmic Justice</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Laila Wegner</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marie Christin Decker</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carmen Leicht-Scholten</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>RWTH Aachen University</institution>
          ,
          <addr-line>Templergraben 55, 52062 Aachen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>0</volume>
      <fpage>1</fpage>
      <lpage>03</lpage>
      <abstract>
        <p>Examples of algorithmically reinforced inequalities have motivated a growing research area on algorithmic fairness. Traditional fairness metrics mainly focus on distributive questions of fairness, such as the distribution of positive outcomes within a protected group. However, philosophy offers not only distributive but also relational accounts of justice, which focus on power hierarchies and structural inequalities. These topics are also the subject of the currently emerging third wave of algorithmic fairness, which stresses that algorithms have to be seen as socio-technical systems. We aim to analyze these latest research developments in more detail and investigate what a relational perspective on justice adds to the (so far mainly distributive) research on algorithmic fairness. Using a systematic literature review, we focus on a novel perspective of relational algorithmic justice and highlight underexplored topics as well as critical and constructive approaches within the third wave of algorithmic fairness.</p>
      </abstract>
      <kwd-group>
        <kwd>Algorithmic Fairness</kwd>
        <kwd>Relational Justice</kwd>
        <kwd>Systematic Literature Review</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Algorithmic Decision-Making</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Examples of harmful algorithmic biases in high-stakes decisions have led to widespread efforts to reduce the
potentially negative impacts of algorithmic decision-making (ADM). Within an interdisciplinary
research area, several formal algorithmic fairness metrics have been developed, mainly based on a
distributive understanding of justice [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, in the philosophical discussion of justice, two
families of justice stand out [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]: Distributive approaches, focusing on different currencies of equality
(e.g., income, wealth, resources) and how they ought to be distributed, are opposed by relational
accounts, which conceptualize equality based on the quality of social relations among citizens and
the treatment of citizens by social institutions [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], focusing on power asymmetries, social
relations, and structural injustices.
      </p>
      <p>Designing fair algorithmic decisions poses complex distributive questions, especially
against the background of biased data and algorithms. Comparing statistics such as error rates, true
positive rates, or false positive rates between groups defined by so-called protected attributes
(e.g., gender, race) has led to extensive research on different fairness metrics [e.g., 11] and
challenges such as the ‘impossibility theorem’. While addressing the distributive challenges of
algorithmic biases is as urgent as it is difficult, it may not directly lead to a holistic perspective of
algorithmic justice. Within a merely distributive framework, questions such as ‘How does an ADM
affect the interaction between the decision subject and decision maker?’, ‘Who benefits and who is
harmed by the use of ADM in a specific context?’, ‘How does the power between decision subjects
and an institution shift once an ADM is involved?’, and ‘How are those who use ADM (e.g., recruiters)
affected in their daily work?’ remain almost unnoticed. Such questions substantiate the
demand to expand algorithmic fairness research with a relational focus.</p>
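      <p>To make the distributive framing above concrete, the group-wise comparison of such statistics can be sketched in a few lines. The following is a minimal illustration with invented toy data; the function and figures are not drawn from the reviewed literature:</p>

```python
# Minimal sketch (invented toy data): the per-group statistics that underlie
# common distributive fairness metrics such as demographic parity and
# equalized odds.
from typing import List, Dict

def rates(y_true: List[int], y_pred: List[int]) -> Dict[str, float]:
    """Selection rate and false positive rate for one group."""
    selected = sum(y_pred)
    false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return {
        "selection_rate": selected / len(y_true),
        "false_positive_rate": false_positives / negatives if negatives else 0.0,
    }

# Two groups defined by a protected attribute (e.g., gender); labels invented.
group_a = rates(y_true=[1, 0, 1, 0, 0, 1], y_pred=[1, 0, 1, 1, 0, 1])
group_b = rates(y_true=[1, 0, 0, 0, 1, 1], y_pred=[0, 0, 1, 0, 0, 1])

# Demographic parity compares selection rates between groups; equalized odds
# additionally compares error rates such as the false positive rate.
parity_gap = abs(group_a["selection_rate"] - group_b["selection_rate"])
fpr_gap = abs(group_a["false_positive_rate"] - group_b["false_positive_rate"])
```

As the toy numbers show, a model can satisfy one criterion (equal false positive rates) while violating another (equal selection rates), which is the tension the ‘impossibility theorem’ generalizes.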
      <p>
        Although the distributive perspective predominates in the discourse on algorithmic justice, the
currently emerging third wave [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] draws attention to the drawbacks of this approach. In general,
scholars of the third wave of AI ethics emphasize that algorithms have to be seen as socio-technical
systems that necessitate the discussion of power dynamics and social structures when talking about
algorithmic justice [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. The three waves of AI ethics were first named by Carly Kind
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], director of the Ada Lovelace Institute, in an online blog post that is increasingly taken up by
scientific literature [e.g., 4, 6–8]. Roughly summarized, the first wave was dominated by
guidelines and principles that demanded fair development and use of algorithms (for reviews see [
        <xref ref-type="bibr" rid="ref10 ref9">9,
10</xref>
        ]). The second wave aimed to overcome the abstract and high-level character of guidelines and
developed mathematical solutions to identify and mitigate unfair biases [e.g., 11, 12]. Häußermann
and Lütge [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] highlight that the third wave is still evolving and does not yet have a clear upshot.
Thus, no systematic reviews of the emerging third wave exist yet. This research gap motivates
our contribution, which aims to draw a clear picture of these latest advances in research on algorithmic
justice. Importantly, the mathematical fairness approaches developed in the second wave are mainly
concerned with distributive questions about the algorithmic outcome, such as the share of men and
women receiving a positive prediction. In contrast, the broader focus of the third wave includes power
dynamics and structural inequalities and thus seems to center thematic discourses typical of
relational theories of justice.
      </p>
      <p>
        Agreeing with Branford’s statement at the CEPE [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] that AI ethics currently faces a “relational
turn” which should be further encouraged, we follow our hypothesis that the third wave of
algorithmic justice is highly influenced by the thematic discourses of relational justice. We aim to
support the perspective of ‘relational algorithmic justice’. To do so, we conducted a systematic
literature review (SLR) and distilled insights from scientific contributions to the developing third
wave of algorithmic justice.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background: Structural Inequalities and Relational Justice</title>
      <p>
        As highlighted by Binns [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], the developed algorithmic fairness metrics often lack an in-depth
consideration of the moral foundations of justice. Relational justice in particular is a rather unpopular
perspective within the algorithmic fairness literature. Distributive and relational accounts share the
basic agreement that every person has equal moral worth. Thus, for both theories equality is a
central aspect, often referred to as egalitarianism [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Beyond this, the theories differ on what should
be the central concern of egalitarians – either how goods such as resources, income,
or wealth ought to be distributed (i.e., distributive justice) or social relationships,
treatment with mutual respect, and power imbalances (i.e., relational justice). Relational egalitarians
consider distributive injustices a symptom of social injustices and thus demand a focus on
root causes such as social relations and structures instead. Relational justice may therefore
also have distributive implications, but it stresses that focusing on distributive injustices alone may not
display the whole picture.
      </p>
      <p>
        Centering the roots of inequalities means focusing on oppression, domination, and unequal power
hierarchies. Equal relations demand treatment with reciprocity and mutual respect, with no one
perceiving themselves as superior or inferior to others. This refers not only to the interpersonal
level but also to the structural level, considering how social institutions treat citizens. The structural
level is fundamentally influenced by Iris Young [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], who coined the concept of structural inequalities.
These inequalities result from a sum of non-blameworthy processes that limit the capabilities of large
groups while others benefit by gaining power and privileges. The non-blameworthy processes refer
to rational decisions to prioritize personal goals, such as employers outsourcing labor to low-wage
countries [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. While this is a decision within societal rules and norms, mainly motivated by
economic incentives, it also reinforces exploitative structures in which individuals
contribute to structural injustices without bad intentions.
      </p>
      <p>To summarize, the individual level of relational justice focuses on treatment with mutual
respect, while the institutional level highlights the structural nature of injustices. Both levels of
relational justice inform the notion of relational algorithmic justice, which is investigated in a
systematic literature review.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Preliminary Results and Discussion</title>
      <p>To the best of our knowledge, no systematic reviews corresponding to the third wave of algorithmic
justice have yet been conducted. Based on the observation that the third wave of algorithmic justice
centers relational topics, our systematic literature review follows a search query based on theories of
relational egalitarianism [e.g., 15, 17, 18] enriched with technical keywords such as ‘machine learning’
or ‘algorithm’. Following predefined inclusion and exclusion criteria, we select papers focusing
on relational concerns within algorithmic justice research. The included papers are then
analyzed in detail for critical and constructive approaches of the third wave.</p>
      <p>
        First results indicate several topics underexplored within the distributive frame of algorithmic
justice. Particularly evident is the critical emphasis on the categorization and
measurement of humans, stressing that the selection of protected attributes is subjective [e.g., 19]
and reduces fluid concepts such as gender identity to discrete categories [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. This oversimplification
can lead to misrepresentation and stigmatization [21, 22]. Furthermore, the analyzed literature
highlights the interplay between algorithms, power, and capitalism, including a critical analysis of
approaches to intersectionality. Here, it stands out that current approaches to intersectional
fairness focus on subgroup fairness while failing to engage with systems of oppression [
        <xref ref-type="bibr" rid="ref19">19, 23</xref>
        ].
Additionally, the relational focus revealed several epistemic challenges of algorithmic fairness,
highlighting, for example, that the discourse of algorithmic fairness is Western-centric [24–26] and
hard-codes societal norms by treating constructed categories as facts [22, 27–29].
      </p>
      <p>
        The preliminary findings of the literature review critically highlight several issues with current
algorithmic fairness approaches. However, several authors also express the hope that algorithmic
systems could be used proactively to reduce structural injustices. In this spirit, Kasirzadeh [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] states
that “there is the potential that algorithmic systems can be used to repair some problematic structures
and to generate better ones” (p. 355). Leavy et al. [30] suggest that this might be achieved when
quantitative data is actively used to highlight and combat racism. While emphasizing the difficult
challenge of using algorithms as a driver for more justice, Zajko [31] summarizes that “there may be
ways of designing new AI systems that help to shift power, as long as this is done with the
participation of the people, groups, and communities that such efforts are intended to help” (p. 1048).
      </p>
      <p>A challenge of the outlined SLR is ensuring that the relational perspective on algorithmic
fairness also yields concrete guidance for implementation. However, this challenge lies in the
nature of the subject; several authors criticize that research on challenges without a clear
solution strategy is likely to be considered outside the scope of research on algorithmic fairness
[24, 27, 31]. Relational algorithmic justice must extend the focus beyond technical and clear-cut solutions
to consider complex structural dependencies. Thus, the strength of the presented SLR lies in the
compilation of underexplored problems and critical reflections within algorithmic justice research –
a necessary first step before well-founded guidance on relational algorithmic justice can be developed.</p>
      <p>Summarizing, this literature review and its focus on relational algorithmic justice underline the
need to consider a broader scope than the identification and mitigation of biases within
algorithmic justice research. The relational perspective extends the focus beyond technical solutions
to consider complex structural dependencies. An interdisciplinary, well-founded examination of
philosophical approaches is essential to improve the efforts of algorithmic justice research and has
the potential to enable the use of algorithms for fighting structural injustices and enabling structural
change.</p>
      <p>[21] M. Andrus and S. Villeneuve, Demographic-Reliant Algorithmic Fairness: Characterizing the
Risks of Demographic Data Collection in the Pursuit of Fairness, in: 2022 ACM Conference on
Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 2022, pp. 1709–1721.
[22] T. Krupiy, A vulnerability analysis: Theorising the impact of artificial intelligence
decision-making processes on individuals, society and human diversity from a social justice perspective,
Computer Law &amp; Security Review, vol. 38, p. 105429, 2020, doi: 10.1016/j.clsr.2020.105429.
[23] A. L. Hoffmann, Where fairness fails: data, algorithms, and the limits of antidiscrimination
discourse, Information, Communication &amp; Society, vol. 22, no. 7, pp. 900–915, 2019, doi:
10.1080/1369118X.2019.1573912.
[24] A. Birhane, Algorithmic injustice: a relational ethics approach, Patterns (New York, N.Y.),
vol. 2, no. 2, 2021, doi: 10.1016/j.patter.2021.100205.
[25] A. Gwagwa, E. Kazim, and A. Hilliard, The role of the African value of Ubuntu in global AI
inclusion discourse: A normative ethics perspective, Patterns (New York, N.Y.), vol. 3, no. 4, p.
100462, 2022, doi: 10.1016/j.patter.2022.100462.
[26] Z. Tacheva, Taking a critical look at the critical turn in data science: From “data feminism” to
transnational feminist data science, Big Data &amp; Society, vol. 9, no. 2, 2022,
doi: 10.1177/20539517221112901.
[27] B. Green, Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic
Fairness, Philosophy &amp; Technology, vol. 35, no. 4, 2022, doi: 10.1007/s13347-022-00584-6.
[28] C. Lu, J. Kay, and K. R. McKee, Subverting machines, fluctuating identities: Re-learning human
categorization, in: FAccT ’22, 2022, pp. 1005–1015.
[29] A. Zimmermann and C. Lee-Stronach, Proceed with Caution, Canadian Journal of Philosophy,
vol. 52, no. 1, pp. 6–25, 2022, doi: 10.1017/can.2021.17.
[30] S. Leavy, E. Siapera, and B. O’Sullivan, Ethical Data Curation for AI: An Approach based on
Feminist Epistemology and Critical Theories of Race, in: Conference on Artificial Intelligence,
Ethics and Society (AIES), Virtual Event, USA, 2021.
[31] M. Zajko, Conservative AI and social inequality: conceptualizing alternatives to bias through
social theory, AI &amp; SOCIETY, vol. 36, no. 3, pp. 1047–1056, 2021, doi: 10.1007/s00146-021-01153-9.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kasirzadeh</surname>
          </string-name>
          ,
          <article-title>Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy</article-title>
          ,
          <source>in: AIES '22: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery (ACM)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>348</fpage>
          -
          <lpage>356</lpage>
          . Accessed: Jan. 26, 2023.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Arneson</surname>
          </string-name>
          , Egalitarianism, in: The Stanford Encyclopedia of Philosophy, Edward N. Zalta, Ed., Metaphysics Research Lab, Stanford University,
          <year>2013</year>
          . URL: https://plato.stanford.edu/entries/egalitarianism/
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Häußermann</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Lütge</surname>
          </string-name>
          ,
          <article-title>Community-in-the-loop: towards pluralistic value creation in AI, or-why AI needs business ethics</article-title>
          ,
          <source>AI Ethics</source>
          , vol.
          <volume>2</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>341</fpage>
          -
          <lpage>362</lpage>
          ,
          <year>2022</year>
          , doi: 10.1007/s43681-021-00047-2.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L. T.-L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-T.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-R.</given-names>
            <surname>Huang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.-W.</given-names>
            <surname>Hung</surname>
          </string-name>
          ,
          <article-title>Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy</article-title>
          ,
          <source>Feminist Philosophy Quarterly</source>
          , vol.
          <volume>8</volume>
          ,
          <issue>3/4</issue>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Kind</surname>
          </string-name>
          ,
          <article-title>The term 'ethical AI' is finally starting to mean something</article-title>
          ,
          <source>VentureBeat</source>
          , 23 Aug.,
          <year>2020</year>
          . URL: https://venturebeat.com/ai/the-term-ethical-ai-is-finally-starting-to-mean-something/ (accessed: Jun. 7, 2023).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Borg</surname>
          </string-name>
          ,
          <article-title>Four investment areas for ethical AI: Transdisciplinary opportunities to close the publication-to-practice gap</article-title>
          ,
          <source>Big Data &amp; Society</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>2</issue>
          ,
          <year>2021</year>
          , doi: 10.1177/20539517211040197.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Braun</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Hummel</surname>
          </string-name>
          ,
          <article-title>Data justice and data solidarity</article-title>
          ,
          <source>Patterns</source>
          (New York, N.Y.), vol.
          <volume>3</volume>
          , no.
          <issue>3</issue>
          , p.
          <fpage>100427</fpage>
          ,
          <year>2022</year>
          , doi: 10.1016/j.patter.2021.100427.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Burr</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Leslie</surname>
          </string-name>
          ,
          <article-title>Ethical assurance: a practical approach to the responsible design, development, and deployment of data-driven technologies</article-title>
          ,
          <source>AI Ethics</source>
          , vol.
          <volume>3</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>73</fpage>
          -
          <lpage>98</lpage>
          ,
          <year>2023</year>
          , doi: 10.1007/s43681-022-00178-0.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cachat-Rosset</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Klarsfeld</surname>
          </string-name>
          ,
          <article-title>Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines</article-title>
          ,
          <source>Applied Artificial Intelligence</source>
          , vol.
          <volume>37</volume>
          , no.
          <issue>1</issue>
          , p.
          <fpage>2176618</fpage>
          ,
          <year>2023</year>
          , doi: 10.1080/08839514.2023.2176618.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jobin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ienca</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Vayena</surname>
          </string-name>
          ,
          <article-title>The global landscape of AI ethics guidelines</article-title>
          ,
          <source>Nat Mach Intell</source>
          , vol.
          <volume>1</volume>
          , no.
          <issue>9</issue>
          , pp.
          <fpage>389</fpage>
          -
          <lpage>399</lpage>
          ,
          <year>2019</year>
          , doi: 10.1038/s42256-019-0088-2.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Caton</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Haas</surname>
          </string-name>
          ,
          <article-title>Fairness in Machine Learning: A Survey</article-title>
          ,
          <source>ACM Computing Surveys</source>
          ,
          <year>2020</year>
          , doi: 10.48550/arXiv.2010.04053.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Pessach</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Shmueli</surname>
          </string-name>
          ,
          <article-title>A Review on Fairness in Machine Learning</article-title>
          ,
          <source>ACM Computing Surveys</source>
          , vol.
          <volume>55</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>44</lpage>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Branford</surname>
          </string-name>
          ,
          <article-title>Experiencing AI and the Relational 'Turn' in AI Ethics</article-title>
          , International Conference on Computer Ethics,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Binns</surname>
          </string-name>
          ,
          <article-title>Fairness in Machine Learning: Lessons from Political Philosophy</article-title>
          ,
          <source>in: Proceedings of Machine Learning Research</source>
          , New York,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          . URL: https://ssrn.com/abstract=3086546
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Young</surname>
          </string-name>
          , Ed.,
          <source>Justice and the politics of difference</source>
          . Princeton: Princeton University Press,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Young</surname>
          </string-name>
          ,
          <source>Responsibility for justice</source>
          . New York, Oxford: Oxford University Press,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>E.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <article-title>What is the Point of Equality?</article-title>
          ,
          <source>Ethics</source>
          , vol.
          <volume>109</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>287</fpage>
          -
          <lpage>337</lpage>
          ,
          <year>1999</year>
          , doi: 10.1086/233897.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.</given-names>
            <surname>Nath</surname>
          </string-name>
          ,
          <article-title>Relational egalitarianism</article-title>
          ,
          <source>Philosophy Compass</source>
          , vol.
          <volume>15</volume>
          , no.
          <issue>7</issue>
          ,
          <year>2020</year>
          , doi: 10.1111/phc3.12686.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <article-title>Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis</article-title>
          ,
          <source>in: 2022 ACM Conference on Fairness, Accountability, and Transparency</source>
          , Seoul Republic of Korea,
          <year>2022</year>
          , pp.
          <fpage>485</fpage>
          -
          <lpage>494</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>N.</given-names>
            <surname>Tomasev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. R.</given-names>
            <surname>McKee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <article-title>Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities</article-title>
          ,
          <source>in: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society</source>
          , Virtual Event, USA,
          <year>2021</year>
          , pp.
          <fpage>254</fpage>
          -
          <lpage>265</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>