<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IIR</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Dissatisfaction Induced by Pairwise Swaps</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alessandro Fabris</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianmaria Silvello</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gian Antonio Susto</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Asia J. Biega</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Max Planck Institute for Security and Privacy</institution>
          ,
          <addr-line>Bochum</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Padova</institution>
          ,
          <addr-line>Padova</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>13</volume>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>Fairness is increasingly recognized as an important property of information access systems. Pairwise fairness is a measure of equity in ranking whose normative grounding has not been clearly studied or discussed in the literature. In this work, we target this gap by providing a clear interpretation for this family of measures, by demonstrating and remedying its key limitations, and by analysing its relationship to other measures of fair ranking.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Information Access Systems (IAS) have become increasingly prominent in recent years as
they help users interact with large amounts of content through the ranking and presentation
of items based on their estimated relevance or merit [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. In the context of IAS, content
producers are now seen as important stakeholders, whose economic and societal needs should
be considered alongside those of consumers to promote a fair and productive information ecosystem
[
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ]. Algorithmic fairness [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ] is a research field concerned with ensuring equitable
algorithmic outcomes through specific measures [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ], algorithmic designs [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], and
auditing procedures [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ].
      </p>
      <p>
        In this work, we explore the pairwise fairness family of ranking measures [
        <xref ref-type="bibr" rid="ref15">15, 16, 17</xref>
        ], providing
a new interpretation based on browsing models and highlighting limitations of existing metrics.
We propose a new metric that overcomes these limitations by modeling realistic browsing
behaviors and individual provider perspectives. This new measure captures aspects of observed
unfairness and dissatisfaction, specifically related to how content producers perceive the quality
of the IAS. Additionally, we characterize the relationship between pairwise and exposure-based
fairness measures both analytically and empirically. Overall, we contribute a new interpretation
of pairwise fairness, propose a novel metric, and study the relationship between pairwise and
exposure-based fairness.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Work</title>
      <p>Notation. We let $i \in \mathcal{I}$ denote an item in a set to be ranked, and $r_i$ denote its relevance.
We let $g \in G = \{a, b\}$ indicate a (binary, for ease of exposition) sensitive attribute (we follow the
literature on pairwise fairness and consider binary sensitive attributes), and $g_i \in G$ the membership
of $i$ in a group. We use $\sigma^*$ for an “ideal” ranking, i.e., a permutation which orders
items decreasingly by relevance: $\sigma^* = \mathrm{argsort}(r)$. Finally, we let $\sigma$ denote a ranking returned
by the IAS in response to a query, and $\sigma(j)$ indicate the item ranked by $\sigma$ in position $j$.
Discordant pairs. The notion of discordant pair is key to pairwise fairness. Two items $i, k \in \mathcal{I}$
form a discordant pair if their relative order in $\sigma^*$ and $\sigma$ is different. Let $\sigma^{-1}(i)$ indicate the
position of item $i$ in $\sigma$, i.e., $\sigma^{-1}(i) = j \iff \sigma(j) = i$. The indicator function for a discordant
pair in rankings $\sigma$ and $\sigma^*$ is defined as
$$D(i, k) = \underbrace{\mathbb{1}\big(\sigma^{-1}(i) &lt; \sigma^{-1}(k),\ \sigma^{*-1}(i) &gt; \sigma^{*-1}(k)\big)}_{D_f(i, k)} + \underbrace{\mathbb{1}\big(\sigma^{-1}(i) &gt; \sigma^{-1}(k),\ \sigma^{*-1}(i) &lt; \sigma^{*-1}(k)\big)}_{D_u(i, k)}$$
In other words, $i$ is in a discordant pair when ranking $\sigma$ unfairly puts it at an advantage ($D_f$) or
at a disadvantage ($D_u$) with respect to item $k$; subscripts $f$ and $u$ denote that the first item is in a Favorable
Discordant Pair (FDP) or an Unfavorable Discordant Pair (UDP).</p>
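      <p>To make the definitions concrete, the following Python sketch (our own rendition, not from the original paper; the names fdp and udp are hypothetical) implements the two indicator functions, assuming rankings are given as lists of item identifiers ordered from best to worst position.</p>
      <preformat>
def inverse(ranking):
    """Map each item to its (0-based) position in the ranking."""
    return {item: pos for pos, item in enumerate(ranking)}

def fdp(i, k, sigma, sigma_star):
    """Favorable Discordant Pair: sigma ranks i above k, the ideal ranking sigma_star below."""
    pos, ideal = inverse(sigma), inverse(sigma_star)
    return int(pos[i] &lt; pos[k] and ideal[i] &gt; ideal[k])

def udp(i, k, sigma, sigma_star):
    """Unfavorable Discordant Pair: sigma ranks i below k, the ideal ranking sigma_star above."""
    pos, ideal = inverse(sigma), inverse(sigma_star)
    return int(pos[i] &gt; pos[k] and ideal[i] &lt; ideal[k])

sigma_star = ["a", "b", "c"]  # ideal ranking: decreasing relevance
sigma = ["b", "a", "c"]       # system ranking with one swap
assert udp("a", "b", sigma, sigma_star) == 1  # "a" is unfairly disadvantaged
assert fdp("b", "a", sigma, sigma_star) == 1  # "b" is unfairly advantaged
      </preformat>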
      <p>
        Pairwise Fairness. Inter-Group Inaccuracy (IGI) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and Rank Equality Error (REE) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] are the two most popular measures of pairwise fairness, defined as
$$\frac{1}{N} \cdot \sum_{i \in a} \sum_{k \in b} D_u(i, k), \quad (1)$$
where $N$ is a normalizing constant. The literature lacks an explicit discussion of the normative
reasoning behind these pairwise fairness metrics and the construct they capture. For instance,
for IGI, Beutel et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] “draw on the intuition of Hardt et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] for equality of odds, where
the fairness of a classifier is quantified by comparing its false positive rate and/or false negative
rate.”, while REE is based on the “postulate that there is value in considering error-based fairness
criteria for rankings” [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
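      <p>As an illustration, a score in this family (Equation (1)) can be computed as in the sketch below; this is our own minimal rendition, with the normalizing constant $N$ assumed, for concreteness, to be the number of cross-group pairs.</p>
      <preformat>
def pairwise_unfairness(sigma, sigma_star, group, a, b):
    """Normalized count of cross-group UDPs: items of group a ranked below
    less relevant items of group b (Equation (1))."""
    pos = {item: j for j, item in enumerate(sigma)}
    ideal = {item: j for j, item in enumerate(sigma_star)}
    pairs = [(i, k) for i in sigma if group[i] == a
                    for k in sigma if group[k] == b]
    if not pairs:
        return 0.0
    udps = sum(int(pos[i] &gt; pos[k] and ideal[i] &lt; ideal[k]) for i, k in pairs)
    return udps / len(pairs)  # N assumed to be the number of cross-group pairs
      </preformat>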
    </sec>
    <sec id="sec-3">
      <title>3. What does Pairwise Fairness Actually Measure?</title>
      <p>Browsing model. To provide an interpretation for pairwise fairness, we begin by deriving
the user browsing model it implicitly assumes. REE and IGI are related to Kendall’s Tau [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], according to which the inaccuracy of a ranking can be written as</p>
      <p>
$$\tau = \frac{1}{N} \cdot \sum_{i} \sum_{k \neq i} D_u(i, k) = \frac{1}{N} \sum_{j=0}^{n-1} \sum_{j'=0}^{j-1} p(j')\, D_u(j, j'), \quad (2)$$
where we use $D_u(j, j') = D_u(\sigma(j), \sigma(j'))$ as shorthand notation for a discordant pair of items
ranked by $\sigma$ at positions $(j, j')$. Moreover, we let $p(j)$ denote the probability that users will
visit the item $\sigma(j)$. The equality in Equation (2) holds under a trivial browsing model where
users visit all items with the same probability $p(j) = 1\ \forall j$.</p>
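      <p>A minimal sketch of the rank-based rewriting in Equation (2) follows (our own rendition, with the normalizing constant assumed to be the number of item pairs); passing $p(j) = 1$ for all $j$ recovers the uniform browsing model.</p>
      <preformat>
def dissatisfaction(sigma, sigma_star, p):
    """Aggregate producer dissatisfaction: p(j')-weighted count of UDPs (Equation (2))."""
    ideal = {item: j for j, item in enumerate(sigma_star)}
    n = len(sigma)
    total = 0.0
    for j in range(n):       # item sigma(j) under evaluation
        for jp in range(j):  # items that sigma ranks above it
            # UDP at (j, j'): sigma(j') sits above sigma(j) despite lower relevance
            if ideal[sigma[jp]] &gt; ideal[sigma[j]]:
                total += p(jp)  # weighted by the visibility of the favored item
    return total / (n * (n - 1) / 2)  # assumed normalizer: number of item pairs

uniform = lambda j: 1.0  # trivial browsing model under which Equation (2) holds
print(dissatisfaction(["b", "a", "c"], ["a", "b", "c"], uniform))
      </preformat>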
      <p>Interpretation. At rank $j$, item producers evaluate ranking $\sigma$ by focusing on the most visible
cases of unfair treatment against their item $\sigma(j)$. Their dissatisfaction with $\sigma$ grows each time
they encounter a UDP for $\sigma(j)$, i.e., each time an item of lesser relevance is ranked better than their
own. The inner summation $\sum_{j'=0}^{j-1} p(j')\, D_u(j, j')$ represents a weighted count of UDPs, with
weights proportional to the visibility of the unjustly favored items. Kendall’s Tau can thus be interpreted
as operationalizing aggregate producer dissatisfaction with $\sigma$ for unjustly favoring other items.</p>
      <p>This interpretation also applies to pairwise fairness (Equation (1)) by focusing on cross-group
comparisons: the formulation summarizes the dissatisfaction of items and their producers in one group
due to being unfairly ranked lower than items of lesser relevance from another group. Pairwise
fairness thus communicates observed injustice, which can affect perceptions of platform quality
[
        <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
        ] and influence the loyalty of item producers [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].</p>
    </sec>
    <sec id="sec-4">
      <title>4. Current Limitations and Proposed Improvements</title>
      <p>
        Fabris et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] describe several limitations of pairwise fairness and overcome them with targeted
reformulations, two of which are presented below.
      </p>
      <p>Top-heaviness. Pairwise fairness metrics do not consider realistic browsing behaviors: they
assign a uniform visit probability to all ranks, whereas in practice the top ranking positions are
far more likely to be visited by searchers, and this should be accounted for in the metrics. As
shown above, pairwise fairness measures can accommodate user browsing models $p(j)$:</p>
      <p>
$$\frac{1}{N} \sum_{i \in a} \sum_{j=0}^{\sigma^{-1}(i)-1} p(j)\, D_u(i, \sigma(j)) \cdot \mathbb{1}\big(\sigma(j) \in b\big). \quad (3)$$
The IAS literature has proposed and studied several top-heavy user models, including logarithmic
($p(j) \propto 1/\log(1+j)$ [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]) and exponential discount ($p(j) \propto \gamma^{j}$ [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]).</p>
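      <p>For concreteness, the two discount families can be sketched as follows; the logarithm base, the offset keeping rank 0 finite, and the persistence parameter gamma are our assumptions in the spirit of [22, 23].</p>
      <preformat>
import math

def p_log(j):
    """Logarithmic discount in the spirit of DCG [22]; rank 0 gets weight 1."""
    return 1.0 / math.log2(j + 2)  # offset so the 0-based top rank stays finite

def p_exp(j, gamma=0.8):
    """Exponential discount in the spirit of RBP [23], with persistence gamma."""
    return gamma ** j
      </preformat>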
      <p>Tie handling. Pairwise fairness metrics such as IGI and REE do not consider ties in relevance
scores. Ties are common in practical applications like recommender systems and information
retrieval, where relevance judgments are often discrete or quantized. This means that IAS that
favor a group by breaking ties in its favor are not flagged as problematic by IGI or REE.</p>
      <p>Since $\sigma^* = \mathrm{argsort}(r)$, we rewrite the indicator function for UDPs as
$D_u(i, k) = \mathbb{1}(\sigma^{-1}(i) &gt; \sigma^{-1}(k),\ r_i &gt; r_k)$. We generalize UDPs to handle ties as
$$D_u^t(i, k) = \mathbb{1}\big(\sigma^{-1}(i) &gt; \sigma^{-1}(k),\ r_i &gt; r_k\big) + t \cdot \mathbb{1}\big(\sigma^{-1}(i) &gt; \sigma^{-1}(k),\ r_i = r_k\big). \quad (4)$$
Here $t$ indicates the dissatisfaction of an item ranked worse than another of the same relevance;
we call this a partial UDP. Values for $t$ range in $[0, 1]$, where $t = 0$ indicates indifference to ties,
while $t = 1$ corresponds to partial UDPs leading to the same dissatisfaction as regular UDPs.</p>
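      <p>A minimal sketch of the tie-aware indicator of Equation (4) (our own rendition, with rel mapping items to relevance scores):</p>
      <preformat>
def udp_tie_aware(i, k, sigma, rel, t=0.5):
    """UDP indicator with partial credit t for ties broken against item i (Equation (4))."""
    pos = {item: j for j, item in enumerate(sigma)}
    if pos[i] &gt; pos[k] and rel[i] &gt; rel[k]:
        return 1.0  # regular UDP: a less relevant item is ranked above i
    if pos[i] &gt; pos[k] and rel[i] == rel[k]:
        return t    # partial UDP: a tie is broken against i
    return 0.0

rel = {"a": 3, "b": 3, "c": 1}
print(udp_tie_aware("b", "a", ["a", "b", "c"], rel))  # 0.5: tie broken against "b"
      </preformat>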
      <p>[Figure 1: (a) Synthetic data. (b) Real-world data.]</p>
    </sec>
    <sec id="sec-5">
      <title>5. Relation to Exposure-based Fairness</title>
      <p>Based on the limitations and improvements discussed above, we propose Dissatisfaction Induced
by Pairwise Swaps (DIPS), a new pairwise fairness measure defined as
$$\mathrm{DIPS} = \frac{1}{N_{\mathrm{DIPS}}} \sum_{j=0}^{n-1} \sum_{j'=0}^{j-1} p(j')\, D_u^t(j, j') \cdot \mathbb{1}\big(\sigma(j) \in a,\ \sigma(j') \in b\big), \quad (5)$$
which can model top-heavy browsing models $p(j)$ and handle ties through parameter $t$ in
the definition of $D_u^t(\cdot, \cdot)$. In Figure 1, we compare DIPS with Equity of Attention (EA) [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] and Expected Exposure (EE) [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] on both a synthetic and a real-world dataset. These experiments,
presented in more detail in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], along with an analytical comparison between these measures,
yield two key interpretations. On one hand, DIPS inherits a top-heavy behavior from $p(j)$ and
is thus suited to highlight UDPs at highly visible ranks, similarly to EA and EE and in opposition
to REE (Figure 1a). On the other hand, DIPS captures a different construct from EE and EA,
enabling a desirable outcome: fairness interventions in favor of a group can have a sizeable
impact on group equity, as measured by EA and EE, while keeping dissatisfaction low for
the privileged group, as measured by DIPS (Figure 1b).
      </p>
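      <p>Putting the pieces together, the following is a self-contained sketch of DIPS (Equation (5)); note that the normalizer used here, the maximum attainable dissatisfaction over cross-group pairs, is our assumption for concreteness and may differ from the paper’s $N_{\mathrm{DIPS}}$.</p>
      <preformat>
def dips(sigma, rel, group, a, b, p, t=0.5):
    """Dissatisfaction Induced by Pairwise Swaps for group a against group b
    (Equation (5)): browsing model p, tie-handling parameter t."""
    total, norm = 0.0, 0.0
    for j, item in enumerate(sigma):  # candidate dissatisfied item sigma(j)
        for jp in range(j):           # items ranked above it
            other = sigma[jp]
            if group[item] != a or group[other] != b:
                continue  # only cross-group pairs contribute
            norm += p(jp)  # assumed normalizer: maximum attainable dissatisfaction
            if rel[other] &lt; rel[item]:
                total += p(jp)      # regular UDP
            elif rel[other] == rel[item]:
                total += t * p(jp)  # partial UDP: tie broken against group a
    return total / norm if norm else 0.0
      </preformat>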
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>Our work motivates and generalizes pairwise fairness in ranking by retrospectively mapping it
to the construct of producer dissatisfaction, highlighting its current limitations and proposing
specific improvements. We also compare it to other families of fair ranking measures. We add
to the ongoing discussion about the normative reasoning of algorithmic fairness, supporting an
informed and contextualized adoption of these measures.
</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          , G. Silvello,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Susto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Biega</surname>
          </string-name>
          ,
          <article-title>Pairwise fairness in ranking as a dissatisfaction measure</article-title>
          ,
          <source>in: Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining</source>
          , WSDM '23, Association for Computing Machinery, New York, NY, USA,
          <year>2023</year>
          , p.
          <fpage>931</fpage>
          -
          <lpage>939</lpage>
          . URL: https://doi.org/10.1145/3539597.3570459. doi:10.1145/3539597.3570459.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Baeza-Yates</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ribeiro-Neto</surname>
          </string-name>
          , et al.,
          <source>Modern information retrieval</source>
          , volume
          <volume>463</volume>
          , ACM press New York,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Joachims</surname>
          </string-name>
          ,
          <article-title>Optimizing search engines using clickthrough data</article-title>
          ,
          <source>in: Proc. of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '02</source>
          , ACM,
          <year>2002</year>
          , p.
          <fpage>133</fpage>
          -
          <lpage>142</lpage>
          . URL: https://doi.org/10.1145/775047.775067. doi:10.1145/775047.775067.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Ekstrand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Burke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Diaz</surname>
          </string-name>
          ,
          <article-title>Fairness and discrimination in information access systems</article-title>
          ,
          <source>arXiv preprint arXiv:2105.05779</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Pitoura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Stefanidis</surname>
          </string-name>
          , G. Koutrika,
          <article-title>Fairness in rankings and recommendations: an overview</article-title>
          ,
          <source>The VLDB Journal</source>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zehlike</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stoyanovich</surname>
          </string-name>
          ,
          <article-title>Fairness in ranking, part i: Score-based ranking</article-title>
          ,
          <source>ACM Comput. Surv</source>
          . (
          <year>2022</year>
          . URL: https://doi.org/10.1145/3533379. doi:10.1145/3533379. Just accepted.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Narayanan</surname>
          </string-name>
          ,
          <article-title>Fairness and Machine Learning</article-title>
          , fairmlbook.org,
          <year>2019</year>
          . http://www.fairmlbook.org.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Messina</surname>
          </string-name>
          , G. Silvello,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Susto</surname>
          </string-name>
          ,
          <article-title>Algorithmic fairness datasets: the story so far</article-title>
          ,
          <source>Data Mining and Knowledge Discovery</source>
          <volume>36</volume>
          (
          <year>2022</year>
          )
          <fpage>2074</fpage>
          -
          <lpage>2152</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Castelnovo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Crupi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Regoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. G.</given-names>
            <surname>Penco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Cosentini</surname>
          </string-name>
          ,
          <article-title>A clarification of the nuances in the fairness metrics landscape</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>12</volume>
          (
          <year>2022</year>
          )
          <fpage>4209</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Raj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Ekstrand</surname>
          </string-name>
          ,
          <article-title>Measuring fairness in ranked results: An analytical and empirical comparison</article-title>
          ,
          <source>in: Proc. of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Price</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Srebro</surname>
          </string-name>
          ,
          <article-title>Equality of opportunity in supervised learning</article-title>
          ,
          <source>in: Proc. of the 29th Annual Conference on Neural Information Processing Systems (NIPS</source>
          <year>2016</year>
          ), Barcelona, Spain,
          <year>2016</year>
          , pp.
          <fpage>3323</fpage>
          -
          <lpage>3331</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Geyik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ambler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kenthapadi</surname>
          </string-name>
          ,
          <article-title>Fairness-aware ranking in search &amp; recommendation systems with application to linkedin talent search</article-title>
          ,
          <source>in: Proceedings of the 25th acm sigkdd international conference on knowledge discovery &amp; data mining</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>2221</fpage>
          -
          <lpage>2231</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gottardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Carletti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Daicampi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Susto</surname>
          </string-name>
          , G. Silvello,
          <article-title>Algorithmic audit of italian car insurance: Evidence of unfairness in access and pricing</article-title>
          ,
          <source>in: Proceedings of the 2021 AAAI/ACM Conference on AI</source>
          , Ethics, and Society, AIES '21, Association
          for Computing Machinery, New York, NY, USA,
          <year>2021</year>
          , p.
          <fpage>458</fpage>
          -
          <lpage>468</lpage>
          . URL: https://doi.org/10.1145/3461702.3462569. doi:10.1145/3461702.3462569.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Esuli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moreo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sebastiani</surname>
          </string-name>
          ,
          <article-title>Measuring fairness under unawareness of sensitive attributes: A quantification-based approach</article-title>
          ,
          <source>Journal of Artificial Intelligence Research</source>
          <volume>76</volume>
          (
          <year>2023</year>
          )
          <fpage>1117</fpage>
          -
          <lpage>1180</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Beutel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Doshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Heldt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. H.</given-names>
            <surname>Chi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Goodrow</surname>
          </string-name>
          ,
          <article-title>Fairness in recommendation ranking through pairwise comparisons</article-title>
          , in: Proc. of the 25th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining, KDD '19, ACM, 2019, pp. 2212-2220.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] C. Kuhlman, M. VanValkenburg, E. Rundensteiner, FARE: Diagnostics for fair ranking using pairwise error metrics, in: The World Wide Web Conference, WWW '19, ACM, 2019, pp. 2936-2942. URL: https://doi.org/10.1145/3308558.3313443. doi:10.1145/3308558.3313443.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] H. Narasimhan, A. Cotter, M. Gupta, S. Wang, Pairwise fairness for ranking and regression, Proc. of the AAAI Conference on Artificial Intelligence 34 (2020) 5248-5255. URL: https://ojs.aaai.org/index.php/AAAI/article/view/5970. doi:10.1609/aaai.v34i04.5970.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] M. G. Kendall, A new measure of rank correlation, Biometrika 30 (1938) 81-93.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] R. Dudley, Amazon's new competitive advantage: Putting its own products first, https://www.propublica.org/article/amazons-new-competitive-advantage-putting-its-own-products-first, 2020.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] A. Jeffries, L. Yin, Amazon puts its own "brands" above better rated products, https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products, 2021.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] J. Kim, Platform quality factors influencing content providers' loyalty, Journal of Retailing and Consumer Services 60 (2021) 102510. URL: https://www.sciencedirect.com/science/article/pii/S096969892100076X. doi:10.1016/j.jretconser.2021.102510.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] K. Järvelin, J. Kekäläinen, Cumulated gain-based evaluation of IR techniques, ACM Trans. Inf. Syst. 20 (2002) 422-446. URL: https://doi.org/10.1145/582415.582418. doi:10.1145/582415.582418.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] A. Moffat, J. Zobel, Rank-biased precision for measurement of retrieval effectiveness, ACM Trans. Inf. Syst. 27 (2008). URL: https://doi.org/10.1145/1416950.1416952. doi:10.1145/1416950.1416952.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] A. J. Biega, K. P. Gummadi, G. Weikum, Equity of attention: Amortizing individual fairness in rankings, in: The 41st International ACM SIGIR Conference on Research &amp; Development in Information Retrieval, SIGIR '18, ACM, 2018, pp. 405-414.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] F. Diaz, B. Mitra, M. D. Ekstrand, A. J. Biega, B. Carterette, Evaluating stochastic rankings with expected exposure, in: Proc. of the 29th ACM International Conference on Information &amp; Knowledge Management, CIKM '20, ACM, 2020, pp. 275-284. URL: https://doi.org/10.1145/3340531.3411962. doi:10.1145/3340531.3411962.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>