<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Continuous Sensitive Attributes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Luca Giuliani</string-name>
          <email>luca.giuliani13@unibo.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eleonora Misino</string-name>
          <email>eleonora.misino2@unibo.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberta Calegari</string-name>
          <email>roberta.calegari@unibo.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Lombardi</string-name>
          <email>michele.lombardi2@unibo.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Alma Mater Studiorum-Università di Bologna</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Recent advancements have made significant progress in addressing fair ranking and fairness with continuous sensitive attributes as separate challenges. However, their intersection remains underexplored, although crucial for guaranteeing a wider applicability of fairness requirements. In many real-world contexts, sensitive attributes such as age, weight, income, or degree of disability are measured on a continuous scale rather than in discrete categories. Addressing the continuous nature of these attributes is essential for ensuring effective fairness in such scenarios. This work aims to fill the gap in the existing literature by proposing a novel methodology that integrates state-of-the-art techniques to address long-term fairness in the presence of continuous protected attributes. We demonstrate the effectiveness and flexibility of our approach using real-world data.</p>
      </abstract>
      <kwd-group>
        <kwd>fair AI</kwd>
        <kwd>fair ranking</kwd>
        <kwd>long-term fairness</kwd>
        <kwd>continuous sensitive attributes</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        Several approaches have been proposed to enforce fairness in ranking [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]; however, these approaches often focus on a single ranking, as if the AI system only produces one ranking throughout its lifetime, failing to consider that the ranking process is repeated over time. Considering the lifespan of an AI system, it becomes essential to ensure that the system can be deemed fair across all rankings produced, ensuring what is known as long-term fairness [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The study and assurance of long-term fairness are necessary to guarantee consistent and unbiased treatment across multiple iterations of the AI system, ensuring that biases do not accumulate or shift over time. If fairness is only considered for individual rankings, it may lead to temporary fairness that can fluctuate, resulting in long-term disparities. Furthermore, current approaches to fair ranking typically only work with categorical sensitive attributes. However, in various real-world scenarios, sensitive attributes like income or degree of disability are continuous rather than discrete. Consequently, effectively managing their continuous nature is necessary for assessing and ensuring fairness. While there are studies focusing on fairness concerning continuous sensitive attributes, they do not intersect with existing work on fair ranking.
      </p>
      <p>This work aims to fill the gap in the existing literature by proposing a methodology that integrates state-of-the-art techniques to address long-term fairness in the presence of continuous protected attributes.</p>
      <p>The paper is structured as follows. Section 2 aims at placing our work within the existing
literature on fair machine learning, focusing on applications of fairness in ranking and fairness
with continuous sensitive attributes. In Section 3, we provide the essential technical background
required to understand the details and significance of our approach, as it incorporates different
state-of-the-art techniques and frameworks. Following this, we describe the specific aspects
of our contribution in Section 4, where we present our methodology grounded on a specific
use case. We outline the main results of our empirical evaluation in Section 5. Finally, we
summarize our findings in Section 6 and highlight potential directions for future investigation.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <p>To the best of our knowledge, no previous work has addressed the task of fair ranking with
continuous sensitive attributes. Still, there has been a significant growth in publications over the
last decade in the two distinct fields, both stemming from the broader domain of fair machine
learning. Hereby, we summarize the key developments in these areas as a means to effectively
frame our work within the current state of the art.</p>
      <sec id="sec-3-1">
        <title>2.1. Fair Machine Learning</title>
        <p>
          Mehrabi et al. [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] categorize fair machine learning methods into three major groups, namely
pre-processing, in-processing, and post-processing. This categorization is based on the timing
of debiasing interventions. For example, pre-processing methods can be applicable when
there is an opportunity to alter training data [
          <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
          ]. In contrast, in-processing methods
are used when the inherent training procedure of the machine learning model is modified,
either by loss regularizers or other types of constraint injection [
          <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8, 9, 10, 11</xref>
          ]. Lastly,
post-processing methods are employed when the algorithm must operate on an already trained
model, treating it as a black box and reassigning output labels through a specific function in the
post-processing stage [
          <xref ref-type="bibr" rid="ref12 ref13 ref14">12, 13, 14</xref>
          ]. Our research aligns with the third category, as we build on
the work by [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] regarding the FAiRDAS framework, which aims to ensure sustained fairness
in ranking systems by post-processing the results produced by the learned model in successive
batches, independently of the characteristics of the model itself.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Fairness in Ranking Applications</title>
        <p>
          In their survey [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], Zehlike et al. distinguish between two types of fair ranking algorithms: (1)
score-based methods, which use a predefined ranking function and allow the bias mitigation step to intervene on either the initial scores of the candidates, the ranking function itself, or the final ranked outcome, and (2) supervised learning-to-rank methods, which train the ranking function on data and can thus be further categorized as in Section 2.1. Interestingly, the authors note that post-processing methods for learning-to-rank handle fairness constraints similarly to score-based methods. Under this lens, FAiRDAS can be seen as both a learning-to-rank application imposing constraints on model-predicted scores, and a score-based method enforcing fairness by adjusting original scores, whether generated by a model or given as gold standards.
        </p>
        <p>
          Most fair ranking approaches employ top-𝑘 proportional representation as a fairness metric. Namely, they try to ensure an equal representation of protected groups in the first 𝑘 candidates. For example, among the post-processing fairness methods for learning-to-rank, [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] and [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] adjust the positions of the candidates in the final ranking to meet certain minimal (and optionally maximal) requirements per subgroup. These methods treat top-𝑘 rankings as sets, hence disregarding the position of candidates. In contrast, [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] and [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] take the position into account by addressing the visibility bias rather than the score itself; in fact, the exposure of candidates has been shown to decrease geometrically with their ranking position, as defined by their score. Moreover, the latter work proposes a methodology to dynamically change rankings for the same query to achieve equal attention over time, thus inherently incorporating long-term fairness effects within its framework, although at a query level only. For a more comprehensive overview of bias mitigation in ranking at different stages of the pipeline and using different methods, we refer the reader to the original survey.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>2.3. Fairness with Continuous Protected Attributes</title>
        <p>
          In the last few years, some works have proposed new metrics and computational methodologies to address continuous sensitive attributes in fairness enforcement tasks. Among them, [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] adopts for the first time the Hirschfeld–Gebelein–Rényi (HGR) correlation coefficient as a way to enforce model debiasing over continuous protected features. This metric, also referred to as the maximal correlation coefficient, is defined as the highest Pearson correlation that can be obtained by transforming random variables into non-linear spaces through copula transformations. For this reason, its computation poses significant difficulties, yet various simplifications and approximations have been developed over recent years. Specifically, [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] introduced a differentiable way to calculate a lower bound of the metric using kernel-density estimation techniques, thus paving the way for its application as a loss regularizer in gradient-based learning algorithms. That work was subsequently improved by [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], whose novel computational technique based on two adversarial neural networks was shown to outperform the former.
        </p>
        <p>
          A parallel effort was undertaken by [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ], who introduced an indicator named Generalized Disparate Impact (GeDI) by slightly modifying the formulation of HGR to better adhere to the legal concept of “Disparate Impact”. Disparate impact arises when a seemingly impartial practice adversely affects a protected group, and a first method to measure it in both regression and classification scenarios was introduced by [23], who proposed a novel fairness metric called the Disparate Impact Discrimination Index (DIDI). The Generalized Disparate Impact indicator straightforwardly extends this metric to the case of continuous inputs where, as usual, higher GeDI values signify a greater disparity concerning the chosen protected attribute.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Background</title>
      <p>
        In this section, we provide a formalization of the ranking problem general enough to model
our case study and other similar applications. Next, we introduce FAiRDAS [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], a general
framework designed to address long-term fairness in ranking systems. Finally, we describe
the Generalized Disparate Impact (GeDI) indicator [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], which we utilize to effectively handle
continuous protected attributes.
      </p>
      <sec id="sec-4-1">
        <title>3.1. Ranking Problem Formulation</title>
        <p>We focus on a process wherein a set ℛ of 𝑛 resources undergoes repetitive ranking guided by observable information arriving over time. For example, ℛ may contain students that need to be ranked based on predicted academic performance. The observable information, hereinafter referred to as batches, is seen as a stochastic process indexed by time, denoted as {𝑏ₜ}, 𝑡 = 1, 2, … Each batch 𝑏 is a random variable characterized by a domain ℬ and probability distribution 𝑃(𝑏). The ranking quality is characterized using a metric function defined in probabilistic terms, typically relying on expectations or event probabilities, namely:
𝜇 ∶ 𝜃, 𝑏 ↦ 𝜇[𝑏; 𝜃] (1)
Here, 𝜃 ∈ Θ is an action vector whose values can be adjusted to control the ranking procedure behavior. For example, the action vector might represent penalty or reward terms linked to sensitive groups. The vector 𝜇[𝑏; 𝜃] ∈ ℝᵐ denotes the values of 𝑚 metrics for a given batch 𝑏 and action vector 𝜃. In real-world scenarios, these metrics will always admit a finite sample formulation, often derived by substituting theoretical expectations with sample averages.</p>
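For concreteness, Equation (1) can be instantiated on a toy batch as follows; this is a hedged sketch with names of our own (metrics, batch, theta), not the paper's implementation, and the two metrics are illustrative stand-ins computed as sample averages:

```python
import numpy as np

def metrics(batch, theta):
    """Toy instance of Equation (1): mu[b; theta] in R^2, computed as
    sample statistics over the batch (illustrative names and metrics)."""
    modified = (1.0 - theta) * batch                  # scores after applying the action
    quality_drop = np.mean(np.abs(batch - modified))  # average score alteration
    dispersion = np.std(modified)                     # stand-in for a fairness metric
    return np.array([quality_drop, dispersion])

batch = np.array([1.0, 0.5])              # one batch of observed scores
mu = metrics(batch, theta=np.zeros(2))    # no action: scores untouched
```

With a zero action vector the quality-drop component is exactly zero, illustrating that the metric vector depends jointly on the batch and on the chosen action.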
        <p>Given that the ranking is performed for every batch, followed by an adjustment of the action vector, the ranking problem can be defined in terms of the tuple:
⟨{𝑏ₜ}ₜ, {𝜃ₜ}ₜ⟩ (2)
where 𝑏ₜ and 𝜃ₜ are the batch and action vector at time 𝑡, respectively. The value of the metrics at time 𝑡 is determined given 𝑏ₜ and 𝜃ₜ (Equation (1)).</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. FAiRDAS</title>
        <p>
          FAiRDAS [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] is a general framework that models long-term fairness as a dynamic system. It aims at stabilizing fairness and quality metrics below user-defined thresholds and allows users to define a target behavior approximated through a sequence of actions; for example, one may modify an input ranking by adjusting the scores for different protected groups. The approximation of the target behavior involves solving an optimization problem that minimizes the discrepancy between the target values for the metrics 𝜇̄ and the actual metrics 𝜇 determined by the actions 𝜃, namely:
𝜃∗(𝜇̄) = arg min 𝜃∈Θ ℒ(𝜃, 𝜇̄) (3)
        </p>
        <p>The solution method for Equation (3) relies on the action space characteristics and the chosen distance function. A possible choice for ℒ(𝜃, 𝜇̄) is the Euclidean distance:
ℒ(𝜃, 𝜇̄) = ‖𝜇[𝑏; 𝜃] − 𝜇̄‖₂² (4)
The exact evaluation of Equation (4) is often unfeasible, primarily due to the unknown distribution 𝑃(𝑏); thus, metric values 𝜇[𝑏; 𝜃] are typically replaced with a Monte Carlo approximation derived from historical data.</p>
        <p>FAiRDAS Grounding. To apply FAiRDAS effectively to a specific scenario, it is essential to delineate its core components: 1) the metrics of interest, which establish the criteria for evaluating fairness and ranking quality; 2) the corresponding threshold vectors; 3) the target dynamic system, which defines the ideal metrics behavior; 4) the set of actions, delineating how metrics can be manipulated to enhance ranking fairness and quality; 5) the distance function, defining the metric for assessing the effectiveness of the target system’s approximation; and finally, 6) the optimization method used to address Equation (3), which heavily depends on the chosen set of actions and distance function.</p>
      </sec>
      <sec id="sec-4-3">
        <title>3.3. Generalized Disparate Impact</title>
        <p>
          The Generalized Disparate Impact (GeDI) was first introduced in [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] as an extension of the
Disparate Impact Discrimination Index (DIDI) [23] to expand the availability of fairness metrics
for the fully continuous case. It features a mapping function 𝑓(𝑧) for the input attribute 𝑧 ∈ 𝒵, which enables accounting for non-linear correlations between the sensitive input and the target.
        </p>
        <p>This choice is inspired by the copula transformations of the Hirschfeld–Gebelein–Rényi (HGR) maximum correlation coefficient. However, one major difference between GeDI and HGR lies in the absence of a second mapping function on the output feature 𝑦 ∈ 𝒴, which prevents it from measuring non-functional dependencies between 𝑧 and 𝑦, akin to the DIDI. In addition to that, instead of leveraging the original definition of Pearson’s coefficient, the formulation of GeDI is slightly altered to make the indicator sensitive to scale variations. This ensures that reductions in unfairness are proportionally translated into diminished disparate impacts even if the shape of the unfair behavior is not modified, and also guarantees compatibility between GeDI and DIDI, since both metrics yield identical results when the input attribute is binary. Finally, the mapping function 𝑓(𝑧) is restricted to a linear combination over a polynomial kernel. This allows one to frame the computation as a linear optimization problem, thus keeping a low computational burden while retaining high approximation capabilities thanks to the inherent non-linearities. Additionally, it serves the dual purpose of reducing overfitting while maximizing user-configurability and interpretability of the metric.</p>
        <p>Formally, 𝑓(𝑧) is defined as the vector product V𝑧 ⋅ 𝛼, where V𝑧 is the polynomial expansion matrix built from the input vector 𝑧 – i.e., the Vandermonde matrix – while 𝛼 ∈ ℝᵈ is a coefficient vector that weighs the contribution of each polynomial order. GeDI is eventually computed as:
GeDI(𝑧, 𝑦; V𝑧) = | cov(V𝑧 ⋅ 𝛼, 𝑦) / var(V𝑧 ⋅ 𝛼) | s.t. ‖𝛼‖₁ = 1 (5)
where the constraint on the L1 norm of the coefficient vector is intended to replace the absence of the scaling factor on the output term. An important detail to note is that the order 𝑑 of the polynomial expansion is part of the specification of the indicator, as it appears in its notation, and aims to offer users a simple way to balance the bias–variance trade-off.</p>
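As a minimal numerical sketch of Equation (5) — assuming a given coefficient vector 𝛼 rather than reproducing the linear-optimization step that selects it, and with function and variable names of our own — one could write:

```python
import numpy as np

def gedi(z, y, alpha):
    """Evaluate the GeDI indicator of Equation (5) for a fixed coefficient
    vector alpha (a sketch: the full indicator also optimizes over alpha)."""
    d = len(alpha)
    # Vandermonde-style polynomial expansion of the protected attribute z
    V = np.stack([z ** k for k in range(1, d + 1)], axis=1)
    alpha = np.asarray(alpha, dtype=float)
    alpha = alpha / np.abs(alpha).sum()     # enforce the unit-L1-norm constraint
    f = V @ alpha                           # mapping f(z) = V_z . alpha
    cov = np.mean((f - f.mean()) * (y - y.mean()))
    return abs(cov / np.var(f))
```

With a binary attribute and 𝑑 = 1, the value reduces to the absolute regression slope of 𝑦 on 𝑧, consistent with the stated compatibility with the binary case.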
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. FAiRDAS with Continuous Attribute</title>
      <p>As a demonstration of our approach, we focus on ranking students by their predicted academic performance to identify those at risk of dropping out. The real-world data is provided by the Canarian Agency for Quality Assessment and Accreditation (ACCUEE)¹, which gathers information to assess the performance of their educational system through regular diagnostic reports. The data spans four academic years (2015–2019), including (1) the evaluation of students’ academic proficiency in subjects such as Mathematics, Spanish, and English, and (2) context questionnaires completed by students, school principals, families, and teachers to collect sociodemographic background information. In our test case, we rank students based on their Mathematics proficiency measured by a normalized score. The protected attribute considered is the Economic, Social, and Cultural Status (ESCS) [24], namely a continuous indicator that serves as a proxy for the socioeconomic status of students. Ensuring long-term stability is crucial in this context: although consistently high accuracy and fairness are desirable, it is essential to maintain stable actions over time to prevent negatively affecting students’ academic progress.</p>
      <p>
        In addressing the task at hand, we define two distinct groundings of the FAiRDAS framework. The first grounding, inspired by [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], adopts a set of discrete actions that requires a discretization of the sensitive attribute; conversely, the second grounding relies on a set of continuous actions that does not require any discretization. In both groundings, the continuous nature of the attribute is preserved when computing the fairness metric, as we rely on GeDI [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. The groundings we propose represent two potential approaches to addressing long-term fairness with continuous attributes and should not be seen as conflicting: in certain scenarios, depending on the desired level of interpretability and the overall system requirements, discrete actions may be necessary, while in others, continuous actions might be preferred. In the remainder of this section, we describe the two groundings in detail.
      </p>
      <sec id="sec-5-1">
        <title>4.1. Grounding with Discrete Actions</title>
        <p>
          Set of Actions. Inspired by [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], we design a set of discrete actions that directly modify the scores used by the ranking algorithm. Formally, given the discretization 𝑧 ∈ 𝒵 = {𝑧₁, 𝑧₂, …, 𝑧|𝒵|} of the continuous protected attribute ESCS, the actions are represented by a vector 𝜃 ∈ [0, 1]^|𝒵| with unit L1 norm. The modified score of a student with 𝑧 = 𝑧ᵢ is obtained by multiplying their original score by (1 − 𝜃ᵢ). Thus, the action vector components act as penalizing factors for over-represented sensitive groups in a batch, specifically affecting the scores of students in these protected groups. Higher values in the action vector (closer to 1) correspond to more significant penalization, whereas values closer to zero result in minimal modification to the student’s score. In our application, we discretize the continuous protected attribute ESCS into four levels; thus, the action vector 𝜃 has four components, each applying to the students belonging to the corresponding ESCS level.
¹Dataset: https://zenodo.org/records/11171863.
        </p>
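The score adjustment above can be sketched in a few lines (a hypothetical snippet: names such as apply_discrete_actions and levels are ours, not from the implementation):

```python
import numpy as np

def apply_discrete_actions(scores, levels, theta):
    # theta holds one penalizing factor per discretized ESCS level, with
    # values in [0, 1] and unit L1 norm; levels maps each student to a level
    theta = np.asarray(theta, dtype=float)
    return (1.0 - theta[levels]) * scores    # modified scores

scores = np.array([0.9, 0.8, 0.7, 0.6])
levels = np.array([0, 1, 2, 3])              # one student per ESCS level
theta  = np.array([0.4, 0.3, 0.2, 0.1])      # unit L1 norm
modified = apply_discrete_actions(scores, levels, theta)
```

Students in more heavily penalized levels see a proportionally larger reduction of their score, which may alter their position in the resulting ranking.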
        <p>
          Metrics of Interest. In our case study, we are interested in decreasing socioeconomic discrimination while preserving ranking accuracy; thus, we need 1) a fairness metric able to deal with the continuous protected attribute ESCS and 2) an accuracy metric to measure the drop in ranking performance due to the application of the action vector 𝜃. As a fairness metric, we rely on GeDI, whereas to assess the system’s drop in performance we measure the sum of absolute differences (SAE) between the original and modified scores, namely:
SAE(𝜃) = ∑ₖ₌₁ⁿ |𝑠ₖ − (1 − 𝜃ₖ)𝑠ₖ| = ∑ₖ₌₁ⁿ 𝜃ₖ𝑠ₖ (6)
where 𝑛 is the number of students in a batch, 𝜃ₖ ∈ [0, 1] is the component of the action vector corresponding to the ESCS level of the k-th student, and 𝑠ₖ ∈ [0, 1] is the score of the k-th student. It is worth noting that the two metrics of interest conflict: SAE drives 𝜃 towards zero to maintain the original ranking, whereas GeDI requires 𝜃ₖ &gt; 0 for some 𝑘 to mitigate discrimination. Given that the action vector must have a unit L1 norm, the trivial solution of 𝜃ₖ = 0 for all 𝑘, which would nullify both metrics, is not allowed.
        </p>
        <p>Target Dynamic System. As we aim to meet the metric thresholds while maintaining long-term stability, we define our desired behavior by means of the following dynamic system, which defines a smooth evolution of the target metrics toward the thresholds:
𝜇̄ₜ₊₁ = 𝜇̄ₜ − 𝜆 ⊙ (𝜇̄ₜ − 𝜏) (7)
where 𝜇̄ₜ represents the metric values in the target system, 𝜏 is the vector of thresholds, 𝜆 ∈ (0, 2), and ⊙ refers to the Hadamard (element-wise) product. Given that we are focusing on two metrics (GeDI and SAE), 𝜆 is a 2-dimensional vector, with its values determined through a preliminary experiment detailed in Section 5.</p>
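The target dynamics can be simulated directly; the sketch below uses our reconstruction of the update rule, with illustrative values for 𝜆, 𝜏, and the initial metrics:

```python
import numpy as np

lam = np.array([0.2, 0.2])      # eigenvalues, one per metric (GeDI and SAE)
tau = np.array([0.5, 0.5])      # metric thresholds
mu_bar = np.array([2.0, 1.0])   # initial target-metric values

# Smooth evolution toward the thresholds (stable for lam in (0, 2)):
for _ in range(100):
    mu_bar = mu_bar - lam * (mu_bar - tau)
```

Each step shrinks the gap to 𝜏 by a factor (1 − 𝜆), so the target metrics converge geometrically to the thresholds, which is why lower eigenvalues yield smoother, more stable trajectories.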
        <sec id="sec-5-1-1">
          <title>Distance Function and Optimization Method.</title>
          <p>We use Equation (4) – the Euclidean distance – as the distance function, optimizing it with the scipy implementation of the Sequential Least Squares Programming (SLSQP) optimizer.</p>
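For illustration, solving Equation (3) with the Euclidean distance of Equation (4) under the unit-L1-norm constraint can be set up as below; the metric pair is a toy stand-in for GeDI and SAE, and all names are our own, not from the FAiRDAS code:

```python
import numpy as np
from scipy.optimize import minimize

scores = np.array([0.9, 0.8, 0.7, 0.6])
levels = np.array([0, 1, 2, 3])          # ESCS level of each student
mu_target = np.array([0.5, 0.1])         # target metric values

def metrics(theta):
    modified = (1.0 - theta[levels]) * scores
    sae = np.sum(scores - modified)      # total score alteration
    dispersion = np.std(modified)        # toy stand-in for GeDI
    return np.array([sae, dispersion])

def loss(theta):                         # Equation (4): squared Euclidean distance
    return np.sum((metrics(theta) - mu_target) ** 2)

res = minimize(loss, x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0.0, 1.0)] * 4,
               constraints=[{"type": "eq", "fun": lambda t: t.sum() - 1.0}])
theta_star = res.x                       # action vector for the current batch
```

The equality constraint keeps the action vector on the unit L1 simplex, ruling out the trivial all-zero solution mentioned above.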
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Grounding with Continuous Actions</title>
        <p>
          Set of Actions. To avoid the discretization of the protected attribute ESCS, we define the set of possible actions as a family of polynomial functions 𝑓𝛽 parameterized by 𝛽 ∈ ℝ^(𝑑+1), where 𝑑 is the order of the polynomial². The functions map each value of ESCS to a real number, which is then used as a multiplicative discount factor to modify the student’s score. First, we rescale ESCS into the domain [0, 1]; then, we impose two constraints on the family of polynomial functions 𝑓𝛽, namely: 1) their integral must be unitary over the domain, in order to avoid degenerate solutions, and 2) their roots must lie outside the domain, in order to guarantee that each discount factor 𝑓𝛽(𝑧ₖ) is strictly positive for all 𝑧ₖ ∈ [0, 1]. These constraints enhance the interpretability of the mitigation strategy by simplifying the comparison between the selected polynomial functions. Additionally, they prevent the trivial solution of a constant function equal to zero, which would nullify the fairness metric.
        </p>
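A possible construction of such a family is sketched below (our own code: the unit-integral constraint is enforced by rescaling the coefficients, and strict positivity is only checked on a grid rather than through the roots):

```python
import numpy as np

def make_discount(beta):
    """Build a polynomial discount function f_beta on [0, 1] with unit
    integral; beta[i] is the coefficient of z**i (hypothetical layout)."""
    beta = np.asarray(beta, dtype=float)
    # Integral of sum_i beta_i z^i over [0, 1] equals sum_i beta_i / (i + 1)
    beta = beta / np.sum(beta / (np.arange(beta.size) + 1.0))
    f = lambda z: np.polyval(beta[::-1], z)   # polyval wants highest order first
    grid = np.linspace(0.0, 1.0, 101)
    assert np.all(f(grid) > 0.0), "discount factors must stay strictly positive"
    return f

f = make_discount([1.0, 0.5, 0.25])           # illustrative coefficients, d = 2
modified_score = f(0.3) * 0.8                 # discount a score for ESCS = 0.3
```

Rescaling by the analytic integral keeps the family comparable across batches, mirroring the unit-integral constraint described in the text.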
        <sec id="sec-5-2-1">
          <title>Metrics of Interest.</title>
          <p>As in Discrete Actions, we use GeDI as the fairness metric to deal with the continuous nature of ESCS. The ranking performance is measured by the mean squared error between the original and modified scores³:
MSE(𝛽) = (1/𝑛) ∑ₖ₌₁ⁿ (𝑠ₖ − 𝑓𝛽(𝑧ₖ)𝑠ₖ)²
where 𝑧ₖ is the ESCS value of the k-th student, and 𝑓𝛽(𝑧ₖ) is the weighting polynomial function evaluated on 𝑧ₖ. As in Discrete Actions, the two metrics of interest conflict, since MSE pushes 𝑓𝛽 to be close to the constant function 𝑓𝛽 = 1 while GeDI forces 𝑓𝛽 to deviate from it.</p>
        </sec>
        <sec id="sec-5-2-2">
          <title>Target Dynamic System.</title>
          <p>We rely on the same dynamic system as in Equation (7), as our goal is to stably evolve the two metrics of interest below the predefined thresholds.</p>
        </sec>
        <sec id="sec-5-2-3">
          <title>Distance Function and Optimization Method.</title>
          <p>As before, we use Equation (4) – the Euclidean distance – as the distance function. However, when optimizing it, we rely on the scipy implementation of the Trust Region Method (trust-constr), as it proved to be more reliable in finding a solution, although at the expense of a slightly higher computational time.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Experimental Results</title>
      <p>This section outlines the empirical evaluation performed on the case study described in Section 4. We first define the evaluation procedure and then report the numerical results⁴.</p>
      <sec id="sec-6-1">
        <title>5.1. Evaluation</title>
        <p>We compare each of the two groundings with a baseline method, focusing on the metrics of interest and on action smoothness (m), described below. For each approach, we report the mean and standard deviation of the metrics across batches to assess performance and stability over time.
²In our application, we choose 𝑑 = 4, as it provides a sufficient trade-off between the expressiveness of the function and the known numerical instability of polynomial kernels, along with their higher computational workload.
³We rely on MSE and not on SAE to avoid the computation of an absolute error.
⁴The source code to reproduce the experiments can be found at https://github.com/EleMisi/FairRanking under MIT license.</p>
        <p>Action Smoothness. To evaluate the stability of the chosen actions over time, we compute the cosine distance between actions performed on consecutive batches. For the Discrete Actions grounding, m is defined as the average cosine distance between consecutive action vectors:
m = (1/(𝑇 − 1)) ∑ₜ₌₂ᵀ [1 − (𝜃ₜ₋₁ ⋅ 𝜃ₜ)/(‖𝜃ₜ₋₁‖₂ ‖𝜃ₜ‖₂)]
where 𝑇 is the number of incoming batches and 𝜃ₜ is the action vector of the t-th batch. For the Continuous Actions grounding, m is computed by evaluating the weighting polynomial functions on a fine-grained discretization of the interval [0, 1]. Formally, it is defined as:
m = (1/(𝑇 − 1)) ∑ₜ₌₂ᵀ [1 − (𝐯ₜ₋₁ ⋅ 𝐯ₜ)/(‖𝐯ₜ₋₁‖₂ ‖𝐯ₜ‖₂)]
where 𝑇 is the number of incoming batches and 𝐯ₜ is the evaluation of the polynomial function chosen for the t-th batch.</p>
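Assuming the smoothness metric averages the cosine distance over consecutive pairs of actions — our reading of the formulation above — it can be computed as:

```python
import numpy as np

def action_smoothness(actions):
    """Average cosine distance between actions chosen on consecutive
    batches (one row per batch); lower values mean smoother behavior."""
    actions = np.asarray(actions, dtype=float)
    dists = []
    for prev, curr in zip(actions[:-1], actions[1:]):
        cos_sim = prev @ curr / (np.linalg.norm(prev) * np.linalg.norm(curr))
        dists.append(1.0 - cos_sim)
    return float(np.mean(dists))
```

The same function applies to both groundings: for discrete actions the rows are the action vectors themselves, while for continuous actions they are the polynomial evaluations on the discretized [0, 1] grid.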
        <p>Baseline Approach. We compare FAiRDAS in its Discrete Actiongsrounding against a baseline
approach that focuses on finding the optimal action vector that minimizes:</p>
      </sec>
      <sec id="sec-6-2">
        <title>5.2. Numerical Results</title>
        <p>As a preliminary step, we examine how the eigenvalues  of the FAiRDAS dynamic system
influence action smoothness to determine their optimal values for the experiments. We conduct
multiple runs with a fixed threshold while varying the eigenvalues (Table 1). As expected, based
on the theoretical characteristics of the dynamic state under consideration, lower eigenvalues
result in more stable actions in both groundings. For our experiments, we select the eigenvalues
corresponding to the inflection point of the action smoothness metric.</p>
        <p>Next, we compare the performance of FAiRDAS and the baseline under diferent pairs of
thresholds for the metrics of interest. For both Discrete Actionasnd Continuous Actionssettings,
the threshold pair {0, 2} represents an extreme scenario where fairness is prioritized over ranking
performance. Subsequently, we examine a loose threshold pair, {0.7, 0.7}, and a stringent pair,
{0.5, 0.5}. Finally, we investigate a pair of thresholds, {0.2, 0.2}, that cannot be reached.
ℒ () = max (GeDI(),  
)+ max (SAE(),  
)
where   and   are the metrics’ thresholds. The action vector  is the same described
in Section 4.1, and it is optimized via the SLSQP method, as for FAiRDAS. For the Continuous
Actionsgrounding, the baseline approach searches for the optimal polynomial function   that
satisfies the constraints described in Section 4.2 and minimizes:
ℒ () = max (GeDI(),  
)+ max (MSE(),  
).
with   and   are the metrics’ thresholds. As for FAiRDAS, we rely on the Trust Region
Methods to tackle the optimization problem.
Mean and standard deviation of the action smoothness computed over the batches for FAiRDAS. We
analyse the results for 5 diferent eigenvalues (  ) with a fixed threshold pair {0.5, 0.5}. For each eigenvalue,
we run eight experiments. We select  = 0.2 as the elbow of the curve for both groundings (in bold).

metrics throughout 100 batches for both baseline and FAiRDAS approach in Discrete Actions
setting. Across all threshold pairs, the two methods achieve comparable levels of the metrics of
interest (GeDI and SAE). However, the baseline exhibits notably higher levels of instability in
the chosen actions (higher m</p>
        <p>) compared to FAiRDAS, especially with stringent thresholds.</p>
        <p>This finding confirms the ability of FAiRDAS to maintain both performance efectiveness and
fairness over time while also avoiding drastic actions that may raise ethical concerns. The
increased stability of FAiRDAS approach is demonstrated in Figure 1, which shows the action
vectors selected by both approaches in an experiment with stringent thresholds. This figure
provides a component-wise comparison of the baseline and FAiRDAS action vectors across all
100 batches. As detailed in Section 4.1, each component of the action vector affects students
from the corresponding ESCS level and acts as a penalizing factor on their scores, potentially
altering their ranking. Higher values indicate more significant penalization, while values near
zero mean the student’s score remains untouched. The baseline method tends to favor rapid and
drastic interventions, indicated by 1) the sudden color changes between batches and 2) action
components close to one (lighter color). In contrast, FAiRDAS exhibits a more moderated and
balanced behavior, with action vectors evolving smoothly over the experiment (gradual color
changes along the x-axis) and similar penalization across groups (uniform color along the y-axis).
</p>
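The per-group penalization just described can be sketched as follows. The multiplicative form `score * (1 - action[level])` is an illustrative assumption chosen to match the description: a component near one suppresses the score, a component near zero leaves it untouched.

```python
def penalized_ranking(students, action):
    """Rank students after applying per-ESCS-group penalties.

    students: list of (name, escs_level, score); escs_level indexes the
    action vector. action[level] close to 1 penalizes heavily, close
    to 0 leaves the score untouched (hypothetical multiplicative form).
    """
    adjusted = [
        (name, score * (1.0 - action[level]))
        for name, level, score in students
    ]
    return sorted(adjusted, key=lambda s: s[1], reverse=True)

students = [("ann", 2, 0.9), ("bob", 0, 0.8), ("eve", 1, 0.7)]
# The zero action leaves the original ranking intact...
print([n for n, _ in penalized_ranking(students, [0.0, 0.0, 0.0])])  # ['ann', 'bob', 'eve']
# ...while penalizing ESCS level 2 demotes its students.
print([n for n, _ in penalized_ranking(students, [0.0, 0.0, 0.5])])  # ['bob', 'eve', 'ann']
```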
        <p>
          Results with Continuous Actions. In Table 3 we report the mean and standard deviation of
the metrics over 100 batches for both the baseline and the FAiRDAS approach under the Continuous
Actions setting. As with Discrete Actions, the numerical results confirm FAiRDAS’s capability to maintain
both performance effectiveness and fairness over time while avoiding drastic actions. FAiRDAS
and the baseline achieve similar levels for the metrics of interest (GeDI and MSE) across all
thresholds, but FAiRDAS reaches lower values of action smoothness (m). This result is
exemplified in Figure 2, where we present an example of the polynomial functions selected by
FAiRDAS and the baseline throughout 100 batches. Each column displays the function chosen
for the corresponding batch, evaluated over the ESCS domain [0, 1] (y-axis). As described in
Section 4.2, the functions influence the ranking based on the students’ ESCS value, serving as a
penalizer on their scores: lower values correspond to more substantial penalization, whereas
values close to one indicate that the student’s score is unaffected. As for Discrete Actions, we
observe that the baseline method tends to favor rapid and drastic actions, as indicated by 1) the
abrupt color changes between batches and 2) the high penalization values (higher contrast).
Conversely, FAiRDAS demonstrates a more moderated and balanced behaviour, with polynomial
functions evolving smoothly throughout the batches (gradual color changes along the x-axis) and
more consistent penalization across different ESCS values (smooth color changes along the y-axis).
        </p>
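In the continuous grounding, the same mechanism is carried by a function over the ESCS domain instead of a per-group vector. A rough sketch, assuming the polynomial acts as a multiplicative penalizer as described (values near one leave the score unaffected, lower values shrink it):

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial with coefficients given in increasing degree."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def apply_penalizer(students, coeffs):
    """Rank students after a function-valued action.

    students: list of (name, escs, score) with escs in [0, 1]. The
    polynomial multiplies the score (an illustrative assumption):
    f(escs) = 1 leaves it untouched, lower values penalize more.
    """
    adjusted = [
        (name, score * poly_eval(coeffs, escs))
        for name, escs, score in students
    ]
    return sorted(adjusted, key=lambda s: s[1], reverse=True)

students = [("ann", 0.9, 0.9), ("bob", 0.1, 0.8)]
# The identity action f(x) = 1 keeps the original ranking...
print([n for n, _ in apply_penalizer(students, [1.0])])        # ['ann', 'bob']
# ...while f(x) = 1 - 0.4x penalizes high-ESCS students and flips it.
print([n for n, _ in apply_penalizer(students, [1.0, -0.4])])  # ['bob', 'ann']
```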
      </sec>
    </sec>
    <sec id="sec-7">
      <title>6. Conclusion</title>
      <p>
        We introduced a novel approach that integrates state-of-the-art techniques to address
long-term fairness in the presence of continuous protected attributes. This is achieved by pairing
FAiRDAS [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], a framework aimed at ensuring long-term fairness in ranking systems while
preserving stable actions, with the Generalized Disparate Impact (GeDI) indicator [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], a fairness
metric specifically designed to handle continuous protected attributes. Our contribution includes
the definition of two possible sets of actions to handle continuous attributes. The first set
prioritizes interpretability but introduces discretization, whereas the second set maintains the
continuity of actions at the expense of interpretability. The selection of the set of actions to
apply depends on the specific requirements and constraints of the application context. We
validated our methodology through a case study in the domain of AI and Education, where we
compared the performance and stability of FAiRDAS against a baseline method. Our analysis
demonstrates that the integration of FAiRDAS and GeDI with our defined actions presents a
robust solution for addressing long-term fairness under continuous protected attributes.
      </p>
      <p>To the best of our knowledge, this is the first work that tackles long-term fairness and stability
in ranking with continuous attributes. Thus, we believe that it could lay the groundwork for
further research and applications in several domains where handling continuous attributes and
stability are of key importance, yet currently understudied.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgements</title>
      <p>The work has been partially supported by the AEQUITAS project funded by the European
Union’s Horizon Europe Programme (Grant Agreement No. 101070363), and by PNRR - M4C2
Investimento 1.3, Partenariato Esteso PE00000013 - “FAIR - Future Artificial Intelligence
Research” - Spoke 8 “Pervasive AI”, funded by the European Commission under the NextGenerationEU
programme. Disclaimer: This paper reflects only the authors’ views. The European Commission is not
responsible for any use that may be made of the information it contains.
</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mehrabi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Morstatter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lerman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Galstyan</surname>
          </string-name>
          ,
          <article-title>A survey on bias and fairness in machine learning</article-title>
          , volume
          <volume>54</volume>
          , ACM New York, NY, USA,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zehlike</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stoyanovich</surname>
          </string-name>
          ,
          <article-title>Fairness in ranking: A survey</article-title>
          ,
          <source>arXiv preprint arXiv:2103.14000</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L. T.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Rolf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Simchowitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <article-title>Delayed impact of fair machine learning</article-title>
          , in: J. G. Dy, A. Krause (Eds.),
          <source>Proceedings of the 35th International Conference on Machine Learning</source>
          ,
          <string-name>
            <surname>ICML</surname>
          </string-name>
          <year>2018</year>
          , Stockholmsmässan, Stockholm, Sweden,
          <source>July 10-15</source>
          ,
          <year>2018</year>
          , volume
          <volume>80</volume>
          <source>of Proceedings of Machine Learning Research</source>, PMLR
          ,
          <year>2018</year>
          , pp.
          <fpage>3156</fpage>
          -
          <lpage>3164</lpage>
          . URL: http://proceedings.mlr.press/v80/liu18c.html.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mehrabi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Morstatter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lerman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Galstyan</surname>
          </string-name>
          ,
          <article-title>A survey on bias and fairness in machine learning</article-title>
          ,
          <source>ACM Comput. Surv</source>
          .
          <volume>54</volume>
          (
          <year>2021</year>
          ). URL: https://doi.org/10.1145/3457607. doi:10.1145/3457607.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Kamiran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Calders</surname>
          </string-name>
          ,
          <article-title>Data preprocessing techniques for classification without discrimination</article-title>
          ,
          <source>Knowledge and Information Systems</source>
          <volume>33</volume>
          (
          <year>2011</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          . URL: http://dx.doi.org/10.1007/s10115-011-0463-8. doi:10.1007/s10115-011-0463-8.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Calmon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vinzamuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Natesan Ramamurthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Varshney</surname>
          </string-name>
          ,
          <article-title>Optimized pre-processing for discrimination prevention</article-title>
          , in: I. Guyon,
          <string-name>
            <given-names>U. V.</given-names>
            <surname>Luxburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fergus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vishwanathan</surname>
          </string-name>
          , R. Garnett (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>30</volume>
          ,
          <string-name>
            <surname>Curran</surname>
            <given-names>Associates</given-names>
          </string-name>
          , Inc.,
          <year>2017</year>
          . URL: https://proceedings. neurips.cc/paper/2017/file/9a49a25d845a483fae4be7e341368e36-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L. E.</given-names>
            <surname>Celis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Keswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vishnoi</surname>
          </string-name>
          ,
          <article-title>Data preprocessing to mitigate bias: A maximum entropy based approach</article-title>
          , in: International conference on machine learning,
          <source>PMLR</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1349</fpage>
          -
          <lpage>1359</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kamishima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Akaho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Asoh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sakuma</surname>
          </string-name>
          ,
          <article-title>Fairness-aware classifier with prejudice remover regularizer</article-title>
          , in: P. A.
          <string-name>
            <surname>Flach</surname>
          </string-name>
          , T. De Bie, N. Cristianini (Eds.),
          <source>Machine Learning and Knowledge Discovery in Databases</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2012</year>
          , pp.
          <fpage>35</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Komiyama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Takeda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Honda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shimao</surname>
          </string-name>
          ,
          <article-title>Nonconvex optimization for regression with fairness constraints</article-title>
          , in: J. Dy, A. Krause (Eds.),
          <source>Proceedings of the 35th International Conference on Machine Learning</source>
          , volume
          <volume>80</volume>
          <source>of Proceedings of Machine Learning Research</source>, PMLR
          ,
          <year>2018</year>
          , pp.
          <fpage>2737</fpage>
          -
          <lpage>2746</lpage>
          . URL: https://proceedings.mlr.press/v80/komiyama18a.html.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E. Y.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <article-title>Policy optimization with advantage regularization for long-term fairness in decision systems</article-title>
          ,
          <source>arXiv preprint arXiv:2210.12546</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ge</surname>
          </string-name>
          , S. Liu,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ou</surname>
          </string-name>
          , et al.,
          <article-title>Towards long-term fairness in recommendation</article-title>
          ,
          <source>in: Proceedings of the 14th ACM international conference on web search and data mining</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>445</fpage>
          -
          <lpage>453</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Calders</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Verwer</surname>
          </string-name>
          ,
          <article-title>Three naive bayes approaches for discrimination-free classification</article-title>
          ,
          <source>Data Min. Knowl. Discov</source>
          .
          <volume>21</volume>
          (
          <year>2010</year>
          )
          <fpage>277</fpage>
          -
          <lpage>292</lpage>
          . doi:10.1007/s10618-010-0190-x.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Price</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Price</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Srebro</surname>
          </string-name>
          ,
          <article-title>Equality of opportunity in supervised learning</article-title>
          , in: D.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Sugiyama</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          <string-name>
            <surname>Luxburg</surname>
            ,
            <given-names>I. Guyon</given-names>
          </string-name>
          , R. Garnett (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>29</volume>
          ,
          <string-name>
            <surname>Curran</surname>
            <given-names>Associates</given-names>
          </string-name>
          , Inc.,
          <year>2016</year>
          . URL: https: //proceedings.neurips.cc/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Xian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Fair and optimal classification via post-processing</article-title>
          ,
          <source>in: International Conference on Machine Learning, PMLR</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>37977</fpage>
          -
          <lpage>38012</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>E.</given-names>
            <surname>Misino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Calegari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lombardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Milano</surname>
          </string-name>
          ,
          <article-title>FAiRDAS: Fairness-aware ranking as dynamic abstract system</article-title>
          , in: R.
          <string-name>
            <surname>Calegari</surname>
            ,
            <given-names>A. A.</given-names>
          </string-name>
          <string-name>
            <surname>Tubella</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>González-Castañé</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Dignum</surname>
          </string-name>
          , M. Milano (Eds.),
          <source>Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI</source>
          <year>2023</year>
          ), Kraków, Poland,
          <year>October 1st</year>
          ,
          <year>2023</year>
          , volume
          <volume>3523</volume>
          <source>of CEUR Workshop Proceedings</source>
          , CEUR-WS.org,
          <year>2023</year>
          . URL: https://ceur-ws.org/Vol-3523/paper5.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zehlike</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bonchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Castillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hajian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Megahed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Baeza-Yates</surname>
          </string-name>
          ,
          <article-title>Fa* ir: A fair top-k ranking algorithm</article-title>
          ,
          <source>in: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1569</fpage>
          -
          <lpage>1578</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Geyik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ambler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kenthapadi</surname>
          </string-name>
          ,
          <article-title>Fairness-aware ranking in search &amp; recommendation systems with application to linkedin talent search</article-title>
          ,
          <source>in: Proceedings of the 25th acm sigkdd international conference on knowledge discovery &amp; data mining</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>2221</fpage>
          -
          <lpage>2231</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Joachims</surname>
          </string-name>
          ,
          <article-title>Fairness of exposure in rankings</article-title>
          ,
          <source>in: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>2219</fpage>
          -
          <lpage>2228</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Biega</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Gummadi</surname>
          </string-name>
          , G. Weikum,
          <article-title>Equity of attention: Amortizing individual fairness in rankings</article-title>
          ,
          <source>in: The 41st international acm sigir conference on research &amp; development in information retrieval</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>405</fpage>
          -
          <lpage>414</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.</given-names>
            <surname>Mary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Calauzènes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. E.</given-names>
            <surname>Karoui</surname>
          </string-name>
          ,
          <article-title>Fairness-aware learning for continuous attributes and treatments</article-title>
          , in: K. Chaudhuri, R. Salakhutdinov (Eds.),
          <source>Proceedings of the 36th International Conference on Machine Learning</source>
          , volume
          <volume>97</volume>
          <source>of Proceedings of Machine Learning Research</source>, PMLR
          ,
          <year>2019</year>
          , pp.
          <fpage>4382</fpage>
          -
          <lpage>4391</lpage>
          . URL: https://proceedings.mlr.press/v97/mary19a.html.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>V.</given-names>
            <surname>Grari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lamprier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Detyniecki</surname>
          </string-name>
          ,
          <article-title>Fairness-aware neural rényi minimization for continuous features</article-title>
          , in: C.
          <string-name>
            <surname>Bessiere</surname>
          </string-name>
          (Ed.),
          <source>Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, International Joint Conferences on Artificial Intelligence Organization</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>2262</fpage>
          -
          <lpage>2268</lpage>
          . URL: https://doi.org/10.24963/ijcai.2020/313. doi:10.24963/ijcai.2020/313, main track.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giuliani</surname>
          </string-name>
          , E. Misino,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lombardi</surname>
          </string-name>
          ,
          <article-title>Generalized disparate impact for configurable fairness solutions in ML</article-title>
          , in: A.
          <string-name>
            <surname>Krause</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Brunskill</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Cho</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Engelhardt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Sabato</surname>
          </string-name>
          , J. Scarlett (Eds.),
          <source>Proceedings of the 40th International Conference on Machine Learning</source>
          , volume
          <volume>202</volume>
          <source>of Proceedings of Machine Learning Research</source>, PMLR
          ,
          <year>2023</year>
          , pp.
          <fpage>11443</fpage>
          -
          <lpage>11458</lpage>
          . URL: https://proceedings.mlr.press/v202/giuliani23a.html.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] S. Aghaei, M. J. Azizi, P. Vayanos, Learning optimal and fair decision trees for non-discriminative decision-making, in: The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, AAAI Press, 2019, pp. 1418–1426. URL: https://doi.org/10.1609/aaai.v33i01.33011418. doi:10.1609/aaai.v33i01.33011418.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] F. Avvisati, The measure of socio-economic status in PISA: a review and some suggested improvements, Large-scale Assessments in Education 8 (2020). URL: http://dx.doi.org/10.1186/s40536-020-00086-x. doi:10.1186/s40536-020-00086-x.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>