<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Long-Term Fairness Strategies in Ranking with Continuous Sensitive Attributes</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Luca</forename><surname>Giuliani</surname></persName>
							<email>luca.giuliani13@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Alma Mater Studiorum-Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Eleonora</forename><surname>Misino</surname></persName>
							<email>eleonora.misino2@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Alma Mater Studiorum-Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roberta</forename><surname>Calegari</surname></persName>
							<email>roberta.calegari@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Alma Mater Studiorum-Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michele</forename><surname>Lombardi</surname></persName>
							<email>michele.lombardi2@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Alma Mater Studiorum-Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Long-Term Fairness Strategies in Ranking with Continuous Sensitive Attributes</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">447E68AE5301A5A7E1CC0B1712947458</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>fair AI</term>
					<term>fair ranking</term>
					<term>long-term fairness</term>
					<term>continuous sensitive attributes</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recent research has made significant progress in addressing fair ranking and fairness with continuous sensitive attributes as separate challenges. However, their intersection remains underexplored, although crucial for guaranteeing a wider applicability of fairness requirements. In many real-world contexts, sensitive attributes such as age, weight, income, or degree of disability are measured on a continuous scale rather than in discrete categories. Addressing the continuous nature of these attributes is essential for ensuring effective fairness in such scenarios. This work aims to fill the gap in the existing literature by proposing a novel methodology that integrates state-of-the-art techniques to address long-term fairness in the presence of continuous protected attributes. We demonstrate the effectiveness and flexibility of our approach using real-world data.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Ranking in AI is increasingly used across various sectors to enhance decision-making processes, spanning from credit scoring and hiring to education and other high-stakes domains. For instance, in credit scoring, AI models evaluate creditworthiness by analyzing vast amounts of financial data; in hiring, AI ranks candidates by assessing resumes and predicting job fit; and educational programs leverage AI to rank students' performance, providing personalized learning experiences.</p><p>The social and ethical implications of these systems have recently gained attention both in research and application domains, particularly concerning their potential to perpetuate or accentuate discrimination. Several approaches and metrics have been proposed to enforce and quantify adherence to fairness requirements, ensuring that trained models do not exhibit discrimination against minorities or individuals <ref type="bibr" target="#b0">[1]</ref>.</p><p>In all these scenarios, it is possible to mitigate discrimination by adjusting the ranking to promote fairness criteria (such as equal opportunity or statistical parity) across sensitive groups or individuals. Various algorithmic mitigation strategies have been proposed in the literature <ref type="bibr" target="#b1">[2]</ref>; however, these approaches often focus on a single ranking, as if the AI system produced only one ranking throughout its lifetime, failing to consider that the ranking process is repeated over time. Considering the lifespan of an AI system, it becomes essential to ensure that the system can be deemed fair across all rankings produced, ensuring what is known as long-term fairness <ref type="bibr" target="#b2">[3]</ref>. The study and assurance of long-term fairness are necessary to guarantee consistent and unbiased treatment across multiple iterations of the AI system, ensuring that biases do not accumulate or shift over time. 
If fairness is only considered for individual rankings, it may lead to temporary fairness that can fluctuate, resulting in long-term disparities. Furthermore, current approaches to fair ranking typically only work with categorical sensitive attributes. However, in various real-world scenarios, sensitive attributes like income or degree of disability are continuous rather than discrete. Consequently, effectively managing their continuous nature is necessary for assessing and ensuring fairness. While there are studies focusing on fairness concerning continuous sensitive attributes, they do not intersect with existing work on fair ranking.</p><p>This work aims to fill the gap in the existing literature by proposing a methodology that integrates state-of-the-art techniques to address long-term fairness in the presence of continuous protected attributes.</p><p>The paper is structured as follows. Section 2 places our work within the existing literature on fair machine learning, focusing on applications of fairness in ranking and fairness with continuous sensitive attributes. In Section 3, we provide the essential technical background required to understand the details and significance of our approach, as it incorporates different state-of-the-art techniques and frameworks. Following this, we describe the specific aspects of our contribution in Section 4, where we present our methodology grounded on a specific use case. We outline the main results of our empirical evaluation in Section 5. Finally, we summarize our findings in Section 6 and highlight potential directions for future investigation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>To the best of our knowledge, no previous work has addressed the task of fair ranking with continuous sensitive attributes. Still, there has been a significant growth in publications over the last decade in the two distinct fields, both stemming from the broader domain of fair machine learning. Hereby, we summarize the key developments in these areas as a means to effectively frame our work within the current state of the art.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Fair Machine Learning</head><p>Mehrabi et al. <ref type="bibr" target="#b3">[4]</ref> categorize fair machine learning methods into three major groups, namely pre-processing, in-processing, and post-processing. This categorization is based on the timing of debiasing interventions. For example, pre-processing methods can be applicable when there is an opportunity to alter training data <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>. In contrast, in-processing methods are used when the inherent training procedure of the machine learning model is modified, either by loss regularizers or other types of constraint injection <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>. Lastly, post-processing methods are employed when the algorithm must operate on an already trained model, treating it as a black box and reassigning output labels through a specific function in the post-processing stage <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14]</ref>. Our research aligns with the third category, as we build on the work by <ref type="bibr" target="#b14">[15]</ref> regarding the FAiRDAS framework, which aims to ensure sustained fairness in ranking systems by post-processing the results produced by the learned model in successive batches, independently of the characteristics of the model itself.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Fairness in Ranking Applications</head><p>In their survey <ref type="bibr" target="#b1">[2]</ref>, Zehlike et al. distinguish between two types of fair ranking algorithms: (1) score-based methods, which use a predefined ranking function and allow the bias mitigation step to intervene on either the initial scores of the candidates, the ranking function 𝑓, or the final ranked outcome, and (2) supervised learning-to-rank methods, which train the ranking function on data and can thus be further categorized as in Section 2.1. Interestingly, the authors note that post-processing methods for learning-to-rank handle fairness constraints similarly to score-based methods. Under this lens, FAiRDAS can be seen as both a learning-to-rank application imposing constraints on model-predicted scores, and a score-based method enforcing fairness by adjusting original scores, whether generated by a model or given as gold standards.</p><p>Most fair ranking approaches employ top-𝑘 proportional representations as a fairness metric. Namely, they try to ensure an equal representation of protected groups in the first 𝑘 candidates. For example, among the post-processing fairness methods for learning-to-rank, <ref type="bibr" target="#b15">[16]</ref> and <ref type="bibr" target="#b16">[17]</ref> adjust the positions of the candidates in the final ranking to meet certain minimal (and optionally maximal) requirements per subgroup. These methods treat top-𝑘 rankings as sets, hence disregarding the position of candidates. In contrast, <ref type="bibr" target="#b17">[18]</ref> and <ref type="bibr" target="#b18">[19]</ref> take the position into account by addressing the visibility bias rather than the score itself; in fact, the exposure of candidates has been shown to decrease geometrically with respect to their ranking position, as defined by their score. 
Moreover, the latter work proposes a methodology to dynamically change rankings for the same query to achieve equal attention over time, thus inherently incorporating long-term fairness effects within their framework, although at a query level only. For a more comprehensive overview of bias mitigation in ranking at different stages in the pipeline and using different methods, we refer the reader to the original survey.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Fairness with Continuous Protected Attributes</head><p>In the last few years, some works have proposed new metrics and computational methodologies to address continuous sensitive attributes in fairness enforcement tasks. Among them, <ref type="bibr" target="#b19">[20]</ref> adopts for the first time the Hirschfeld-Gebelein-Rényi (HGR) correlation coefficient as a way to enforce model debiasing over continuous protected features. This metric, also referred to as the maximal correlation coefficient, is defined as the highest Pearson correlation that can be obtained by transforming random variables into nonlinear spaces through copula transformations. For this reason, its computation poses significant computational difficulties, yet various simplifications and approximations have been developed over recent years. Specifically, <ref type="bibr" target="#b19">[20]</ref> introduced a differentiable way to calculate a lower bound of the metric using kernel-density estimation techniques, thus paving the way for its application as a loss regularizer in gradient-based learning algorithms. That work was subsequently improved by <ref type="bibr" target="#b20">[21]</ref>, whose novel computational technique based on two adversarial neural networks was shown to outperform the former.</p><p>A parallel effort was undertaken by <ref type="bibr" target="#b21">[22]</ref>, who introduced an indicator named Generalized Disparate Impact (GeDI) by slightly modifying the formulation of HGR to better adhere to the legal concept of "Disparate Impact". Disparate impact arises when a seemingly impartial practice adversely affects a protected group, and a first method to measure it in both regression and classification scenarios was introduced by <ref type="bibr" target="#b22">[23]</ref>, who proposed a novel fairness metric called Disparate Impact Discrimination Index (DIDI). 
The Generalized Disparate Impact indicator straightforwardly extends this metric to the case of continuous inputs where, as usual, higher GeDI values signify a greater disparity concerning the chosen protected attribute.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Background</head><p>In this section, we provide a formalization of the ranking problem general enough to model our case study and other similar applications. Next, we introduce FAiRDAS <ref type="bibr" target="#b14">[15]</ref>, a general framework designed to address long-term fairness in ranking systems. Finally, we describe the Generalized Disparate Impact (GeDI) indicator <ref type="bibr" target="#b21">[22]</ref>, which we utilize to effectively handle continuous protected attributes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Ranking Problem Formulation</head><p>We focus on a process wherein a set ℛ of 𝑚 resources undergoes repetitive ranking guided by observable information arriving over time. For example, ℛ may contain students that need to be ranked based on predicted academic performance. The observable information, hereinafter referred to as batches, is seen as a stochastic process indexed by time, denoted as {𝑋 𝑡 } ∞ 𝑡=1 . Each batch 𝑋 𝑡 is a random variable characterized by a domain 𝒳 and probability distribution 𝑃(𝑋 𝑡 ). The ranking quality is characterized using a metric function defined in probabilistic terms, typically relying on expectations or event probabilities, namely:</p><formula xml:id="formula_0">𝑦 ∶ 𝑋 , 𝜃 ↦ 𝑦[𝑋 ; 𝜃]<label>(1)</label></formula><p>Here, 𝜃 ∈ Θ is an action vector whose values can be adjusted to control the ranking procedure behavior. For example, the action vector might represent penalty or reward terms linked to sensitive groups. The vector 𝑦[𝑋 ; 𝜃] ∈ ℝ 𝑛 denotes the values of 𝑛 metrics for a given batch 𝑋 and action vector 𝜃. In real-world scenarios, these metrics will always admit a finite sample formulation, often derived by substituting theoretical expectations with sample averages. Given that the ranking is performed for every batch, followed by an adjustment of the action vector, the ranking problem can be defined in terms of the tuple:</p><formula xml:id="formula_1">⟨{𝑋 𝑡 } ∞ 𝑡=1 , {𝜃 𝑡 } ∞ 𝑡=1 ⟩<label>(2)</label></formula><p>where 𝑋 𝑡 and 𝜃 𝑡 are the batch and action vector at time 𝑡 respectively. The value of the metrics at time 𝑡 is determined given 𝑋 𝑡 and 𝜃 𝑡 (Equation ( <ref type="formula" target="#formula_0">1</ref>)).</p></div>
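To make the batch/metric interplay of Equations (1) and (2) concrete, the following is a minimal Python sketch. Everything here is illustrative rather than taken from the paper's implementation: the function name, the choice of exactly two metrics (a crude covariance-based disparity proxy and the mean score change), and the penalty scheme are our own assumptions for demonstration.

```python
import numpy as np

def ranking_metrics(batch_scores, theta, escs_levels):
    """Sample-based evaluation of y[X; theta] (Eq. 1) for one batch.
    Returns n = 2 illustrative metric values: a disparity proxy and
    the mean score change induced by the action vector theta."""
    # Apply the penalizing action: each candidate's score is discounted
    # according to the action component of their sensitive-group level.
    modified = batch_scores * (1.0 - theta[escs_levels])
    # Metric 1: naive disparity proxy (covariance between group and score).
    disparity = abs(np.cov(escs_levels, modified)[0, 1])
    # Metric 2: mean absolute score change (accuracy proxy).
    change = np.mean(np.abs(batch_scores - modified))
    return np.array([disparity, change])
```

In the sequential setting of Equation (2), this function would be evaluated once per incoming batch, after which the action vector is adjusted.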
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">FAiRDAS</head><p>FAiRDAS <ref type="bibr" target="#b14">[15]</ref> is a general framework that models long-term fairness as a dynamic system. It aims at stabilizing fairness and quality metrics below user-defined thresholds and allows users to define a target behavior approximated through a sequence of actions; for example, one may modify an input ranking by adjusting the scores for different protected groups. The approximation of the target behavior involves solving an optimization problem that minimizes the discrepancy between the target values for the metrics ȳ 𝑡 and the actual metrics 𝑦 𝑡 determined by the actions 𝜃 𝑡 , namely:</p><formula xml:id="formula_2">𝜃 * ( ȳ 𝑡 ) = arg min 𝜃 𝑡 ∈Θ ℒ (𝜃 𝑡 , ȳ 𝑡 )<label>(3)</label></formula><p>The solution method for Equation (3) relies on the action space characteristics and the chosen distance function. A possible choice for ℒ (𝜃 𝑡 , ȳ 𝑡 ) is the Euclidean distance:</p><formula xml:id="formula_3">ℒ (𝜃 𝑡 , ȳ 𝑡 ) = ‖𝑦[𝑋 𝑡 ; 𝜃 𝑡 ] − ȳ 𝑡 ‖ 2 2 .<label>(4)</label></formula><p>The exact evaluation of Equation ( <ref type="formula" target="#formula_3">4</ref>) is often unfeasible, primarily due to the unknown distribution 𝑃(𝑋 𝑡 ); thus, the metric values 𝑦[𝑋 𝑡 ; 𝜃 𝑡 ] are typically replaced with a Monte Carlo approximation derived from historical data.</p><p>FAiRDAS Grounding. 
To apply FAiRDAS effectively to a specific scenario, it is essential to delineate its core components: 1) the metrics of interest, which establish the criteria for evaluating fairness and ranking quality; 2) the corresponding threshold vectors; 3) the target dynamic system, which defines the ideal behavior of the metrics; 4) the set of actions, delineating how metrics can be manipulated to enhance ranking fairness and quality; 5) the distance function, defining the metric for assessing the effectiveness of the target system's approximation; and finally, 6) the optimization methods used to address Equation ( <ref type="formula" target="#formula_2">3</ref>), which heavily depend on the chosen set of actions and distance function.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Generalized Disparate Impact</head><p>The Generalized Disparate Impact (GeDI) was first introduced in <ref type="bibr" target="#b21">[22]</ref> as an extension of the Disparate Impact Discrimination Index (DIDI) <ref type="bibr" target="#b22">[23]</ref> to expand the availability of fairness metrics for the fully continuous case. It features a mapping function 𝑓 (𝑥) for the input attribute 𝑥 ∈ 𝒳, which enables accounting for non-linear correlations between the sensitive input and the target. This choice is inspired by the copula transformations of the Hirschfeld-Gebelein-Rényi (HGR) maximum correlation coefficient. However, one major difference between GeDI and HGR lies in the absence of a second mapping function on the output feature 𝑦 ∈ 𝒴, which prevents it from measuring non-functional dependencies of the type 𝑦 ↦ 𝑥, akin to the DIDI. In addition to that, instead of leveraging the original definition of Pearson's coefficient, the formulation of GeDI is slightly altered to make the indicator sensitive to scale variations. This ensures that reductions in unfairness are proportionally translated to diminished disparate impacts even if the shape of the unfair behavior is not modified, and also guarantees compatibility between GeDI and DIDI since both metrics yield identical results when the input attribute is binary. Finally, the mapping function 𝑓 (𝑥) is restricted to a linear combination over a polynomial kernel. This allows one to frame the computation as a linear optimization problem, thus keeping a low computational burden while retaining high approximation capabilities thanks to the inherent non-linearities. 
Additionally, it serves the dual purpose of reducing overfitting while maximizing user-configurability and interpretability of the metric.</p><p>Formally, 𝑓 (𝑥) is defined as the vector product V 𝑘 𝑥 ⋅ 𝛼, where V 𝑘 𝑥 is the polynomial expansion matrix built from the input vector 𝑥 (i.e., the Vandermonde matrix), while 𝛼 ∈ ℝ 𝑘 is a coefficient vector that weighs the contribution of each polynomial order. GeDI is eventually computed as:</p><formula xml:id="formula_4">GeDI(𝑥, 𝑦; V 𝑘 ) = max 𝛼 | cov(V 𝑘 𝑥 ⋅ 𝛼, 𝑦) / var(V 𝑘 𝑥 ⋅ 𝛼) | s.t. ‖𝛼‖ 1 = 1<label>(5)</label></formula><p>where the constraint on the L1 norm of the coefficient vector is intended to replace the absence of the scaling factor on the output term. An important detail to note is that the order 𝑘 of the polynomial expansion is part of the specification of the indicator, as it appears in its notation and aims to offer users a simple way to balance the bias-variance trade-off.</p></div>
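The computation in Equation (5) can be sketched numerically as follows. This is only a rough illustration under our own assumptions: the cited work frames the problem as a linear optimization, whereas here a generic nonlinear scipy solver maximizes the covariance/variance ratio over coefficient vectors with unit L1 norm; the function name and starting point are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def gedi(x, y, k=2):
    """Illustrative estimate of GeDI (Eq. 5): maximize
    |cov(V_k x . a, y)| / var(V_k x . a) subject to ||a||_1 = 1."""
    # Polynomial expansion of orders 1..k (Vandermonde matrix without
    # the constant column, which contributes nothing to the covariance).
    V = np.vander(x, k + 1, increasing=True)[:, 1:]

    def neg_ratio(a):
        f = V @ a
        v = np.var(f, ddof=1)           # sample variance, matching np.cov
        if v < 1e-12:                   # guard against degenerate mappings
            return 0.0
        return -abs(np.cov(f, y)[0, 1] / v)

    cons = {"type": "eq", "fun": lambda a: np.abs(a).sum() - 1.0}
    best = minimize(neg_ratio, x0=np.full(k, 1.0 / k),
                    method="SLSQP", constraints=cons)
    return -best.fun
```

For a purely linear dependency y = 2x with k = 1, the only feasible mappings are f(x) = ±x, so the estimate recovers the slope magnitude 2.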
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">FAiRDAS with Continuous Attribute</head><p>As a demonstration of our approach, we focus on ranking students by their predicted academic performance to identify those at risk of dropping out. The real-world data is provided by the Canarian Agency for Quality Assessment and Accreditation (ACCUEE) <ref type="foot" target="#foot_0">1</ref> , which gathers information to assess the performance of their educational system through regular diagnostic reports. The data spans four academic years (2015-2019) including (1) the evaluation of students' academic proficiency in subjects such as Mathematics, Spanish, and English and (2) context questionnaires completed by students, school principals, families, and teachers to collect sociodemographic background information. In our test case, we rank students based on their Mathematics proficiency measured by a normalized score. The protected attribute considered is the Economic, Social, and Cultural Status (ESCS) <ref type="bibr" target="#b23">[24]</ref>, namely a continuous indicator that serves as a proxy for the socioeconomic status of students. Ensuring long-term stability is crucial in this context: although consistently high accuracy and fairness are desirable, it is essential to maintain stable actions over time to prevent negatively affecting students' academic progress.</p><p>In addressing the task at hand, we define two distinct groundings of the FAiRDAS framework. The first grounding, inspired by <ref type="bibr" target="#b14">[15]</ref>, adopts a set of discrete actions that requires a discretization of the sensitive attribute; conversely, the second grounding relies on a set of continuous actions that does not require any discretization. In both groundings, the continuous nature of the attribute is preserved when computing the fairness metric as we rely on GeDI <ref type="bibr" target="#b21">[22]</ref>. 
The groundings we propose represent two potential approaches to addressing long-term fairness with continuous attributes and should not be seen as conflicting: in certain scenarios, depending on the desired level of interpretability and the overall system requirements, discrete actions may be necessary, while in others, continuous actions might be preferred. In the remainder of this section, we describe the two groundings in detail.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Grounding with Discrete Actions</head><p>Set of Actions. Inspired by <ref type="bibr" target="#b14">[15]</ref>, we design a set of discrete actions that directly modify the scores used by the ranking algorithm. Formally, given the discretization 𝑣 ∈ 𝒱 = {𝑣 1 , 𝑣 2 , ..., 𝑣 𝑛 } of the continuous protected attribute ESCS, the actions are represented by a vector 𝜃 ∈ [0, 1] |𝒱 | with unit L1 norm. The modified score of a student with 𝐸𝑆𝐶𝑆 = 𝑣 is obtained by multiplying their original score by (1 − 𝜃 𝑣 ). Thus, the action vector components act as penalizing factors for over-represented sensitive groups in a batch, specifically affecting the scores of students in these protected groups. Higher values in the action vector (closer to 1) correspond to more significant penalization, whereas values closer to zero result in minimal modification to the student's score. In our application, we discretize the continuous protected attribute ESCS into four levels; thus, the action vector 𝜃 has four components, each applying to the students belonging to the corresponding ESCS level.</p></div>
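The discrete-action grounding above can be sketched in a few lines. The bin edges below are hypothetical (the paper only states that ESCS is discretized into four levels), and the function name is our own:

```python
import numpy as np

def apply_discrete_action(scores, escs, theta, bins):
    """Sketch of the discrete-action grounding (Sec. 4.1): bin the
    continuous ESCS attribute into |V| levels, then discount each
    student's score by (1 - theta_v) for their level v."""
    levels = np.digitize(escs, bins)  # map continuous ESCS to a level index
    # The grounding requires a unit-L1-norm action vector.
    assert np.isclose(np.abs(theta).sum(), 1.0), "theta must have unit L1 norm"
    return scores * (1.0 - theta[levels])
```

With four levels, `bins` holds three interior edges; a uniform action vector of 0.25 per component discounts every score by the same factor, leaving the ranking order unchanged.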
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Metrics of Interest.</head><p>In our case study, we are interested in decreasing socioeconomic discrimination while preserving ranking accuracy; thus, we need 1) a fairness metric able to deal with the continuous protected attribute ESCS and 2) an accuracy metric to measure the drop in ranking performance due to the application of the action vector 𝜃. As a fairness metric, we rely on GeDI, whereas to assess the system's drop in performance we measure the mean absolute difference between the original and modified scores, namely:</p><formula xml:id="formula_5">SAE(𝜃) = 1 𝐾 𝐾 ∑ 𝑘=1 |𝑠 𝑘 − (1 − 𝜃 𝑣 𝑘 ) ⋅ 𝑠 𝑘 | = 1 𝐾 𝐾 ∑ 𝑘=1 |𝑠 𝑘 ⋅ 𝜃 𝑣 𝑘 | = 1 𝐾 𝐾 ∑ 𝑘=1 𝑠 𝑘 ⋅ 𝜃 𝑣 𝑘 ,<label>(6)</label></formula><p>where 𝐾 is the number of students in a batch, 𝜃 𝑣 𝑘 ∈ [0, 1] is the component of the action vector corresponding to the ESCS level of the k-th student, and 𝑠 𝑘 ∈ [0, 1] is the score of the k-th student. It is worth noting that the two metrics of interest conflict: SAE drives 𝜃 𝑣 𝑘 towards zero to maintain the original ranking, whereas GeDI requires 𝜃 𝑣 𝑘 &gt; 0 for some 𝑘 to mitigate discrimination. Given that the action vector must have a unit L1 norm, the trivial solution of 𝜃 𝑣 𝑘 = 0 for all 𝑘, which would nullify both metrics, is not allowed.</p><p>Target Dynamic System. As we aim to meet the metric thresholds while maintaining longterm stability, we define our desired behavior by means of the following dynamic system, which defines a smooth evolution of the target metrics toward the thresholds:</p><formula xml:id="formula_6">ȳ 𝑡+1 = 𝜆 ⊙ (𝜇 − ȳ 𝑡 ) + ȳ 𝑡 ,<label>(7)</label></formula><p>where ȳ 𝑡 represents the metric values in the target system, 𝜇 is the vector of thresholds, 𝜆 ∈ (0, 2) 𝑛 , and ⊙ refers to the Hadamard (element-wise) product. 
Given that we are focusing on two metrics (GeDI and SAE), 𝜆 is a 2-dimensional vector, with its values determined through a preliminary experiment detailed in Section 5.</p><p>Distance Function and Optimization Method. We use Equation ( <ref type="formula" target="#formula_3">4</ref>) (Euclidean distance) as the distance function, optimizing it with the scipy implementation of the Sequential Least Squares Programming (SLSQP) optimizer.</p></div>
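A single FAiRDAS iteration in this grounding can be sketched as below: the target metrics are relaxed smoothly toward the thresholds, and the action vector is then chosen by minimizing the Euclidean distance of Equation (4) with SLSQP. The function name, the generic `metric_fn` callback, and the bound choices are our assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

def fairdas_step(y_target, mu, lam, metric_fn, theta0):
    """One illustrative FAiRDAS iteration: move the target metrics
    smoothly toward the thresholds mu, then pick theta minimizing the
    Euclidean distance between actual and target metrics (Eq. 4).
    metric_fn(theta) -> metric vector is supplied by the grounding."""
    y_target = y_target + lam * (mu - y_target)  # smooth move toward thresholds
    loss = lambda th: np.sum((metric_fn(th) - y_target) ** 2)
    # Discrete-action grounding: theta in [0, 1]^|V| with unit L1 norm.
    cons = {"type": "eq", "fun": lambda th: np.abs(th).sum() - 1.0}
    res = minimize(loss, theta0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * len(theta0), constraints=cons)
    return res.x, y_target
```

With an identity `metric_fn` (a toy stand-in), the optimizer simply projects the relaxed target onto the feasible action set.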
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Grounding with Continuous Actions</head><p>Set of Actions. To avoid the discretization of the protected attribute ESCS, we define the set of possible actions as a family of polynomial functions 𝑊 𝛽 parameterized by 𝛽 ∈ ℝ 𝑑+1 , where 𝑑 is the order of the polynomial<ref type="foot" target="#foot_1">2</ref> . The functions map each value of ESCS to a real number, which is then used as a multiplicative discount factor to modify the student's score. First, we rescale ESCS into the domain [0, 1], then we impose two constraints on the family of polynomial functions 𝑊 𝛽 , namely: 1) their integral over the domain must be unitary in order to avoid degenerate solutions, and 2) their roots must lie outside the domain in order to guarantee that each discount factor 𝑊 𝛽 (𝑧 𝑘 ) is strictly positive for all 𝑧 𝑘 ∈ [0, 1]. These constraints enhance the interpretability of the mitigation strategy by simplifying the comparison between the selected polynomial functions. Additionally, they prevent the trivial solution of a constant function equal to zero, which would nullify the fairness metric.</p></div>
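The constrained polynomial family 𝑊_𝛽 can be sketched as follows. This is a simplified illustration: the unit-integral constraint is enforced by rescaling in closed form, but positivity is only checked on a grid here, whereas the paper enforces it via root placement; the function name is hypothetical.

```python
import numpy as np

def weight_polynomial(beta):
    """Sketch of the continuous-action family W_beta (Sec. 4.2):
    a degree-d polynomial on rescaled ESCS z in [0, 1], normalized so
    its integral over [0, 1] is one, and required to be positive there."""
    beta = np.asarray(beta, dtype=float)
    d = len(beta) - 1
    # Integral over [0, 1] of sum_i beta_i z^i equals sum_i beta_i / (i + 1).
    integral = np.sum(beta / np.arange(1, d + 2))
    beta = beta / integral                    # unit-integral constraint
    W = lambda z: np.polyval(beta[::-1], z)   # beta is in ascending order
    # Positivity check on a fine grid (a simplification of the root constraint).
    assert np.all(W(np.linspace(0.0, 1.0, 1001)) > 0), "W_beta must be > 0 on [0, 1]"
    return W
```

For instance, β = (1, 1) yields W(z) = (1 + z)/1.5 after normalization: positive everywhere on [0, 1] and integrating to one.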
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Metrics of Interest.</head><p>As in Discrete Actions, we use GeDI as the fairness metric to deal with the continuous nature of ESCS. The ranking performance is measured by the mean squared error between the original and modified scores <ref type="foot" target="#foot_2">3</ref> :</p><formula xml:id="formula_7">MSE(𝛽) = 1 𝐾 𝐾 ∑ 𝑘=1 (𝑠 𝑘 − 𝑊 𝛽 (𝑧 𝑘 ) ⋅ 𝑠 𝑘 ) 2 ,<label>(8)</label></formula><p>where 𝑧 𝑘 is the ESCS value of the k-th student, and 𝑊 𝛽 (𝑧 𝑘 ) is the weighting polynomial function evaluated at 𝑧 𝑘 . As in Discrete Actions, the two metrics of interest conflict since MSE pushes 𝑊 𝛽 to be close to the constant function 𝑊 𝛽 = 1 while GeDI forces 𝑊 𝛽 to deviate from it.</p><p>Target Dynamic System. We rely on the same dynamic system as in Equation ( <ref type="formula" target="#formula_6">7</ref>), as our goal is to stably evolve the two metrics of interest below the predefined thresholds.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Distance Function and Optimization Method.</head><p>As before, we use Equation ( <ref type="formula" target="#formula_3">4</ref>) (Euclidean distance) as the distance function. However, when optimizing it, we rely on the scipy implementation of the Trust Region Method (trust-constr), as it proved to be more reliable, albeit at the expense of slightly higher computational time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experimental Results</head><p>This section outlines the empirical evaluation performed on the case study described in Section 4. We first define the evaluation procedure and then report the numerical results<ref type="foot" target="#foot_3">4</ref> .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Evaluation</head><p>We compare each of the two groundings with a baseline method, focusing on the metrics of interest and on the action smoothness (m 𝐴𝑐𝑡𝑖𝑜𝑛𝑠 ) described below. For each approach, we report the mean and standard deviation of the metrics across batches to assess performance and stability over time.</p><p>Action Smoothness. To evaluate the stability of the chosen actions over time, we compute the cosine distance between actions performed on consecutive batches. For the Discrete Actions grounding, m 𝐴𝑐𝑡𝑖𝑜𝑛𝑠 is defined as follows:</p><formula xml:id="formula_9">m 𝐴𝑐𝑡𝑖𝑜𝑛𝑠 = 1 𝑁 𝑁 −1 ∑ 𝑡=1 (1 − 𝜃 𝑡 ⋅ 𝜃 𝑡+1 / (‖𝜃 𝑡 ‖ ⋅ ‖𝜃 𝑡+1 ‖))<label>(9)</label></formula><p>where 𝑁 is the number of incoming batches and 𝜃 𝑡 is the action vector of the 𝑡-th batch. For the Continuous Actions grounding, m 𝐴𝑐𝑡𝑖𝑜𝑛𝑠 is computed by evaluating the weighting polynomial functions on a fine-grained discretization of the interval [0, 1]. Formally, it is defined as:</p><formula xml:id="formula_10">m 𝐴𝑐𝑡𝑖𝑜𝑛𝑠 = 1 𝑁 𝑁 −1 ∑ 𝑡=1 (1 − 𝑊 𝛽 𝑡 ⋅ 𝑊 𝛽 𝑡+1 / (‖𝑊 𝛽 𝑡 ‖ ⋅ ‖𝑊 𝛽 𝑡+1 ‖))<label>(10)</label></formula><p>where 𝑁 is the number of incoming batches and 𝑊 𝛽 𝑡 is the evaluation of the polynomial function chosen for the 𝑡-th batch.</p><p>Baseline Approach. We compare FAiRDAS in its Discrete Actions grounding against a baseline approach that focuses on finding the optimal action vector that minimizes:</p><formula xml:id="formula_12">ℒ (𝜃) = max (GeDI(𝜃), 𝜇 𝐺𝑒𝐷𝐼 ) + max (SAE(𝜃), 𝜇 𝑆𝐴𝐸 )<label>(11)</label></formula><p>where 𝜇 𝐺𝑒𝐷𝐼 and 𝜇 𝑆𝐴𝐸 are the metrics' thresholds. The action vector 𝜃 is the same as described in Section 4.1, and it is optimized via the SLSQP method, as for FAiRDAS. 
For the Continuous Actions grounding, the baseline approach searches for the optimal polynomial function 𝑊 𝛽 that satisfies the constraints described in Section 4.2 and minimizes:</p><formula xml:id="formula_13">ℒ (𝛽) = max (GeDI(𝛽), 𝜇 𝐺𝑒𝐷𝐼 ) + max (MSE(𝛽), 𝜇 𝑀𝑆𝐸 )<label>(12)</label></formula><p>where 𝜇 𝐺𝑒𝐷𝐼 and 𝜇 𝑀𝑆𝐸 are the metrics' thresholds. As for FAiRDAS, we rely on the Trust Region Method to tackle the optimization problem.</p></div>
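The action-smoothness metric of Equations (9) and (10) reduces to a mean cosine distance between consecutive action representations. A minimal sketch, assuming each action has already been flattened into a vector (for the continuous grounding, by sampling the polynomial on a fine grid of [0, 1]):

```python
import numpy as np

def action_smoothness(actions):
    """Mean cosine distance between consecutive action vectors (Eqs. 9-10).
    actions: array of shape (N, d) with one action vector per batch."""
    actions = np.asarray(actions, dtype=float)
    a, b = actions[:-1], actions[1:]  # consecutive pairs (t, t+1)
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1)
                                   * np.linalg.norm(b, axis=1))
    return np.mean(1.0 - cos)
```

Identical consecutive actions give a smoothness of 0, while orthogonal ones give 1, so lower values indicate more stable behavior over time.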
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Numerical Results</head><p>As a preliminary step, we examine how the eigenvalues 𝜆 of the FAiRDAS dynamic system influence action smoothness, in order to determine their optimal values for the experiments. We conduct multiple runs with a fixed threshold pair while varying the eigenvalues (Table <ref type="table">1</ref>). As expected from the theoretical characteristics of the dynamic state under consideration, lower eigenvalues result in more stable actions in both groundings. For our experiments, we select the eigenvalues corresponding to the elbow of the action smoothness curve. Next, we compare the performance of FAiRDAS and the baseline under different pairs of thresholds for the metrics of interest. For both the Discrete Actions and Continuous Actions settings, the threshold pair {0, 2} represents an extreme scenario where fairness is prioritized over ranking performance. We then examine a loose threshold pair, {0.7, 0.7}, and a stringent pair, {0.5, 0.5}. Finally, we investigate a pair of thresholds, {0.2, 0.2}, that cannot be met.</p></div>
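For the Continuous Actions grounding, Eq. (10) applies the same cosine distance after evaluating each batch's weighting polynomial on a fine grid over the ESCS domain [0, 1]. The sketch below illustrates this; the grid size and the example coefficients are assumptions for illustration only (the paper fixes the polynomial degree to 4, per the footnotes).

```python
import numpy as np

def poly_smoothness(coeffs_per_batch, grid_size=101):
    """Eq. (10): mean cosine distance between consecutive weighting
    polynomials, each evaluated on a discretization of [0, 1]."""
    xs = np.linspace(0.0, 1.0, grid_size)
    # Evaluate each batch's polynomial W_beta on the grid
    # (coefficients are given highest order first, as np.polyval expects).
    evals = np.array([np.polyval(c, xs) for c in coeffs_per_batch])
    dists = [
        1.0 - (evals[t] @ evals[t + 1])
        / (np.linalg.norm(evals[t]) * np.linalg.norm(evals[t + 1]))
        for t in range(len(evals) - 1)
    ]
    # Normalised by N, as in the discrete case.
    return sum(dists) / len(evals)

# Two nearby degree-4 polynomials (coefficients made up for the example):
# a smooth action sequence yields a small smoothness value.
betas = [[0.1, -0.2, 0.0, 0.3, 0.9], [0.12, -0.19, 0.01, 0.29, 0.9]]
print(poly_smoothness(betas))  # small value: near-identical polynomials
```

Under this reading, the lower eigenvalues in Table 1 correspond to polynomial sequences whose grid evaluations change little from one batch to the next.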
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Mean and standard deviation of the action smoothness computed over the batches for FAiRDAS. We analyse the results for 5 different eigenvalues (𝜆) with a fixed threshold pair {0.5, 0.5}. For each eigenvalue, we run eight experiments. We select 𝜆 = 0.2 as the elbow of the curve for both groundings (in bold).</p><p>Results with Discrete Actions. Table <ref type="table" target="#tab_0">2</ref> presents the mean and standard deviation of the metrics throughout 100 batches for both the baseline and the FAiRDAS approach in the Discrete Actions setting. Across all threshold pairs, the two methods achieve comparable levels of the metrics of interest (GeDI and SAE). However, the baseline exhibits notably higher instability in the chosen actions (higher m_Actions) than FAiRDAS, especially with stringent thresholds. This finding confirms the ability of FAiRDAS to maintain both performance effectiveness and fairness over time while avoiding drastic actions that may raise ethical concerns. The increased stability of the FAiRDAS approach is illustrated in Figure <ref type="figure" target="#fig_0">1</ref>, which shows the action vectors selected by both approaches in an experiment with stringent thresholds. The figure provides a component-wise comparison of the baseline and FAiRDAS action vectors across all 100 batches. As detailed in Section 4.1, each component of the action vector affects students in the corresponding ESCS level and acts as a penalizing factor on their scores, potentially altering their ranking. Higher values indicate stronger penalization, while values near zero leave the student's score untouched. The baseline method tends to favor rapid and drastic interventions, indicated by 1) the sudden color changes between batches and 2) action components close to one (lighter color). 
In contrast, FAiRDAS exhibits a more moderated and balanced behavior, with action vectors evolving smoothly over the experiment (gradual color changes along the x-axis) and similar penalization across groups (uniform color along the y-axis).</p><p>Results with Continuous Actions. In Table <ref type="table" target="#tab_1">3</ref> we report the mean and standard deviation of the metrics over 100 batches for both the baseline and the FAiRDAS approach under the Continuous Actions setting. As with Discrete Actions, the numerical results confirm FAiRDAS's capability to maintain both performance effectiveness and fairness over time while avoiding drastic actions. FAiRDAS and the baseline achieve similar levels of the metrics of interest (GeDI and MSE) across all thresholds, but FAiRDAS reaches lower values of action smoothness (m_Actions). This result is exemplified in Figure <ref type="figure" target="#fig_1">2</ref>, where we present an example of the polynomial functions selected by FAiRDAS and the baseline throughout 100 batches. Each column displays the function chosen for the corresponding batch, evaluated over the ESCS domain [0, 1] (y-axis). As described in Section 4.2, the functions influence the ranking based on the students' ESCS value, serving as a penalizer on their scores: lower values correspond to more substantial penalization, whereas values close to one indicate that the student's score is unaffected. As with Discrete Actions, we observe that the baseline method tends to favor rapid and drastic actions, as indicated by 1) the abrupt color changes between batches and 2) the high penalization values (higher contrast).</p><p>Conversely, FAiRDAS demonstrates a more moderated and balanced behavior, with polynomial functions evolving smoothly throughout the batches (gradual color changes along the x-axis) and more consistent penalization across different ESCS values (smooth color changes along the y-axis).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>We introduced a novel approach that integrates state-of-the-art techniques to address long-term fairness in the presence of continuous protected attributes. This is achieved by pairing FAiRDAS <ref type="bibr" target="#b14">[15]</ref>, a framework aimed at ensuring long-term fairness in ranking systems while preserving stable actions, with the Generalized Disparate Impact (GeDI) indicator <ref type="bibr" target="#b21">[22]</ref>, a fairness metric specifically designed to handle continuous protected attributes. Our contribution includes the definition of two possible sets of actions to handle continuous attributes. The first set prioritizes interpretability but introduces discretization, whereas the second maintains the continuity of actions at the expense of interpretability. Which set of actions to apply depends on the specific requirements and constraints of the application context. We validated our methodology through a case study in the domain of AI and Education, where we compared the performance and stability of FAiRDAS against a baseline method. Our analysis demonstrates that the integration of FAiRDAS and GeDI with our defined actions provides a robust solution for addressing long-term fairness under continuous protected attributes.</p><p>To the best of our knowledge, this is the first work that tackles long-term fairness and stability in ranking with continuous attributes. We therefore believe that it can lay the groundwork for further research and applications in domains where handling continuous attributes and ensuring stability are of key importance, yet currently understudied.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: An example of the action vectors selected by the baseline (a) and FAiRDAS (b) in an experiment with thresholds {0.2, 0.2}. 
Each row shows the progression of the corresponding action vector component throughout 100 batches. FAiRDAS exhibits a more moderated and balanced behavior, with action vectors evolving smoothly over the experiment.</figDesc><graphic coords="11,91.81,89.17,202.10,171.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: An example of the polynomial functions selected by the baseline (a) and FAiRDAS (b) in an experiment with thresholds {0.2, 0.2}. Each column represents the polynomial function selected for the corresponding batch, evaluated on the domain [0, 1].</figDesc><graphic coords="12,91.81,89.17,202.09,137.33" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2</head><label>2</label><figDesc>Mean and standard deviation of the metrics computed over batches for Discrete Actions. We run eight experiments for each pair of thresholds and report the results for the baseline and FAiRDAS approach.</figDesc><table><row><cell></cell><cell cols="2">Discrete Actions</cell><cell cols="2">Continuous Actions</cell></row><row><cell>𝜆</cell><cell>m Actions</cell><cell>𝜎 m Actions</cell><cell>m Actions</cell><cell>𝜎 m Actions</cell></row><row><cell>1.0</cell><cell>0.201 ± 0.042</cell><cell>0.265 ± 0.050</cell><cell>0.044 ± 0.023</cell><cell>0.166 ± 0.032</cell></row><row><cell>0.5</cell><cell>0.054 ± 0.045</cell><cell>0.128 ± 0.106</cell><cell>0.011 ± 0.003</cell><cell>0.053 ± 0.016</cell></row><row><cell>0.2</cell><cell>0.036 ± 0.048</cell><cell>0.115 ± 0.116</cell><cell>0.006 ± 0.002</cell><cell>0.023 ± 0.010</cell></row><row><cell>0.1</cell><cell>0.031 ± 0.048</cell><cell>0.105 ± 0.123</cell><cell>0.005 ± 0.002</cell><cell>0.025 ± 0.012</cell></row><row><cell>0.01</cell><cell>0.029 ± 0.049</cell><cell>0.101 ± 0.126</cell><cell>0.001 ± 0.001</cell><cell>0.002 ± 0.002</cell></row><row><cell>Thresholds</cell><cell>Approach</cell><cell>GeDI</cell><cell>𝜎 GeDI</cell><cell>SAE</cell><cell>𝜎 SAE</cell><cell>m Actions</cell><cell>𝜎 m Actions</cell></row><row><cell>{0, 2}</cell><cell>Baseline FAiRDAS</cell><cell cols="7">.388 ± .146 .269 ± .104 .469 ± .124 .636 ± .087 .262 ± .047 .015 ± .010 .054 ± .058 .606 ± .172 .644 ± .117 .550 ± .090 .139 ± .014 .183 ± .010</cell></row><row><cell>{0.7, 0.7}</cell><cell>Baseline FAiRDAS</cell><cell cols="7">.456 ± .108 .294 ± .103 .487 ± .115 .642 ± .124 .608 ± .159 .612 ± .102 .568 ± .127 .261 ± .062 .046 ± .060 .144 ± .131 .211 ± .065 .288 ± .063</cell></row><row><cell>{0.5, 0.5}</cell><cell>Baseline FAiRDAS</cell><cell cols="7">.510 ± .157 .281 ± .097 .484 ± .111 .627 ± .103 .263 ± .096 .036 ± .048 .115 ± .116 .694 ± .151 .627 ± .149 .660 ± .123 .273 ± .046 .301 ± .034</cell></row><row><cell>{0.2, 0.2}</cell><cell>Baseline FAiRDAS</cell><cell cols="7">.579 ± .158 .289 ± .121 .511 ± .133 .638 ± .119 .377 ± .076 .027 ± .015 .079 ± .057 .743 ± .168 .639 ± .157 .736 ± .130 .358 ± .043 .334 ± .013</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3</head><label>3</label><figDesc>Mean and standard deviation of the metrics computed over batches for Continuous Actions. We run eight experiments for each pair of thresholds and report the results for the baseline and FAiRDAS.</figDesc><table><row><cell>Thresholds</cell><cell>Approach</cell><cell>GeDI</cell><cell>𝜎 GeDI</cell><cell>MSE</cell><cell>𝜎 MSE</cell><cell>m Actions</cell><cell>𝜎 m Actions</cell></row><row><cell>{0, 2}</cell><cell>Baseline</cell><cell>.629 ± .267</cell><cell>1.202 ± .337</cell><cell>.572 ± .197</cell><cell>.690 ± .078</cell><cell>.015 ± .005</cell><cell>.101 ± .056</cell></row><row><cell></cell><cell>FAiRDAS</cell><cell>.449 ± .162</cell><cell>.717 ± .145</cell><cell>.546 ± .202</cell><cell>.684 ± .087</cell><cell>.005 ± .001</cell><cell>.024 ± .006</cell></row><row><cell>{0.7, 0.7}</cell><cell>Baseline FAiRDAS</cell><cell>.629 ± .235 .676 ± .194</cell><cell>.970 ± .325 .782 ± .272</cell><cell>.550 ± .200 .532 ± .198</cell><cell>.688 ± .082 .697 ± .088</cell><cell cols="2">.017 ± .004 .005 ± .003 .020 ± .017 .108 ± .039</cell></row><row><cell>{0.5, 0.5}</cell><cell>Baseline FAiRDAS</cell><cell>.644 ± .213 .712 ± .275</cell><cell>1.011 ± .298 .875 ± .375</cell><cell>.555 ± .198 .536 ± .204</cell><cell>.688 ± .08 .689 ± .087</cell><cell cols="2">.017 ± .005 .006 ± .002 .023 ± .010 .107 ± .048</cell></row><row><cell>{0.2, 0.2}</cell><cell>Baseline FAiRDAS</cell><cell>.675 ± .233 .607 ± .201</cell><cell>1.152 ± .312 .839 ± .284</cell><cell>.567 ± .194 .534 ± .200</cell><cell>.688 ± .079 .692 ± .088</cell><cell cols="2">.015 ± .005 .008 ± .002 .034 ± .013 .100 ± .057</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Dataset: https://zenodo.org/records/11171863.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">In our application, we choose 𝑑 = 4 as it provides a sufficient trade-off between the expressiveness of the function and the known numerical instability of polynomial kernels, along with their higher computational workload.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">We rely on MSE and not on SAE to avoid the computation of an absolute error.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">The source code to reproduce the experiments can be found at https://github.com/EleMisi/FairRanking under MIT license.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">Disclaimer: This paper reflects only the authors' views. The European Commission is not responsible for any use that may be made of the information it contains.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>The work has been partially supported by the AEQUITAS project funded by the European Union's Horizon Europe Programme (Grant Agreement No. 101070363), and by PNRR -M4C2 -Investimento 1.3, Partenariato Esteso PE00000013 -"FAIR -Future Artificial Intelligence Research" -Spoke 8 "Pervasive AI", funded by the European Commission under the NextGeneration EU programme <ref type="bibr" target="#b4">5</ref> .</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">A survey on bias and fairness in machine learning</title>
		<author>
			<persName><forename type="first">N</forename><surname>Mehrabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Morstatter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Galstyan</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>ACM</publisher>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="1" to="35" />
			<pubPlace>New York, NY, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Zehlike</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Stoyanovich</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2103.14000</idno>
		<title level="m">Fairness in ranking: A survey</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Delayed impact of fair machine learning</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">T</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dean</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Rolf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Simchowitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v80/liu18c.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 35th International Conference on Machine Learning, ICML 2018</title>
				<editor>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</editor>
		<meeting>the 35th International Conference on Machine Learning, ICML 2018<address><addrLine>Stockholmsmässan, Stockholm, Sweden; PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">July 10-15, 2018. 2018</date>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="3156" to="3164" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A survey on bias and fairness in machine learning</title>
		<author>
			<persName><forename type="first">N</forename><surname>Mehrabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Morstatter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Galstyan</surname></persName>
		</author>
		<idno type="DOI">10.1145/3457607</idno>
		<ptr target="https://doi.org/10.1145/3457607" />
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Data preprocessing techniques for classification without discrimination</title>
		<author>
			<persName><forename type="first">F</forename><surname>Kamiran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Calders</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10115-011-0463-8</idno>
		<ptr target="http://dx.doi.org/10.1007/s10115-011-0463-8" />
	</analytic>
	<monogr>
		<title level="j">Knowledge and Information Systems</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="1" to="33" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Optimized pre-processing for discrimination prevention</title>
		<author>
			<persName><forename type="first">F</forename><surname>Calmon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Vinzamuri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Natesan Ramamurthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">R</forename><surname>Varshney</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper/2017/file/9a49a25d845a483fae4be7e341368e36-Paper.pdf" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<editor>
			<persName><forename type="first">I</forename><surname>Guyon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">U</forename><forename type="middle">V</forename><surname>Luxburg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bengio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Fergus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Vishwanathan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">30</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Data preprocessing to mitigate bias: A maximum entropy based approach</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">E</forename><surname>Celis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Keswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Vishnoi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1349" to="1359" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Fairness-aware classifier with prejudice remover regularizer</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kamishima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Akaho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Asoh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sakuma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning and Knowledge Discovery in Databases</title>
				<editor>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Flach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>De Bie</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Cristianini</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="35" to="50" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Nonconvex optimization for regression with fairness constraints</title>
		<author>
			<persName><forename type="first">J</forename><surname>Komiyama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Takeda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Honda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Shimao</surname></persName>
		</author>
		<ptr target="https://proceedings.mlr.press/v80/komiyama18a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 35th International Conference on Machine Learning</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</editor>
		<meeting>the 35th International Conference on Machine Learning<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="2737" to="2746" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">Y</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Qin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gao</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2210.12546</idno>
		<title level="m">Policy optimization with advantage regularization for long-term fairness in decision systems</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Towards long-term fairness in recommendation</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Ge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Ou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th ACM international conference on web search and data mining</title>
				<meeting>the 14th ACM international conference on web search and data mining</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="445" to="453" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Three naive bayes approaches for discrimination-free classification</title>
		<author>
			<persName><forename type="first">T</forename><surname>Calders</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Verwer</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10618-010-0190-x</idno>
	</analytic>
	<monogr>
		<title level="j">Data Min. Knowl. Discov</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page" from="277" to="292" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Equality of opportunity in supervised learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Price</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Price</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Srebro</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Lee</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Sugiyama</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">U</forename><surname>Luxburg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Guyon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">29</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Fair and optimal classification via post-processing</title>
		<author>
			<persName><forename type="first">R</forename><surname>Xian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="37977" to="38012" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">FAiRDAS: Fairness-aware ranking as dynamic abstract system</title>
		<author>
			<persName><forename type="first">E</forename><surname>Misino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Calegari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lombardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Milano</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-3523/paper5.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">R</forename><surname>Calegari</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Tubella</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>González-Castañé</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Dignum</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Milano</surname></persName>
		</editor>
		<meeting>the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)<address><addrLine>Kraków, Poland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-10-01">October 1st, 2023. 2023</date>
			<biblScope unit="volume">3523</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">FA*IR: A fair top-k ranking algorithm</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zehlike</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bonchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Castillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hajian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Megahed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Baeza-Yates</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 ACM on Conference on Information and Knowledge Management</title>
				<meeting>the 2017 ACM on Conference on Information and Knowledge Management</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1569" to="1578" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Fairness-aware ranking in search &amp; recommendation systems with application to LinkedIn talent search</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">C</forename><surname>Geyik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ambler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kenthapadi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery &amp; data mining</title>
				<meeting>the 25th ACM SIGKDD international conference on knowledge discovery &amp; data mining</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2221" to="2231" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Fairness of exposure in rankings</title>
		<author>
			<persName><forename type="first">A</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Joachims</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining</title>
				<meeting>the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="2219" to="2228" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Equity of attention: Amortizing individual fairness in rankings</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Biega</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Gummadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Weikum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The 41st international ACM SIGIR conference on research &amp; development in information retrieval</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="405" to="414" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Fairness-aware learning for continuous attributes and treatments</title>
		<author>
			<persName><forename type="first">J</forename><surname>Mary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Calauzènes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">E</forename><surname>Karoui</surname></persName>
		</author>
		<ptr target="https://proceedings.mlr.press/v97/mary19a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 36th International Conference on Machine Learning</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Chaudhuri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Salakhutdinov</surname></persName>
		</editor>
		<meeting>the 36th International Conference on Machine Learning<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">97</biblScope>
			<biblScope unit="page" from="4382" to="4391" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Fairness-aware neural rényi minimization for continuous features</title>
		<author>
			<persName><forename type="first">V</forename><surname>Grari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lamprier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Detyniecki</surname></persName>
		</author>
		<idno type="DOI">10.24963/ijcai.2020/313</idno>
		<ptr target="https://doi.org/10.24963/ijcai.2020/313" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20</title>
				<editor>
			<persName><forename type="first">C</forename><surname>Bessiere</surname></persName>
		</editor>
		<meeting>the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20</meeting>
		<imprint>
			<publisher>International Joint Conferences on Artificial Intelligence Organization</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="2262" to="2268" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Generalized disparate impact for configurable fairness solutions in ML</title>
		<author>
			<persName><forename type="first">L</forename><surname>Giuliani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Misino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lombardi</surname></persName>
		</author>
		<ptr target="https://proceedings.mlr.press/v202/giuliani23a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 40th International Conference on Machine Learning</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Brunskill</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Engelhardt</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Sabato</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Scarlett</surname></persName>
		</editor>
		<meeting>the 40th International Conference on Machine Learning<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">202</biblScope>
			<biblScope unit="page" from="11443" to="11458" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Learning optimal and fair decision trees for nondiscriminative decision-making</title>
		<author>
			<persName><forename type="first">S</forename><surname>Aghaei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Azizi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vayanos</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v33i01.33011418</idno>
		<ptr target="https://doi.org/10.1609/aaai.v33i01.33011418" />
	</analytic>
	<monogr>
		<title level="m">The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019</title>
				<meeting><address><addrLine>Honolulu, Hawaii, USA</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2019-02-01">January 27 - February 1, 2019</date>
			<biblScope unit="page" from="1418" to="1426" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">The measure of socio-economic status in PISA: a review and some suggested improvements</title>
		<author>
			<persName><forename type="first">F</forename><surname>Avvisati</surname></persName>
		</author>
		<idno type="DOI">10.1186/s40536-020-00086-x</idno>
		<ptr target="https://doi.org/10.1186/s40536-020-00086-x" />
	</analytic>
	<monogr>
		<title level="j">Large-scale Assessments in Education</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">8</biblScope>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
