<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Fair Enough? A Map of the Current Limitations of the Requirements to Have Fair Algorithms</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Daniele</forename><surname>Regoli</surname></persName>
							<email>daniele.regoli@intesasanpaolo.com</email>
							<affiliation key="aff0">
								<orgName type="department">Data &amp; Artificial Intelligence Office</orgName>
								<orgName type="institution">Intesa Sanpaolo S.p.A</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Castelnovo</surname></persName>
							<email>alessandro.castelnovo@intesasanpaolo.com</email>
							<affiliation key="aff0">
								<orgName type="department">Data &amp; Artificial Intelligence Office</orgName>
								<orgName type="institution">Intesa Sanpaolo S.p.A</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nicole</forename><surname>Inverardi</surname></persName>
							<email>nicole.inverardi@intesasanpaolo.com</email>
							<affiliation key="aff0">
								<orgName type="department">Data &amp; Artificial Intelligence Office</orgName>
								<orgName type="institution">Intesa Sanpaolo S.p.A</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gabriele</forename><surname>Nanino</surname></persName>
							<email>naninogabriele@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="institution">Scuola Superiore Sant&apos;Anna</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ilaria</forename><surname>Penco</surname></persName>
							<email>ilaria.penco@intesasanpaolo.com</email>
							<affiliation key="aff0">
								<orgName type="department">Data &amp; Artificial Intelligence Office</orgName>
								<orgName type="institution">Intesa Sanpaolo S.p.A</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Fair Enough? A Map of the Current Limitations of the Requirements to Have Fair Algorithms</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">CD7DC9BB225CF2B8FE6DBF8A0313402C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Fairness</term>
					<term>Bias</term>
					<term>Artificial Intelligence</term>
					<term>Machine Learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In recent years, the increased usage and efficiency of Artificial Intelligence and, more generally, of Automated Decision-Making (ADM) systems has brought with it a growing and welcome awareness of the risks associated with such systems. One such risk is that of perpetuating or even amplifying the biases and unjust disparities present in the data from which many of these systems learn. On the one hand, this awareness has encouraged several scientific communities to develop increasingly appropriate methods to assess, quantify, and possibly mitigate such biases and disparities. On the other hand, it has prompted ever broader sectors of society, including policy makers, to call for fair algorithms. We believe that, while much excellent multidisciplinary research is currently being conducted, what is still fundamentally missing is the awareness that having fair algorithms is, per se, a nearly meaningless requirement that must be complemented by many additional social choices to become actionable. In other words, there is a hiatus between what society is demanding of ADM systems and what this demand actually means in real-world scenarios. In this work, we outline the key features of this hiatus and identify a set of crucial open points that we as a society must address in order to give concrete meaning to the increasing demand for fairness in ADM.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>and for practitioners and developers of ADM systems to meet the societal requirement of having fair algorithms.</p><p>In particular, we build our critical analysis of fair-AI by distinguishing two broad aspects that are at the root of several ambiguities:</p><p>1. Choice of sensitive attributes: discrimination is not unjust per se; it is unjust only with respect to certain attributes (so-called sensitive or protected attributes), which we must either list and agree upon or define by some reasonable criterion. 2. What is the true meaning of unfair discrimination? We need to clarify what we mean by making decisions involving such attributes, and in what cases such decision-making represents unjust discrimination.</p><p>In Table <ref type="table" target="#tab_0">1</ref>, we summarise the key points raised throughout the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Choice of sensitive attributes</head><p>In a nutshell, we claim that there is a fundamental ambiguity about which characteristics should be considered protected for a given case and domain. High-level non-discrimination principles can be found in several legislative frameworks, such as Article 21 of the EU Charter of Fundamental Rights, but there is no clear, specific legislation that organically discusses which individual characteristics should be considered protected. As an example, Table <ref type="table" target="#tab_1">2</ref> attempts to summarise the European non-discrimination laws, most of which cover specific domains of application and specific protected characteristics, thus making it unclear what to do with other potentially sensitive features and other domains. Something similar holds for US legislation; we refer to Barocas et al. <ref type="bibr">[11, chap 6]</ref> and Barocas and Selbst <ref type="bibr" target="#b11">[12]</ref> for more details. In general, we claim that there is no ethical or legal consensus on the dimensions (or on the criteria to identify them) with respect to which we should assess and, eventually, mitigate possible biases and discrimination. This can be summarised in the following:</p><p>Open Point 1 (Protected Groups). Given a specific phenomenon, what are the groups of people that we should consider protected, and with respect to which we therefore have to take care of assessing and avoiding any unjust discrimination?</p><p>In turn, this ambiguity reveals a number of additional nuanced points. For example, it is unclear whether a fairness assessment should be conducted with regard to all protected groups even when the ADM system does not actually collect such protected information. Indeed, while it is fairly common to record information about gender or age, it is much less common to collect data about political opinions or religious beliefs. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Open Point 2 (Sensitive data collection). Should developers of ADM systems keep track of all the sensitive attributes that they would not otherwise record, for the sole purpose of assessing unjust discrimination with respect to those attributes?</head><p>Moreover, there is ambiguity regarding the very definition of a (protected) group. For instance, age can be aggregated in multiple ways, and assessing fairness with respect to different aggregations can lead to very different results. On a more abstract level, there are concerns and discussions about the prospect of placing people in rigid and exclusive categories <ref type="bibr" target="#b12">[13]</ref>. For instance, multiracial individuals belong to several racial groupings at once. Indeed, at least on a biological/genetic level, race and ethnicity are now seen as extremely fluid and nuanced concepts rather than simple categorical attributes. Gender and sexual orientation are subject to very similar criticism.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Open Point 3 (Group aggregation). The specific identification of most attributes that are commonly considered protected depends on alternative ways of aggregating individuals: what strategy should developers follow to choose the proper aggregation when assessing unjust discrimination?</head><p>Finally, although the literature contains some proposals on how to address this, there is still no consensus on how to deal with the exponentially growing number of subgroups that must be taken into account when considering the intersection of several protected attributes at the same time <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>: Open Point 4 (Intersectional bias). Is it fair enough to evaluate unjust discrimination with respect to each sensitive attribute separately? If not, which combinations of sensitive characteristics should we prioritise (given that we cannot realistically hope to assess all possible combinations)?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">What's the true meaning of unfair discrimination?</head><p>We believe that the crucial fuzziness around fair-AI lies in the fact that there is no consensus regarding what it means to unfairly discriminate against one group of people with respect to others. Both the legislative and the ethical literature distinguish between direct and indirect discrimination, <ref type="foot" target="#foot_0">2</ref> the former indicating an explicit use of protected characteristics to make a decision, the latter indicating discrimination through characteristics that are somehow associated with protected ones but are not protected per se. A possible example of indirect gender discrimination is the use of income as a variable in decisions on loan approvals, given that income is typically correlated with gender. However useful, this distinction raises a set of open problems, the first being that it is not always possible to avoid both direct and indirect discrimination, so that some kind of trade-off must be tolerated or may even be desirable (think, e.g., of affirmative action, which is indeed a strategy to remove indirect discrimination through explicit, i.e. direct, discrimination):</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Open Point 5 (Direct vs Indirect discrimination). When evaluating unjust discrimination, should developers of ADM systems take into account all the potential direct and indirect ways in which sensitive characteristics may have affected the outcome? Is it acceptable to engage in direct discrimination in order to prevent indirect discrimination?</head><p>In particular, the concept of indirect discrimination is complex and hides many subtleties. In fact, most legislative frameworks admit that some characteristics can legitimately be used to make decisions, even when associated with protected attributes, since they represent a "business need" (such as income in the previous example on loan approvals). <ref type="foot" target="#foot_1">3</ref></p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Open Point 6 (Legitimate Business needs). What qualities should an attribute have, if any, to be eligible for use in automatic judgements, even if it serves as a basis for indirect discrimination?</head><p>In this respect, we would like to point out that a possible way out of Open Point 6 on legitimate business needs would be to identify ex ante a set of variables as the only legitimate features to be used in particularly delicate domains (such as job recruiting). This can be seen as the counterpart of Fairness Through Unawareness, or Blindness [see, e.g. 23]. Blindness consists in building ADM systems that are not directly exposed to sensitive attributes. While this strategy prevents direct discrimination, it leaves room for indirect discrimination through the use of variables associated with sensitive characteristics. Given this context, rather than offering a list of unusable attributes, we could offer a list of attributes that are the only ones allowed for a particular domain/case, assuming that we consider those attributes relevant for the task at hand and will therefore accept any disparities that may result from the association of such variables with sensitive attributes. <ref type="foot" target="#foot_2">4</ref>In order to capture different concepts of (direct and indirect) discrimination, the fair-AI literature has developed a wide range of observational metrics <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref>. However, on the one hand it has been proven that most of these metrics are mutually incompatible <ref type="bibr" target="#b25">[26]</ref>, and on the other hand purely observational metrics are blind to the real underlying mechanism and only provide a static picture of an often very intricate phenomenon. 
For a given use case, choosing a single observational metric is, at best, exceedingly challenging and most likely simplistic. Some works have proposed guidelines (usually in the form of decision trees or diagrams) to help find the most appropriate statistical metric given domain-specific constraints [see, e.g., <ref type="bibr" target="#b26">27,</ref><ref type="bibr" target="#b27">28,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref>. However, as the authors of these works clearly acknowledge, following the proposed decision diagrams is itself complicated, necessarily involving multidisciplinary competencies, and they warn in any case not to take these diagrams too categorically or as a set of well-established prescriptions. To make things even blurrier, while achieving perfect parity with respect to different classes of metrics is mathematically impossible, allowing a limited level of disparity may be attainable for multiple metrics at the same time <ref type="bibr" target="#b30">[31]</ref>, suggesting that focusing too much on a single metric may be counterproductive after all. Furthermore, employing such metrics creates further uncertainty about the numerical threshold that indicates the actual presence of unfairness, or even about the specific form of the metric that should be used (e.g., taking ratios vs. differences of the relevant quantities; see, e.g., Ruggieri et al. <ref type="bibr" target="#b1">[2]</ref>).</p><p>Open Point 7 (Over-reliance on observational metrics). Purely observational fairness metrics should be taken with a grain of salt. At best, they can serve as a starting point for deeper reasoning about the mechanisms underlying a phenomenon, rather than as a final word on the presence or absence of unjust discrimination. 
A clear connection between quantitative metrics and unjust discrimination is still missing.</p><p>One of the problems with observational metrics is that they are blind to the causal structure of the underlying phenomenon, with the risk of attributing to bias and discrimination what is instead spurious correlation. Indeed, it is true that a causality-aware approach makes it possible to transparently disentangle direct, indirect, and spurious effects, as brilliantly showcased by Plečko and Bareinboim <ref type="bibr" target="#b31">[32]</ref>.</p><p>However, identifying the causal structure of a given use case is far from straightforward. Moreover, there is an open philosophical debate regarding the very notion of human attributes as causes <ref type="bibr" target="#b32">[33]</ref>. Therefore, even though we welcome causal analysis of the underlying phenomenon as a way to better assess the presence of bias and discrimination, we would like to draw attention to the following point:</p><p>Open Point 8 (Assumptions of causal structure). Causal tools require strong, often unverifiable, assumptions. The downstream consequences of wrong assumptions can lead to incorrect or even harmful actions. Therefore, particular care must be taken when relying on such tools.</p></div>
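To make the discussion of observational metrics concrete, the following toy sketch (not taken from the paper; all data and function names are illustrative assumptions) computes two widely used observational metrics, demographic parity difference and equal opportunity difference, on the same synthetic decisions, showing how a classifier can satisfy one while violating the other when base rates differ across groups:

```python
# Illustrative sketch: two observational fairness metrics can disagree
# on the same decisions. Data and helper names are hypothetical.

def demographic_parity_diff(decisions, groups):
    """|P(D=1 | A=0) - P(D=1 | A=1)|: disparity in acceptance rates."""
    rate = lambda g: sum(d for d, a in zip(decisions, groups) if a == g) / groups.count(g)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(decisions, groups, labels):
    """|P(D=1 | Y=1, A=0) - P(D=1 | Y=1, A=1)|: disparity in true-positive rates."""
    def tpr(g):
        pos = [d for d, a, y in zip(decisions, groups, labels) if a == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))

# Synthetic data: group 1 has a lower base rate of positive labels.
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
labels    = [1, 1, 0, 0, 1, 0, 0, 0]
decisions = [1, 1, 0, 0, 1, 0, 0, 0]  # a perfectly accurate classifier

print(demographic_parity_diff(decisions, groups))         # 0.25: acceptance rates differ
print(equal_opportunity_diff(decisions, groups, labels))  # 0.0: true-positive rates are equal
```

Under these assumptions, the perfectly accurate classifier has zero equal opportunity difference but a nonzero demographic parity difference, a small instance of the incompatibility between metric classes discussed above.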
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Conclusions</head><p>In this work, we support the view that our understanding of the landscape of unjust discrimination in ADM, despite the impressive work done in the last decade or so by the fair-AI community, is not yet mature enough to be "put into practice". In particular, there are gaps at the intersection between the mathematical and statistical tools developed by statisticians and AI researchers, the legal non-discrimination provisions, and the ethical and social notions of fairness. We believe these gaps are deep, and not easy to bridge, in part precisely because they lie at the boundaries between quite different worlds.</p><p>We believe that the requirement to develop fair algorithms is still too vague and that, in order to put it in place, we have to clarify and be aware of a set of open points, most of which are societal in nature rather than technical. Given its strongly multidisciplinary nature, this challenge will be best addressed and discussed when researchers and practitioners from the various fields involved (e.g., statisticians, AI experts, ethicists, and legal experts) collaborate toward the goal, potentially creating a shared vocabulary and set of working notions. In fact, this is the spirit that also guided this work.</p><p>As a final remark, notice that there are of course other problematic aspects of unjust discrimination in ADM systems that are somewhat out of the scope of this work and would have required a much broader analysis. 
We can cite, e.g., the challenge of bias in models of unstructured data such as images and text, especially in generative AI models such as modern Large Language Models; or more technological problems, such as those raised by the modularity of AI systems, which are usually composed of several steps and components, making it complicated to clarify how fairness issues may propagate through the process.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Schematic summary of current open points of fairness requirements for ADM.</figDesc><table><row><cell>broad aspect</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Schematic summary of protected categories explicitly covered by EU Directives on non-discrimination. See also the EU non-discrimination website.</figDesc><table><row><cell>Directive</cell><cell>year</cell><cell>Domain of application</cell><cell>Protected categories</cell></row><row><cell>Race Equality Directive [Directive 2000/43/EC]</cell><cell>2000</cell><cell>employment, social protection, healthcare, education, access to and supply of goods and services which are available to the public</cell><cell>race and ethnic origin</cell></row><row><cell>Employment Directive [Directive 2000/78/EC]</cell><cell>2000</cell><cell>working environment</cell><cell>religion or belief, disability, age, sexual orientation</cell></row><row><cell>Gender Access Directive [Directive 2004/113/EC]</cell><cell>2004</cell><cell>access to and supply of goods and services</cell><cell>gender</cell></row><row><cell>Gender Equality Directive [Directive 2006/54/EC]</cell><cell>2006</cell><cell>employment</cell><cell>gender</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_0">Direct vs. indirect discrimination is more common in EU legislative frameworks -see, e.g., Directive 2006/54/EC and Directive 2000/43/EC, and also<ref type="bibr" target="#b20">[21]</ref> -while U.S. anti-discrimination laws rely on a similar distinction between disparate treatment and disparate impact [see, e.g.,<ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b10">11]</ref>.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_1">See, e.g., Directive 2006/54/EC that explicitly refers to "objective justification", or US Civil Rights Act that talks about "business necessities".</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_2">Notice that this is a notion of process fairness, and is similar, in spirit, to the "feature-apriori fairness" introduced by Grgic-Hlaca et al.<ref type="bibr" target="#b23">[24]</ref>.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Health and Digital Executive Agency (HaDEA). Neither the European Union nor the granting authority can be held responsible for them. Grant Agreement no. 101120763 -TANGO.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Fair enough? a map of the current limitations of the requirements to have fair algorithms</title>
		<author>
			<persName><forename type="first">D</forename><surname>Regoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Castelnovo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Inverardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Nanino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Penco</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2311.12435" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Can We Trust Fair-AI?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Alvarez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pugnana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>State</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Turini</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v37i13.26798</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="15421" to="15430" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Shahabi</surname></persName>
		</author>
		<idno type="DOI">10.1137/1.9781611977653.ch110</idno>
		<title level="m">Missed Opportunities in Fair AI</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="961" to="964" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Inherent limitations of ai fairness</title>
		<author>
			<persName><forename type="first">M</forename><surname>Buyl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">De</forename><surname>Bie</surname></persName>
		</author>
		<idno type="DOI">10.1145/3624700</idno>
		<ptr target="https://doi.org/10.1145/3624700" />
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">67</biblScope>
			<biblScope unit="page" from="48" to="55" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A sociotechnical view of algorithmic fairness</title>
		<author>
			<persName><forename type="first">M</forename><surname>Dolata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Feuerriegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Schwabe</surname></persName>
		</author>
		<idno type="DOI">10.1111/isj.12370</idno>
		<ptr target="https://onlinelibrary.wiley.com/doi/pdf/10.1111/isj.12370" />
	</analytic>
	<monogr>
		<title level="j">Information Systems Journal</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="754" to="818" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Fairness and abstraction in sociotechnical systems</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Selbst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Boyd</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Friedler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Venkatasubramanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vertesi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3287560.3287598</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* &apos;19</title>
				<meeting>the Conference on Fairness, Accountability, and Transparency, FAT* &apos;19<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="59" to="68" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Emergent unfairness in algorithmic fairness-accuracy trade-off research</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">F</forename><surname>Cooper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Abrams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Na</surname></persName>
		</author>
		<idno type="DOI">10.1145/3461702.3462519</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES &apos;21</title>
				<meeting>the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES &apos;21<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="46" to="54" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse, Information</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Hoffmann</surname></persName>
		</author>
		<idno type="DOI">10.1080/1369118X.2019.1573912</idno>
	</analytic>
	<monogr>
		<title level="j">Communication &amp; Society</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="900" to="915" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Fairwashing: the risk of rationalization</title>
		<author>
			<persName><forename type="first">U</forename><surname>Aivodji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Arai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Fortineau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gambs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tapp</surname></persName>
		</author>
		<ptr target="https://proceedings.mlr.press/v97/aivodji19a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 36th International Conference on Machine Learning</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Chaudhuri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Salakhutdinov</surname></persName>
		</editor>
		<meeting>the 36th International Conference on Machine Learning<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">97</biblScope>
			<biblScope unit="page" from="161" to="170" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A12012P%2FTXT" />
		<title level="m">The European Parliament, the Council and the Commission, Charter of Fundamental Rights of the European Union</title>
				<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Narayanan</surname></persName>
		</author>
		<ptr target="http://www.fairmlbook.org" />
		<title level="m">Fairness and Machine Learning: Limitations and Opportunities</title>
				<imprint>
			<publisher>fairmlbook.org</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Big data&apos;s disparate impact</title>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Selbst</surname></persName>
		</author>
		<idno type="DOI">10.15779/Z38BG31</idno>
	</analytic>
	<monogr>
		<title level="j">California law review</title>
		<imprint>
			<biblScope unit="page" from="671" to="732" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Subverting machines, fluctuating identities: Re-learning human categorization</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>McKee</surname></persName>
		</author>
		<idno type="DOI">10.1145/3531146.3533161</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;22</title>
				<meeting>the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;22<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1005" to="1015" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Multi-dimensional discrimination in law and machine learning - a comparative overview</title>
		<author>
			<persName><forename type="first">A</forename><surname>Roy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Horstmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ntoutsi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3593013.3593979</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;23</title>
				<meeting>the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;23<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="89" to="100" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Are &quot;intersectionally fair&quot; ai algorithms really fair to women of color? a philosophical analysis</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Kong</surname></persName>
		</author>
		<idno type="DOI">10.1145/3531146.3533114</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;22</title>
				<meeting>the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;22<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="485" to="494" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<ptr target="https://commission.europa.eu/aid-development-cooperation-fundamental-rights/your-rights-eu/know-your-rights/equality/non-discrimination_en" />
		<title level="m">The European Commission, Directorate-General for Communication, Non-discrimination</title>
				<imprint>
			<date type="published" when="2023-08-22">2023-08-22</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<ptr target="https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000L0043:en:HTML" />
		<title level="m">The Council of The European Union, Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin</title>
				<imprint>
			<date type="published" when="2000-06-29">June 2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<ptr target="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32000L0078" />
		<title level="m">The Council of The European Union, Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation</title>
				<imprint>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<ptr target="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32004L0113" />
		<title level="m">The Council of The European Union, Council Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services</title>
				<imprint>
			<date type="published" when="2004-12-13">December 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<ptr target="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32006L0054" />
		<title level="m">The European Parliament and the Council of The European Union, Directive 2006/54/EC of 5 July 2006 on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation (recast)</title>
				<imprint>
			<date type="published" when="2006-07-05">July 2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Substantive equality</title>
		<author>
			<persName><forename type="first">C</forename><surname>Barnard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hepple</surname></persName>
		</author>
		<idno type="DOI">10.1017/S0008197300000246</idno>
	</analytic>
	<monogr>
		<title level="j">The Cambridge Law Journal</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="562" to="585" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m">Civil Rights Act of 1964, Public Law 88-352, 78 Stat. 241</title>
				<imprint>
			<publisher>U.S. Government Publishing Office</publisher>
			<date type="published" when="1964">1964</date>
		</imprint>
	</monogr>
	<note>As Amended Through P.L. 114-95, December 10, 2015</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Counterfactual fairness</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Kusner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Loftus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Silva</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper_files/paper/2017/hash/1271a7029c9df08643b631b02cf9e116-Abstract.html" />
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">30</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">The case for process fairness in learning: Feature selection for fair decision making</title>
		<author>
			<persName><forename type="first">N</forename><surname>Grgic-Hlaca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Zafar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Gummadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Weller</surname></persName>
		</author>
		<ptr target="https://www.mlandthelaw.org/papers/grgic.pdf" />
	</analytic>
	<monogr>
		<title level="m">NIPS symposium on machine learning and the law</title>
				<meeting><address><addrLine>Barcelona, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">11</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">A clarification of the nuances in the fairness metrics landscape</title>
		<author>
			<persName><forename type="first">A</forename><surname>Castelnovo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Crupi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Regoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">G</forename><surname>Penco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Cosentini</surname></persName>
		</author>
		<idno type="DOI">10.1038/s41598-022-07939-1</idno>
	</analytic>
	<monogr>
		<title level="j">Scientific Reports</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">4209</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Algorithmic fairness: Choices, assumptions, and definitions</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Potash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>D'Amour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lum</surname></persName>
		</author>
		<idno type="DOI">10.1146/annurev-statistics-042720-125902</idno>
	</analytic>
	<monogr>
		<title level="j">Annual Review of Statistics and Its Application</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="141" to="163" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">On the applicability of machine learning fairness notions</title>
		<author>
			<persName><forename type="first">K</forename><surname>Makhlouf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhioua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Palamidessi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3468507.3468511</idno>
	</analytic>
	<monogr>
		<title level="j">SIGKDD Explor. Newsl</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="14" to="23" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Machine learning fairness notions: Bridging the gap with real-world applications</title>
		<author>
			<persName><forename type="first">K</forename><surname>Makhlouf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhioua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Palamidessi</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ipm.2021.102642</idno>
		<ptr target="https://doi.org/10.1016/j.ipm.2021.102642" />
	</analytic>
	<monogr>
		<title level="j">Information Processing &amp; Management</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page">102642</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Promises and pitfalls of algorithm use by state authorities</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Haeri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hartmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sirsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Wenzelburger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Zweig</surname></persName>
		</author>
		<idno type="DOI">10.1007/s13347-022-00528-0</idno>
	</analytic>
	<monogr>
		<title level="j">Philosophy &amp; Technology</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page">33</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Scoping fairness objectives and identifying fairness metrics for recommender systems: The practitioners&apos; perspective</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Beattie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Cramer</surname></persName>
		</author>
		<idno type="DOI">10.1145/3543507.3583204</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM Web Conference 2023</title>
				<meeting>the ACM Web Conference 2023<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="3648" to="3659" />
		</imprint>
	</monogr>
	<note>WWW &apos;23</note>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">The possibility of fairness: Revisiting the impossibility theorem in practice</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bynum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Drushchak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zakharchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rosenblatt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Stoyanovich</surname></persName>
		</author>
		<idno type="DOI">10.1145/3593013.3594007</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;23</title>
				<meeting>the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;23<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="400" to="422" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Causal fairness analysis: A causal toolkit for fair machine learning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Plečko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bareinboim</surname></persName>
		</author>
		<idno type="DOI">10.1561/2200000106</idno>
	</analytic>
	<monogr>
		<title level="j">Foundations and Trends® in Machine Learning</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="304" to="589" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">What&apos;s sex got to do with fair machine learning?</title>
		<author>
			<persName><forename type="first">L</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kohler-Hausmann</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2006.01770</idno>
		<idno type="arXiv">arXiv:2006.01770</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
