<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">&quot;20% Increase in fairness for Black applicants&quot;: A Critical Examination of Fairness Measurements Offered by Startups</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Corinna</forename><surname>Hertweck</surname></persName>
							<email>corinna.hertweck@zhaw.ch</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Zurich</orgName>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="institution">Zurich University of Applied Sciences</orgName>
								<address>
									<settlement>Zurich</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maya</forename><surname>Guido</surname></persName>
							<email>maya.guido@uzh.ch</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Zurich</orgName>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<address>
									<settlement>Zurich</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<address>
									<settlement>Mainz</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">&quot;20% Increase in fairness for Black applicants&quot;: A Critical Examination of Fairness Measurements Offered by Startups</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">B18BB823854AB0127FD6DCC8069C298C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>fairness</term>
					<term>observability</term>
					<term>startups</term>
					<term>fairness metrics</term>
					<term>fairness criteria</term>
					<term>demographic parity</term>
					<term>statistical parity</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Companies using machine learning are increasingly obligated to integrate fairness considerations, often driven by regulatory imperatives and public discourse. This has given rise to a startup ecosystem of companies that focus on, or at least integrate, fairness measurement in their ML observability platforms. However, fairness is a complex concept, and many questions about it remain open in research. We therefore investigate how startups deal with this complexity and present preliminary results of our ongoing analysis of the fairness startup landscape. In our analysis, we review publicly available material (such as websites) from these companies. We find two notable gaps: (1) the gap between fairness measurement in the algorithmic fairness literature and what startups actually implement, and (2) the gap between the claims made by these startups and their actual practices. Based on our findings, we make recommendations for academia, policymakers, and industry stakeholders to advance the cause of fairness in machine learning collaboratively.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>With the increasing use of machine learning comes an increasing awareness of potential discrimination through automated decision-making systems. This has led to more regulation in this space (e.g., the EU AI Act <ref type="bibr" target="#b0">[1]</ref>) and thereby to more pressure on companies that use machine learning. Consequently, ML observability platforms are starting to incorporate fairness metrics into their offerings; some even make fairness their primary concern. However, it is unclear whether these platforms' claims match what they can actually offer, especially since the field of algorithmic fairness still has many open research questions. Inspired by <ref type="bibr" target="#b1">[2]</ref>, we evaluate these platforms' "claims and practices". Our focus is specifically on startups that integrate some form of off-the-shelf fairness measurement into their platforms. We do not consider consulting companies that do not offer stand-alone platforms and instead provide services such as consultation or manual audits; for an overview of the AI audit ecosystem, we refer readers to <ref type="bibr" target="#b2">[3]</ref>. We also do not consider open source platforms, which <ref type="bibr" target="#b3">[4]</ref> has reviewed. Our goal is to provide an overview of the fairness measurement startup ecosystem and to discuss how these startups implement fairness measurement in practice. We aim to highlight the gaps between current implementations and existing research and to suggest improvements in both research and implementation that can guide algorithmic fairness in practice.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methods</head><p>We collected startups specializing in fairness evaluations from "The ethical AI database", Google search and Crunchbase, using a set of predefined keywords related to algorithmic fairness. We then filtered this list for startups that claim to offer fairness metrics, which resulted in a list of 21 startups that we are currently investigating. Since their platforms are proprietary products, we could not easily access them to check which types of fairness measurement are implemented. We therefore rely on the startups' publicly available material, such as their websites, documentation, white papers and video material. We review this material to document how these startups implement fairness measurement and also note the claims they make about their products. The startups that we have analyzed so far are Arize <ref type="bibr" target="#b4">[5]</ref>, Etiq AI <ref type="bibr" target="#b5">[6]</ref>, FairPlay <ref type="bibr" target="#b6">[7]</ref>, Fiddler AI <ref type="bibr" target="#b7">[8]</ref>, Mona <ref type="bibr" target="#b8">[9]</ref> and SolasAI <ref type="bibr" target="#b9">[10]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Preliminary Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Fairness Measurement</head><p>For Fiddler AI, Arize and Etiq AI, we found a clear list of the implemented fairness criteria (see <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>). FairPlay uses a single metric in all their reports, which we therefore assume is the only one their platform measures, although they mention two more metrics in the FAQ section of their website <ref type="bibr" target="#b13">[14]</ref>. For Mona and SolasAI, we could not find documentation listing the implemented fairness metrics, so access to the platforms would be required to evaluate this further. Note that these platforms also implement other metrics (e.g., label distribution) for evaluating different aspects of a model. However, we focus specifically on fairness metrics and on how users are guided to choose between them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Focus on standard group fairness criteria</head><p>Of the platforms that document which concrete fairness criteria are implemented, all but one of the implemented criteria belong to the group fairness category. Only Etiq AI mentions individual fairness <ref type="bibr" target="#b12">[13]</ref>; however, there is no explanation of how this is implemented or how the issue of defining similarity between individuals is addressed. All other implemented fairness metrics are group fairness metrics. This clear majority resembles what we see in the open source landscape <ref type="bibr" target="#b3">[4]</ref>. We assume the reason is that group fairness is very easy to implement and requires no further input from users, whereas individual fairness or causal definitions of fairness require domain-specific input from the user.</p></div>
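The difference in required input can be illustrated with a small sketch (our own, hypothetical illustration; all names are ours, not any platform's code): a group fairness metric is computable from decisions and group membership alone, whereas an individual fairness check cannot even be stated without a user-supplied, domain-specific similarity function.

```python
# Illustrative contrast (our own sketch): group fairness needs only the data,
# individual fairness additionally needs a domain-specific similarity measure.

def selection_rate_gap(decisions, groups):
    """Group fairness: computable from decisions and group labels alone."""
    counts = {}
    for d, g in zip(decisions, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + d)
    per_group = {g: pos / n for g, (n, pos) in counts.items()}
    return max(per_group.values()) - min(per_group.values())

def individual_fairness_violations(scores, similarity, tolerance=0.1):
    """Individual fairness: `similarity` must come from a domain expert,
    which is exactly the extra burden on users described in the text."""
    violations = []
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            # similar individuals (similarity > 0.9) should get similar scores
            if similarity(i, j) > 0.9 and abs(scores[i] - scores[j]) > tolerance:
                violations.append((i, j))
    return violations
```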
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Implemented fairness criteria</head><p>Let us now summarize which fairness criteria we know to be implemented.<ref type="foot" target="#foot_0">1</ref></p><p>• Statistical parity / demographic parity: the selection rate (probability of receiving a positive decision) is equal across socio-demographic groups; implemented by all four startups</p><p>• Equal opportunity: the true positive rate is equal across socio-demographic groups; implemented by three startups (Fiddler AI, Arize, Etiq AI)</p><p>• False positive rate parity: the false positive rate is equal across socio-demographic groups; implemented by one startup (Arize)</p><p>• Equalized odds: both equal opportunity and false positive rate parity<ref type="foot" target="#foot_1">2</ref> are fulfilled; implemented by one startup (Etiq AI)</p><p>• Group benefit parity: the ratio of positive decisions to positive labels is equal across socio-demographic groups; implemented by one startup (Fiddler AI)</p><p>• Denial odds parity: the ratio of negative decisions to positive decisions is equal across socio-demographic groups. The ratio of two groups' denial odds is described as a fairness metric in FairPlay's FAQ section <ref type="bibr" target="#b13">[14]</ref>, but it is doubtful whether it is actually implemented.</p><p>The first four of these criteria are well-known group fairness criteria that are commonly found in the literature. However, they have also received criticism: a common theme is that they only consider statistics relating to the decision itself, not the consequences of the decision <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19]</ref>, even though what is relevant for fairness is precisely how a decision affects decision subjects. This mismatch means that enforcing some fairness metrics can hurt marginalized groups, as shown in <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b18">19]</ref>. There has thus been a call for welfare-based fairness criteria, which the analyzed tools have not yet implemented.</p></div>
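To make the criteria above concrete, the per-group quantities they compare can be sketched as follows (a minimal illustration of our own, assuming binary decisions and labels; the startups' actual implementations are not public, so this is not their code):

```python
# Minimal sketch (ours, not any startup's implementation) of the per-group
# quantities behind the criteria listed above, for binary decisions/labels.

def group_rates(decisions, labels, groups):
    """Return per-group selection rate, TPR, FPR, group benefit, denial odds."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        d = [decisions[i] for i in idx]
        y = [labels[i] for i in idx]
        pos, neg = sum(y), len(y) - sum(y)            # actual positives / negatives
        tp = sum(di for di, yi in zip(d, y) if yi == 1)
        fp = sum(di for di, yi in zip(d, y) if yi == 0)
        out[g] = {
            "selection_rate": sum(d) / len(d),        # statistical/demographic parity
            "tpr": tp / pos if pos else None,         # equal opportunity
            "fpr": fp / neg if neg else None,         # false positive rate parity
            "group_benefit": sum(d) / pos if pos else None,  # pos. decisions / pos. labels
            "denial_odds": (len(d) - sum(d)) / sum(d) if sum(d) else None,  # neg./pos. decisions
        }
    return out
```

Statistical parity compares `selection_rate` across groups, equal opportunity compares `tpr`, false positive rate parity compares `fpr`, and equalized odds requires both `tpr` and `fpr` to match.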
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lack of guidance</head><p>Choosing an appropriate fairness metric encodes multiple value judgments about the situation at hand. This moral choice is difficult to make, and particularly hard for those unfamiliar with discussions of fairness and justice, which we would expect to be the case for practitioners using these platforms. We therefore sought documentation from all platforms that guides users in choosing fairness metrics. Along with the specification of the implemented fairness metrics, Fiddler AI, Arize, Etiq AI and FairPlay all provide further information on these metrics. However, in three cases (Fiddler AI, Etiq AI and FairPlay), this information is purely formal and descriptive: it simply describes the statistical metric in words instead of a formula. What is provided is not actual guidance, but something that merely appears to be guidance at first glance. See, for example, Fiddler AI's "guidance" on two fairness criteria (the others are described similarly) in <ref type="bibr" target="#b10">[11]</ref>:</p><p>• Group benefit: "If the two groups are treated equally, the group benefit should be the same." • Equal opportunity: "If the two groups are treated equally, the TPR should be the same." Wanting groups to be treated equally seems like a good goal, which according to Fiddler AI would mean having to fulfill both the group benefit and the equal opportunity criterion, which Fiddler AI (incorrectly) claims to be "impossible".3 The given information is not only confusing to users but also not backed up by research.</p><p>In a blog post <ref type="bibr" target="#b22">[23]</ref>, Arize provides a decision tree through which users are supposed to find appropriate fairness criteria. This tree strongly resembles the one proposed by Aequitas [24].4 With questions such as "Does your business problem require fairness to address disparate representation or disparate errors in your ML model?", the tree would (similar to Aequitas' tree, cf. <ref type="bibr" target="#b3">[4]</ref>) still be difficult to use for an uninitiated user of a fairness toolkit, as it assumes that the user already knows what fairness requires in their context.</p><p>With access limited to the platforms' websites and documentation, it is unclear whether more guidance is available on the platforms themselves. Given the unclear documentation, we do not expect this to be the case.</p></div>
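For illustration, the kind of decision tree such guidance amounts to can be sketched as follows (a hypothetical reconstruction; the question wording, function name and metric mapping are ours, not Arize's or Aequitas'). Note that answering its first question already presupposes knowing whether representation or errors matter in one's context, which is exactly the difficulty discussed above:

```python
# Hypothetical sketch (ours) of a metric-selection decision tree of the kind
# offered as guidance; it maps coarse answers about the use case to a metric.

def suggest_metric(concern: str, errors_that_harm: str = "") -> str:
    """Suggest a group fairness metric from coarse answers about the use case.
    The tree still assumes the user already knows which disparity matters."""
    if concern == "representation":
        return "statistical parity"           # equal selection rates
    if concern == "errors":
        if errors_that_harm == "wrongly denied":
            return "equal opportunity"        # equal true positive rates
        if errors_that_harm == "wrongly selected":
            return "false positive rate parity"
        return "equalized odds"               # both error rates equal
    raise ValueError("answer 'representation' or 'errors'")
```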
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Critical View on Claims</head><p>In our analysis, we came across various claims about startups' fairness measurement and bias mitigation capabilities. Some startups give the impression that fairness is fully quantifiable, with a definite metric to measure bias, even though a single fairness metric cannot capture the complexity of fairness <ref type="bibr" target="#b23">[25]</ref>. For bias mitigation, it is common to insinuate that mitigation techniques are a solution or fix for discrimination, a techno-solutionist message <ref type="bibr" target="#b24">[26,</ref><ref type="bibr" target="#b25">27]</ref>. One example that combines both is the following claim found on FairPlay's website, advertising why customers should use FairPlay's platform: "20% Increase in fairness for Black applicants" <ref type="bibr" target="#b26">[28]</ref>. Such claims carry the risk that third parties using these platforms rely on the startups' claims to ethics-wash their own products.</p></div>
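Why a single number cannot capture fairness can be seen in a toy example of our own construction: the same set of decisions perfectly satisfies statistical parity while maximally violating equal opportunity, so a claim like a "20% increase in fairness" is only meaningful relative to one chosen metric.

```python
# Toy example (our construction): identical selection rates across groups
# (statistical parity holds) while true positive rates differ maximally
# (equal opportunity is violated).

labels_a, decisions_a = [1, 1, 0, 0], [1, 1, 0, 0]   # group A: all qualified selected
labels_b, decisions_b = [1, 1, 0, 0], [0, 0, 1, 1]   # group B: only unqualified selected

sel_a = sum(decisions_a) / len(decisions_a)   # 0.5
sel_b = sum(decisions_b) / len(decisions_b)   # 0.5 -> statistical parity holds

tpr_a = sum(d for d, y in zip(decisions_a, labels_a) if y) / sum(labels_a)  # 1.0
tpr_b = sum(d for d, y in zip(decisions_b, labels_b) if y) / sum(labels_b)  # 0.0

print(sel_a == sel_b)   # True: "fair" according to statistical parity
print(tpr_a - tpr_b)    # 1.0: maximal equal opportunity gap
```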
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Discussion</head><p>As we have seen, most implemented fairness metrics are standard group fairness metrics. While group fairness metrics have the advantage of being easy to implement, this also bears the danger that they are used without much reflection. This issue is worsened by the platform providers not offering any moral guidance for choosing between fairness metrics. Moreover, many startups make misleading claims about their fairness capabilities that promote a techno-solutionist view, reducing fairness to a single number. Although some startups have shown admirable intentions regarding practical fairness solutions, they are inherently driven by customer demand, which in this case is often a reaction to prevailing regulations. Therefore, achieving substantive fairness must be a collective responsibility that extends beyond these platforms and encompasses policymakers, researchers, industry and society at large.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Note that because we only have access to the documentation and white papers, but not the platforms themselves, there could be discrepancies that we cannot account for.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Etiq AI actually uses equal opportunity and true negative rate parity, but by fulfilling true negative rate parity, one also fulfills false positive rate parity.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">Fiddler AI writes: "An important point to make is that it's impossible to optimize all the metrics at the same time. This is something to keep in mind when analyzing fairness metrics." With this, Fiddler AI hints at the impossibility theorems <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22]</ref>, which mathematically show that specific criteria cannot be fulfilled at the same time under certain conditions. However, these theorems only cover certain metrics and, for example, do not include group benefit.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">We note, though, that the work of Aequitas is not cited by Arize.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<ptr target="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52021PC0206" />
		<title level="m">Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021) 206 final)</title>
				<imprint>
			<date type="published" when="2021">2021. 2024-01-03</date>
		</imprint>
	</monogr>
	<note>European Commission</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Mitigating bias in algorithmic hiring: Evaluating claims and practices</title>
		<author>
			<persName><forename type="first">M</forename><surname>Raghavan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kleinberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Levy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 conference on fairness, accountability, and transparency</title>
				<meeting>the 2020 conference on fairness, accountability, and transparency</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="469" to="481" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem</title>
		<author>
			<persName><forename type="first">S</forename><surname>Costanza-Chock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">D</forename><surname>Raji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Buolamwini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency</title>
				<meeting>the 2022 ACM Conference on Fairness, Accountability, and Transparency</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1571" to="1583" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The landscape and gaps in open source fairness toolkits</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S A</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 CHI conference on human factors in computing systems</title>
				<meeting>the 2021 CHI conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="13" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">The AI Observability &amp; LLM Evaluation Platform</title>
		<author>
			<persName><forename type="first">Arize</forename></persName>
		</author>
		<ptr target="https://arize.com/" />
		<imprint>
			<date type="published" when="2024-03-28">2024. 2024-03-28</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><surname>Etiq AI</surname></persName>
		</author>
		<ptr target="https://etiq.ai/" />
		<title level="m">ML Testing For Everyone</title>
				<imprint>
			<date type="published" when="2024-03-28">2024. 2024-03-28</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><surname>FairPlay</surname></persName>
		</author>
		<ptr target="https://fairplay.ai/" />
		<title level="m">Fairness for People, Profits, and Progress</title>
				<imprint>
			<date type="published" when="2024-03-28">2024. 2024-03-28</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">AI Observability</title>
		<author>
			<persName><surname>Fiddler AI</surname></persName>
		</author>
		<ptr target="https://www.fiddler.ai/" />
		<imprint>
			<date type="published" when="2024-03-28">2024. 2024-03-28</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">Mona</forename></persName>
		</author>
		<ptr target="https://www.monalabs.io/" />
		<title level="m">The Most Intelligent AI Monitoring Platform</title>
				<imprint>
			<date type="published" when="2023">2023. 2024-03-28</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><surname>SolasAI</surname></persName>
		</author>
		<ptr target="https://www.solas.ai/" />
		<title level="m">Reduce your algorithmic discrimination regulatory, legal and reputational risk</title>
				<imprint>
			<date type="published" when="2022">2022. 2024-03-28</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Fairness</title>
		<author>
			<persName><surname>Fiddler AI</surname></persName>
		</author>
		<ptr target="https://docs.fiddler.ai/docs/fairness" />
		<imprint>
			<date type="published" when="2023-11-13">2023. 2023-11-13</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Bias Tracing (Fairness)</title>
		<author>
			<persName><forename type="first">Arize</forename></persName>
		</author>
		<ptr target="https://docs.arize.com/arize/tracing-and-troubleshooting/11.-bias-tracing-fairness" />
		<imprint>
			<date type="published" when="2023-11-25">2023. 2023-11-25</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Bias</title>
		<author>
			<persName><surname>Etiq AI</surname></persName>
		</author>
		<ptr target="https://docs.etiq.ai/scan-types/bias" />
		<imprint>
			<date type="published" when="2023-12-01">2023. 2023-12-01</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Frequently Asked Questions</title>
		<author>
			<persName><surname>FairPlay</surname></persName>
		</author>
		<ptr target="https://fairplay.ai/faq/" />
		<imprint>
			<date type="published" when="2024-02-26">2024. 2024-02-26</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Bridging machine learning and mechanism design towards algorithmic fairness</title>
		<author>
			<persName><forename type="first">J</forename><surname>Finocchiaro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Maio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Monachou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">K</forename><surname>Patro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raghavan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-A</forename><surname>Stoica</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tsirtsis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</title>
				<meeting>the 2021 ACM Conference on Fairness, Accountability, and Transparency</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="489" to="503" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Fairness in machine learning: Lessons from political philosophy</title>
		<author>
			<persName><forename type="first">R</forename><surname>Binns</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v81/binns18a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st Conference on Fairness, Accountability and Transparency</title>
				<editor>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Friedler</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Wilson</surname></persName>
		</editor>
		<meeting>the 1st Conference on Fairness, Accountability and Transparency<address><addrLine>PMLR, New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="page" from="149" to="159" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">On the moral justification of statistical parity</title>
		<author>
			<persName><forename type="first">C</forename><surname>Hertweck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Heitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Loi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3442188.3445936</idno>
		<ptr target="https://doi.org/10.1145/3442188.3445936" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;21</title>
				<meeting>the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT &apos;21<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="747" to="757" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Does the End Justify the Means? On the Moral Justification of Fairness-Aware Machine Learning</title>
		<author>
			<persName><forename type="first">H</forename><surname>Weerts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Royakkers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pechenizkiy</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2202.08536</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Not so fair: The impact of presumably fair machine learning models</title>
		<author>
			<persName><forename type="first">M</forename><surname>Jorgensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Richert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Black</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Criado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Such</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society</title>
				<meeting>the 2023 AAAI/ACM Conference on AI, Ethics, and Society</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="297" to="311" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Fair classification and social welfare</title>
		<author>
			<persName><forename type="first">L</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</title>
				<meeting>the 2020 Conference on Fairness, Accountability, and Transparency</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="535" to="545" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Kleinberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mullainathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raghavan</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1609.05807</idno>
		<title level="m">Inherent trade-offs in the fair determination of risk scores</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Fair prediction with disparate impact: A study of bias in recidivism prediction instruments</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chouldechova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Big data</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="153" to="163" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Evaluating Model Fairness</title>
		<author>
			<persName><forename type="first">S.-A</forename><surname>Delucia</surname></persName>
		</author>
		<ptr target="https://arize.com/blog/evaluating-model-fairness/" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Measurement and fairness</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Z</forename><surname>Jacobs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 ACM conference on fairness, accountability, and transparency</title>
				<meeting>the 2021 ACM conference on fairness, accountability, and transparency</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="375" to="385" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Morozov</surname></persName>
		</author>
		<title level="m">To save everything, click here: The folly of technological solutionism</title>
				<imprint>
			<publisher>PublicAffairs</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Roles for computing in social change</title>
		<author>
			<persName><forename type="first">R</forename><surname>Abebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kleinberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raghavan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Robinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 conference on fairness, accountability, and transparency</title>
				<meeting>the 2020 conference on fairness, accountability, and transparency</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="252" to="260" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><surname>FairPlay</surname></persName>
		</author>
		<ptr target="https://fairplay.ai/for-banks/" />
		<title level="m">Increase Fairness, Boost Profits</title>
				<imprint>
			<date type="published" when="2024-03-28">2024. 2024-03-28</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
