<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Rethinking Bias and Fairness in AI Through the Lens of Gender Studies</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Gabriele</forename><surname>Nino</surname></persName>
							<email>gabriele.nino@uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">DIRIUM Dept</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>Piazza Umberto I</addrLine>
									<postCode>70121</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Francesca</forename><forename type="middle">Alessandra</forename><surname>Lisi</surname></persName>
							<email>francescaalessandra.lisi@uniba.it</email>
							<affiliation key="aff1">
								<orgName type="department">DiB Dept. &amp; CISCuG</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>via E. Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Rethinking Bias and Fairness in AI Through the Lens of Gender Studies</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">4EF5F99D179B08BF3F470CC8E8AD01B2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:21+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Gender Bias</term>
					<term>Performative Theory</term>
					<term>AI Ethics</term>
					<term>Algorithmic Fairness</term>
					<term>Gender Studies</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper examines the main approaches that have been put forth to counter the emergence of biases in AI systems, namely causal reasoning, counterfactual reasoning, and constructivist methodology. The objective is to demonstrate the necessity of supplementing these technical solutions with a more comprehensive social analysis of the genesis of discriminatory practices. To investigate this sphere, we leverage results from the field of Gender Studies. In particular, we apply the theory of gender performativity as theorized by Judith Butler. Through an analysis of the notorious case of the COMPAS system for predictive justice, this theory illustrates how AI functions within the social fabric, manifesting patriarchal configurations of gender. This approach enables an expansion of the interpretation of the concept of fairness, thereby reflecting the complex dynamics of gender production. In conclusion, the gender dimension needs to be reconsidered not as an individual feature but as a performative process. Moreover, this reconsideration enables the identification of pivotal issues that must be addressed during the design, development, testing, and evaluation phases of AI systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Over the past decades, Machine Learning (ML) technologies have spread at great speed, helping to bring about a process of transformation and change in the social fabric. ML is a major area of the Artificial Intelligence (AI) field of study which concerns the design and the development of algorithms that can improve their performance on specific tasks (especially classification and prediction) once trained on large amounts of data <ref type="bibr" target="#b0">[1]</ref>. The interest in ML stems mainly from the fact that ML systems are the first form of technology ever developed that has a large reserve of agency and autonomy <ref type="bibr" target="#b1">[2]</ref>. Consider, for example, the famous case of ChatGPT, developed by OpenAI, a generative AI system capable of performing certain tasks by understanding natural language <ref type="bibr" target="#b2">[3]</ref>. But that is not all. The vast computational power of ML allows it to analyze large amounts of data, surpassing human cognitive abilities in terms of accuracy, speed and processing power <ref type="bibr" target="#b3">[4]</ref>. As a result, ML is opening up a wide range of applications, from medicine to architecture, from engineering to finance, and so on.</p><p>Nevertheless, the outcomes yielded by these algorithms are not always equitable or transparent. In some instances, they may even serve to exacerbate the existing prevalence of discriminatory practices within the social fabric. The potential for software to malfunction, to be biased, and thus to perpetuate social discrimination is currently the subject of rigorous critical examination by some institutional bodies <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b7">[8]</ref>. 
Indeed, since 2018, the European Union Agency for Fundamental Rights (FRA) has published a series of reports examining the various forms of discrimination perpetuated by these technologies. In the survey published in 2020, we find the following: "Discrimination is a crucial topic when it comes to the use of AI, because the very purpose of machine learning algorithms is to categorise, classify and separate" <ref type="bibr">[8, p. 68]</ref>. This aspect is of paramount importance as it links the issue of discrimination to "pattern recognition," which is the process of identifying clusters that represent the behavioral patterns of individuals by modeling their preferences. But why is it possible for forms of discrimination to lurk behind this process?</p><p>The relationship between the construction of identity and the social value attached to it has been a long-standing topic of inquiry within the fields of Feminist Theory and Gender Studies <ref type="bibr" target="#b8">[9]</ref>. Indeed, it has been observed that individuals who are identified as women, people of color, homosexuals, or indigenous people are situated within a symbolic order that ascribes a negative value to them <ref type="bibr" target="#b9">[10]</ref>, <ref type="bibr" target="#b10">[11]</ref>. To be different from the universal subject of Cartesian inspiration is to be "worth less" <ref type="bibr" target="#b11">[12]</ref>, <ref type="bibr" target="#b12">[13]</ref>. For example, women have historically been regarded as non-rational beings, as a consequence of which they have been perceived as too sensitive, as shaped by the oblative spirit that defines them <ref type="bibr" target="#b13">[14]</ref>. This has resulted in the conclusion that they are not worthy of access to the universality of male reason. As Simone de Beauvoir posits: "Woman is seen as different from man, not man as different from woman: She represents the inessential in relation to the essential. 
He is the Subject, the Absolute; she is the Other." <ref type="bibr">[15, p. 6]</ref>. In this sense, the American philosopher Judith Butler has highlighted that the patriarchal view produces an essential link of mutual exclusion, whereby Woman is the mimetic correlate of the universal signifier Man <ref type="bibr">[16, p. 10]</ref>. This dichotomous logic also results in the exclusion from the symbolic order of all forms of subjectivity that do not conform to gender binarism, including non-binary, trans*, and intersex subjectivities. In fact, according to the Italian philosopher Chiara Bottici, all these life forms can fall into the category of "second sexes" because "in comparison to cismen, women, two-spirited, third gender, and LGBTQI+ folks […] in the current predicament are all excluded from the category of "first sex," and that they are thus mainly the object rather than the perpetrators of gender violence" <ref type="bibr">[17, p. 275]</ref>. This is why identification practices can be dangerous and expose individuals to the risk of experiencing forms of discrimination and surveillance. If the social sphere is hierarchically structured, identification processes can lead to the marginalization or exclusion of certain individuals based on their relations with a particular group.</p><p>In the field of AI, this issue is addressed through the study and modeling of algorithmic fairness <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b18">[19]</ref>. It is widely acknowledged that there are various methods to guarantee the impartiality of software and to avert its potential for discriminatory outcomes. However, our objective is to draw upon a range of concepts derived from the field of Gender Studies to enhance the discourse on fairness with a sociological and philosophical lens. 
In particular, we wish to put forth a reinterpretation of the theory of gender performativity, as developed by Judith Butler in 1990 <ref type="bibr" target="#b15">[16]</ref>. She demonstrated how the categories we typically utilize to identify individuals, particularly those pertaining to gender and race, are not merely conceptual representations of reality; rather, they are instruments through which power relations are articulated <ref type="bibr" target="#b15">[16]</ref>. In other words, they are tools that shape our actions. Given the growing autonomy and capacity for action of ML algorithms, it is worth considering whether they might also be capable of conveying social norms that influence human behavior.</p><p>The objective is to present a more intricate definition of the gender dimension in the context of the discussion on algorithmic fairness in AI/ML. As will be demonstrated in the ensuing discussion, it cannot be reduced to a simple individual attribute <ref type="bibr" target="#b19">[20]</ref>; rather, it encompasses a multiplicity of references and domains that must be deployed and taken into account.</p><p>Discrimination processes are therefore highly complex, intersecting multiple vectors <ref type="bibr" target="#b20">[21]</ref>. Gender discrimination, racial discrimination, class discrimination, forms of ableism, ageism, and so on do not run parallel; rather, they reinforce each other dramatically. A feminist approach from Gender Studies to algorithmic fairness requires unpacking this level of complexity and using the computational power of ML algorithms to change the social structures that produce these forms of oppression <ref type="bibr" target="#b21">[22]</ref>, <ref type="bibr" target="#b22">[23]</ref>. It is imperative to integrate the concept of intersectionality into the concept of algorithmic fairness <ref type="bibr" target="#b23">[24]</ref>. 
This entails recognizing and addressing the various forms of social oppression that undermine the democratic structure of our societies. Sexism, racism, homophobia, and xenophobia are complex social processes that reinforce one another. The report commissioned by European Digital Rights (EDRi) in 2021 demonstrated how an excessive focus on the problem of debiasing in AI and ML systems has resulted in an "algorithm-centred view" that obscures the political scope behind the development of this software <ref type="bibr">[25, p. 50]</ref>. In this sense, the authors call for an intersectional approach that can enable us to address the complexity of the phenomena of discrimination. What the report does not explicate, however, is how the concept of intersectionality can be theorized and enacted in the case of ML. It remains true that intersectionality represents the most fundamental lens through which to examine the diverse ways in which these processes of discrimination and oppression converge. To this end, we believe that it is necessary to demonstrate that ML technologies operate in a performative way according to the gender performativity theory developed by Butler. In this sense, it is possible to describe how ML systems operate a process of signification of the body, inscribing in it a specific normative criterion. To summarize, then, the objective of this study is to introduce an intersectional perspective to biases in ML by reformulating the gender performativity theory developed by Butler.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.">Structure of the Paper</head><p>First of all, we present a comprehensive framework for elucidating the ways in which ML can potentially give rise to forms of bias and discrimination. In Section 2, we undertake a detailed examination of the ML loop, demonstrating how the learning process can be adversely affected by the presence of a multitude of forms of bias and spurious correlations. This, in turn, gives rise to a discussion of the inherent opacity of many models.</p><p>In response to the opacity of ML systems, a set of ethical and normative concepts has been developed under the name of Explainable AI (XAI). Fairness, Accountability and Transparency are the main requirements that need to be met in order to establish the ethicality of an AI system. In particular, in Section 3 we analyze, with a qualitative and historical approach, the concept of fairness and the main ways in which it has been formalized at a technical level. We look at causal reasoning, counterfactual reasoning, and the genealogical argument, and show the limitations of these approaches in relation to an adequate understanding of the social dimension of gender.</p><p>In Section 4, we leverage the method of gender performativity developed by Butler, applying it to the case of AI. This is done to integrate the social dimension within the sphere of fairness. At the conclusion of this analysis, we examine the COMPAS case, a software program utilized for calculating the probability of recidivism of a defendant, as an illustrative example of this method.</p><p>Finally, we demonstrate the need to redefine the ethical principles that guide the development and evaluation of AI in order to properly understand the multiplicity of ways in which the gender and racial dimensions are expressed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Ethical Problems in AI/ML</head><p>In order to properly frame the question of the propagation of forms of discrimination in ML, it is first necessary to understand its inner workings. As mentioned earlier, ML refers to computer programs that are able to improve their performance on a given set of tasks <ref type="bibr" target="#b0">[1]</ref>. But what does it mean in concrete terms for software to be able to learn to perform tasks and to improve its own performance?</p><p>First, the ML process can be represented as a loop <ref type="bibr" target="#b17">[18]</ref>. The first stage of the process is to measure and collect data relevant to understanding a phenomenon. This data is used to train a model to extract patterns and generalities. After the training phase, a program, if developed correctly, can make predictions about the chosen phenomenon. Often a system can also record external feedback to improve its performance.</p><p>We are thus faced with a circular process: a social phenomenon is isolated and made the object of measurement in order to train a model capable of making useful predictions and interacting with the phenomenon in question. So it becomes evident that ML operates at two levels: the technical and the social <ref type="bibr" target="#b25">[26]</ref>. Therein lies its agency <ref type="bibr" target="#b26">[27]</ref>. Nevertheless, it is important not to consider these two levels as ontologically distinct. As we show in more detail in Section 4.2, starting from the 1990s a conspicuous stream of work has demonstrated the impossibility of disconnecting these levels <ref type="bibr" target="#b27">[28]</ref>, <ref type="bibr" target="#b28">[29]</ref>, <ref type="bibr" target="#b29">[30]</ref>, <ref type="bibr" target="#b30">[31]</ref>, <ref type="bibr" target="#b31">[32]</ref>. 
Lucy Suchman in particular describes how every technological intervention produces a new configuration and organization of social life. For this reason, it is possible to understand the continuity and the homogeneity between the technical, the material, the social organization of the world, and their constitutive entanglement <ref type="bibr" target="#b32">[33]</ref>. In this sense, Orlikowski affirms that "the social and the material are considered to be inextricably related - there is no social that is not also material, and no material that is not also social" <ref type="bibr">[32, p. 1437]</ref>.</p><p>Sexism, racism and sexual discrimination against LGBT+ communities are the main forms of social inequality that affect and slow down the democratization processes of liberal societies <ref type="bibr" target="#b33">[34]</ref>. So, if the social sphere is made up of a series of unequal and discriminatory relationships between individuals, we must also look at the technological apparatus that can reinforce and spread these particular configurations of material life. In this sense, if ML algorithms are not designed, implemented and tested properly, they can exacerbate these differences, rather than mitigate or even eliminate them.</p><p>The philosopher Bernard Stiegler uses the Platonic term pharmakon to describe this dual mechanism <ref type="bibr" target="#b34">[35]</ref>. In ancient Greek, pharmakon means both poison and remedy <ref type="bibr" target="#b35">[36]</ref>. More generally, technology can be a means of promoting social justice or of destroying human bonds, as in the case of the Holocaust or the atomic bomb. So how does AI fit into this dynamic?</p><p>Each of the stages in the ML lifecycle can be affected by measurement or modelling errors that lead to the production of inaccurate results. These are commonly referred to as biases. 
Ferrara <ref type="bibr" target="#b36">[37]</ref> isolates at least four main forms of bias that can affect the proper functioning of ML, each of which is located at a precise stage in the loop illustrated above. Let's see what they are:</p><p>Representation bias is when a dataset does not correctly represent the social set of individuals. Sampling bias is when the training data does not consider the diversity of the population.</p><p>Measurement bias is when the data collection systematically over- or under-represents certain groups.</p><p>Algorithmic bias results from the design of an algorithm that may prioritize certain attributes, leading to unfair outcomes.</p><p>This brief analysis shows that the data collection and processing phases seem to be the most critical. In fact, it is well known how ML programs have extracted spurious forms of correlations between certain features from big data sets <ref type="bibr" target="#b37">[38]</ref>. One of the most striking cases in this sense is the Amazon recruitment software, which was developed by the American big tech company to automate and improve the efficiency of recruitment. The software, developed in 2015, was trained on the CVs of employees hired by the company over the previous decade. As the IT sector was then, and still is, male-dominated, the system gave lower scores to female candidates, creating a false correlation between a person's gender and their technical competence. Although the company tried to mitigate the problem, it decided to stop using the tool in 2017 <ref type="bibr" target="#b38">[39]</ref>.</p><p>But why is the problem of spurious correlation and its unfair results so difficult to eliminate that a multi-billion-dollar company like Amazon was forced to dismiss its program?</p><p>This brings us to the second major ethical issue raised by ML. Very often, ML algorithms are defined as opaque, meaning that it is impossible to follow their inner workings. 
The behavior of the software can only be judged and evaluated by its results <ref type="bibr" target="#b39">[40]</ref>.</p><p>In the 1960s, the Argentine-Canadian philosopher Mario Bunge isolated this problem under the concept of the black box. He says: «The constitution and structure of the box are altogether irrelevant to the approach under consideration, which is purely external or phenomenological. In other words, only the behavior of the system will be accounted for» <ref type="bibr">[41, p. 346]</ref>.</p><p>Finally, Wendy Chun has demonstrated that ML technologies can be defined as homophilic. In accordance with the Greek roots of the term, which mean love of the same, the author reveals how statistical inference performed by ML tends to reproduce the same, what has always been <ref type="bibr" target="#b41">[42]</ref>. In this light, one of the dangers of ML is its possible conservativeness, meaning that it can predict future phenomena only by replicating the past.</p><p>All these elements show that ML is not a neutral technology, but that it sometimes has a negative impact on people's lives. This is why, in recent decades, the parallel proliferation of AI systems has been accompanied by the drafting of countless ethical and legal documents aimed at regulating and controlling this field <ref type="bibr" target="#b42">[43]</ref>. In the following section, we analyze one of the main ethical approaches that have been developed to address the problem of opacity in systems and to provide possible oversight on ML's work.</p></div>
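The effect of sampling bias, one of the four forms of bias listed above, can be made concrete with a short sketch. The following Python fragment uses invented counts (they are not drawn from any real dataset, and the helper `share` is ours, not a library function) to show how a skewed data collection misrepresents a balanced population:

```python
# Hypothetical illustration of sampling bias: the population is balanced,
# but the collected training sample over-represents one group.
population = {"men": 5000, "women": 5000}   # ground truth: 50/50
sample = {"men": 800, "women": 200}         # skewed data collection

def share(counts, group):
    """Fraction of the counts belonging to one group."""
    return counts[group] / sum(counts.values())

true_share = share(population, "women")     # 0.5
sampled_share = share(sample, "women")      # 0.2

# A model trained on this sample "sees" women as a small minority,
# even though they are half of the population it will be applied to.
print(f"true share of women:    {true_share:.2f}")
print(f"share in training data: {sampled_share:.2f}")
```

Any statistical regularity the model extracts is then anchored to the skewed sample rather than to the population, which is one concrete mechanism behind the conservativeness Chun describes.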
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">A Methodology for Analyzing the Algorithmic Fairness in AI</head><p>The methodology employed to conduct the subsequent analyses reported in this article is exclusively historical and qualitative in nature. In other words, we have considered the concept of algorithmic fairness as a means of addressing the issue of discrimination in AI-based decision making. We begin by tracing the history of this concept, which did not originate in the field of computer science. From a qualitative perspective, the most significant techniques through which the concept of fairness has been formalized are analyzed. In this regard, causal and counterfactual reasoning have been identified as the two most prevalent methods for evaluating the fairness of an algorithm, notably in ML. Our qualitative inquiry, however, is aimed at investigating how the gender dimension is treated in these formalizations. In this way, we were able to identify a constitutive difference in the way the gender dimension is treated in the technical computing field compared to the field of Gender Studies. For this reason, we attempt to develop an interdisciplinary and comparative approach that can enable us to connect these two areas of research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Historical Aspects</head><p>In 2018, the Association for Computing Machinery (ACM) proposed the use of three principles that must be followed to counter the opacity of systems: Fairness, Accountability and Transparency <ref type="bibr" target="#b18">[19]</ref>. These principles have become hegemonic in the field of so-called XAI <ref type="bibr" target="#b43">[44]</ref>, <ref type="bibr" target="#b44">[45]</ref>, <ref type="bibr" target="#b45">[46]</ref>, <ref type="bibr" target="#b46">[47]</ref>. A detailed analysis of each of these principles is beyond the scope of this study. Instead, we are interested in focusing our analysis on the notion of fairness.</p><p>The concept of fairness has a well-established history <ref type="bibr" target="#b47">[48]</ref>. It is a concept that originated in political philosophy, within classical liberal theories, and was brought into vogue by John Rawls's important 1985 work Justice as Fairness <ref type="bibr" target="#b48">[49]</ref>. Since the 1990s, the concept of fairness has also been used in sociology and economics <ref type="bibr" target="#b49">[50]</ref>. It is only in recent times that algorithmic fairness has also begun to be discussed. However, there is fundamental disagreement about its definition in the algorithmic field.</p><p>To overcome this problem, the notion of fairness is generally conceived in terms of a descriptive phenomenological state. That is, an algorithm is said to be 'fair' if it produces no forms of discrimination and promotes equity between subjectivities <ref type="bibr" target="#b50">[51]</ref>.</p><p>We will now analyze the main formalizations at the technical level that have been produced to assess the fairness of an algorithm. 
In particular, we will consider how the notion of gender is treated in these perspectives, to highlight their aporias and limitations, and to show the need to graft feminist reasoning onto the discussion on fairness and, more generally, on AI ethics.</p></div>
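One of the simplest technical formalizations of the descriptive notion of fairness mentioned above is demographic (statistical) parity: the rate of positive decisions should not depend on the protected attribute. A minimal Python sketch follows; the decisions are invented for illustration rather than taken from any real system, and `positive_rate` is our own helper, not a standard API:

```python
# Demographic parity: compare positive-decision rates across groups.
# Each record pairs a (hypothetical) group label with a binary decision.
decisions = [
    ("woman", 1), ("woman", 0), ("woman", 0), ("woman", 0),
    ("man", 1), ("man", 1), ("man", 1), ("man", 0),
]

def positive_rate(records, group):
    """Fraction of positive decisions received by one group."""
    outcomes = [o for g, o in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_women = positive_rate(decisions, "woman")  # 0.25
rate_men = positive_rate(decisions, "man")      # 0.75

# Demographic parity difference: 0 would mean perfectly equal rates.
parity_gap = abs(rate_men - rate_women)
print(f"parity gap: {parity_gap:.2f}")
```

Note that this metric treats gender exactly as the critics discussed below observe: as a discrete attribute attached to a "statistical individuality," which is precisely the conception the rest of this paper calls into question.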
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Causal and Counterfactual Reasoning</head><p>The Israeli-American computer scientist Judea Pearl was the first to formalize the importance of causal reasoning in ML to overcome the danger of spurious correlations. Through his famous works, he expressed the need to move away from "reasoning by association" to "causal reasoning" <ref type="bibr" target="#b51">[52]</ref>. So, what does causal reasoning consist of and how does it interact with the sphere of gender? To answer this question, let us look at the famous case of the admission rates at UC Berkeley in 1973 <ref type="bibr" target="#b52">[53]</ref>.</p><p>In the early 1970s, UC Berkeley's graduate school faced scrutiny when it was observed that 44% of male applicants were admitted compared to only 35% of female applicants. This apparent disparity suggested a systemic bias against female applicants. However, when admissions data were disaggregated by department, a different pattern emerged. In statistics this effect is called Simpson's paradox <ref type="bibr" target="#b53">[54]</ref>.</p><p>Simpson's paradox occurs when a trend apparent in aggregated data reverses when the data are divided into groups. In the Berkeley case, most departments actually had higher admission rates for female than for male applicants. The aggregate data misrepresented the situation due to differences in application patterns: more women applied to highly competitive departments with lower admission rates, while men applied to less competitive departments with higher admission rates.</p><p>Judea Pearl has extensively discussed the Berkeley case in his works. He uses the case to highlight the pitfalls of misinterpreting statistical data without considering underlying causal relationships. He writes: «Department after department, the admissions decisions were consistently more favorable to women than to men» <ref type="bibr">[55, p. 311]</ref>. 
Thus, according to him, to properly understand the phenomena of discrimination and social order, it is necessary to consider the causal relationships between the various features of which they are composed. In this case, the concept of gender must be placed in a directed acyclic graph (DAG) in order to calculate precisely the statistical relationship between three nodes: gender, choice of department and admission <ref type="bibr">[55, p. 312]</ref>.</p><p>In this sense, causality can be seen as a method to detect discrimination and ensure fairness. Causal reasoning can identify whether disparities in algorithmic outcomes are due to discriminatory practices or other factors. For example, by using causal diagrams, researchers can determine if a protected attribute (e.g., race, gender) directly influences the decision outcome or if the influence is mediated by other variables <ref type="bibr" target="#b45">[46]</ref>.</p><p>More precisely, Pearl's causal reasoning is divided into three stages: the first is called association, where the statistical inference is given by the relationship between gender and admission. The second is intervention, where the relationship is modulated in a DAG between gender, departmental choice and admission. Finally, Pearl argues for the need to develop a counterfactual approach based on the question "What if I had acted differently?", which could be translated as "What if men applied to more competitive departments?" <ref type="bibr" target="#b55">[56]</ref>.</p><p>Counterfactual reasoning therefore makes it possible to break through the opacity of a program and better understand the possible discriminatory dynamics underlying it, thus fulfilling the requirement of fairness.</p></div>
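The reversal at the heart of Simpson's paradox can be reproduced with a few lines of arithmetic. The admission counts below are invented to mimic the structure of the Berkeley case, not the historical figures: within each department women are admitted at the higher rate, yet the aggregate rate favors men, because women apply mostly to the more competitive department.

```python
# Invented (applicants, admitted) counts per department and gender,
# structured to reproduce a Simpson's-paradox reversal.
data = {
    "easy dept": {"men": (800, 480), "women": (100, 80)},   # 0.60 vs 0.80
    "hard dept": {"men": (200, 20), "women": (900, 180)},   # 0.10 vs 0.20
}

def rate(applicants, admitted):
    return admitted / applicants

# Within each department, women have the higher admission rate ...
for dept, groups in data.items():
    assert rate(*groups["women"]) > rate(*groups["men"]), dept

# ... yet aggregated over departments the ordering reverses.
def aggregate(gender):
    apps = sum(data[d][gender][0] for d in data)
    adm = sum(data[d][gender][1] for d in data)
    return adm / apps

print(f"overall men:   {aggregate('men'):.2f}")
print(f"overall women: {aggregate('women'):.2f}")
```

Conditioning on department, the mediating node in Pearl's DAG, is what dissolves the apparent bias here: the aggregate comparison mixes two distinct causal paths from gender to admission.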
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Limitations of these Approaches</head><p>Despite the importance of this pioneering work, especially from the fields of philosophy and law, some criticism has been levelled at this theoretical framework.</p><p>First, Ziosi et al. showed how, in XAI, the notion of fairness is measured only a posteriori, i.e. on the basis of a program's performance. However, as we have already anticipated, AI systems are sociotechnical systems, i.e. a set of practices that cannot be reduced to the technical aspect alone. The authors show the need to adopt a genealogical approach, i.e. an a priori perspective that can show how the social and the technical are linked, and to take into account the different forms of discrimination that a system might produce <ref type="bibr" target="#b56">[57]</ref>.</p><p>What Ziosi et al.'s genealogical reflection does not reveal, however, is that the phenomena of discrimination, which include, for example, gender and race dimensions, are multifaceted. For this reason, Hu and Kohler-Hausmann have shown how the discussion of fairness fails to consider in advance the social ontology, i.e. the dimension in which forms of discrimination are constructed <ref type="bibr" target="#b57">[58]</ref>. In fact, the authors show how causal and counterfactual reasoning treats the gender dimension as a separate thing that exists on its own. This means that gender is only considered as an individual characteristic, and discrimination affects the group of people who share the same characteristic. In addition, Bjerring and Busch have shown how the gender dimension is thus statistically reduced to a discrete attribute possessed by a "statistical individuality" <ref type="bibr" target="#b19">[20]</ref>.</p><p>Therefore, there is an urgent need to combine the genealogical aspect of discrimination studies with a reflection on the social ontology that enables its development and dissemination. 
We propose below to adopt the performative approach developed by Judith Butler and Karen Barad to explore the social ontology and understand how discrimination in ML is constructed around gender <ref type="bibr" target="#b58">[59]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Research Findings from Gender Studies</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Gender Performative Theory</head><p>It has been demonstrated that the eradication of discrimination cannot be achieved solely through a technical approach; rather, a comprehensive strategy necessitates the integration of social perspectives, complementing the technical analysis. In particular, there is a notable divergence in the conceptualization of the gender dimension between Gender Studies and AI ethics. In the latter, gender is viewed as a "sensitive feature," that is, an individual attribute. Feminist theory, by contrast, posits that we must examine the social ontology in which discriminatory processes are produced. The concept of gender is not merely an indication of a biological distinction between individuals (for example, between men and women). Rather, it serves as an indicator of the manner in which social relations between diverse subjectivities are organized (for instance, between women and men, heterosexuals and homosexuals). The nature of these relationships is significantly influenced by the existence of power relations that result in the marginalization or exclusion of specific subjectivities. The work of American philosopher Judith Butler is oriented in this direction.</p><p>In Gender Trouble <ref type="bibr" target="#b15">[16]</ref>, the author rejects the hypothesis developed by Gayle Rubin that sex is a natural, biological expression of the human, and gender is a cultural representation of the former <ref type="bibr" target="#b59">[60]</ref>. There is no mimetic continuity between sex and gender. Sexual categories, i.e. male and female, are not representations of a pre-existing reality. They are the result of social production. In this sense she writes "Gender ought not to be conceived merely as the cultural inscription of meaning on a pregiven sex (a juridical conception); gender must also designate the very apparatus of production whereby the sexes themselves are established. 
As a result, gender is not to culture as sex is to nature; gender is also the discursive/cultural means by which sexed nature or a natural sex is produced and established as prediscursive, prior to culture, a politically neutral surface on which culture acts" <ref type="bibr">[16, p. 11]</ref>.</p><p>The ordinary conception of gender rests on the ordinary view of metaphysics, which assumes a substance of which various attributes can be predicated, but which essentially preexists the act of attribution. Gender markers are thus not the result of attributing properties to an independent reality, the biological body, but of the process by which bodies inscribe themselves within a process of signification. The gender dimension, understood in this way, is a matrix of intelligibility that allows the body to be read according to certain social norms. Butler writes: "In this sense, gender is not a noun, but neither is it a set of free-floating attributes, for we have seen that the substantive effect of gender is performatively produced and compelled by the regulatory practices of gender coherence. Hence, within the inherited discourse of the metaphysics of substance, gender proves to be performative-that is, constituting the identity it is purported to be. In this sense, gender is always a doing, though not a doing by a subject who might be said to preexist the deed" <ref type="bibr">[16, p. 33]</ref>.</p><p>Thus, we cannot conceive of gender as an individual attribute of human beings, but rather as a performative process implemented through the iterative and citational action of certain social norms. This is the social ontology we need to start from if we want to understand how gender discrimination is produced and spread in society, and if we want to develop an effective discourse on AI ethics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Performativity in Science and Technology Studies (STS)</head><p>Butler, however, did not directly address the subject of technology. Performativity theory has instead been extensively utilized and examined within the domain of Science and Technology Studies (STS) <ref type="bibr" target="#b60">[61]</ref>, <ref type="bibr" target="#b61">[62]</ref>. The primary objectives of this field are the examination of the epistemological premises that underpin diverse scientific methodologies and practices, and of the ethical and political ramifications these produce in the social domain <ref type="bibr" target="#b62">[63]</ref>.</p><p>In particular, Bruno Latour showed that scientific activity does not merely represent nature but also constructs scientific phenomena. Latour's polemical target is the modern construction of the phenomenon <ref type="bibr" target="#b63">[64]</ref>. From a Kantian perspective, for example, the phenomenon is that which manifests itself to the transcendental subject through its categorical apparatus. The modern perspective is in fact grounded on two transcendental poles: the objective and the subjective. The division of the world into these two poles establishes a series of further dichotomies: nature/culture, necessity/freedom, immanence/transcendence, science/politics and, finally, non-human/human. Latour shows instead that a scientific fact never pre-exists its construction. Through the concept of the network, he breaks through the ontological barrier that opposes the subjective to the objective. The network is precisely the complex assemblage of natural forces, technical apparatuses, and human and non-human agents that are connected in a more or less stable way <ref type="bibr" target="#b28">[29]</ref>. Scientific practices, then, do not represent anything; they produce new connections and new hybrids. 
It is only when a network stabilizes, i.e. becomes permanent, that the processes of signification of its constituent elements can be traced.</p><p>Latour's commitment to denouncing the simplicity of the modern stance in scientific practice has been taken up and reinterpreted from a feminist perspective by many theorists, most notably Donna Haraway and Karen Barad.</p><p>Haraway takes up Butler's notion of materialization. In Bodies that Matter, Butler writes: "the notion of matter [should not be considered] as site or surface, but as a process of materialization that stabilizes over time to produce the effect of boundary, fixity, and surface we call matter" <ref type="bibr">[65, p. 10]</ref>. The body is not a neutral surface on which cultural meanings are recorded. Rather, it is the result of a process of constant production of its boundaries. Think, for example, of technologies that make it possible to visualize the fetus in the womb. They produce a certain materialization of the child's gendered body, inscribing it into a binary symbolic framework even before birth <ref type="bibr" target="#b65">[66]</ref>.</p><p>In this regard, Haraway speaks of apparatuses of bodily production to indicate the process by which the boundaries of the object of knowledge are established in the interaction between different actors. She writes: "bodies as objects of knowledge are material-semiotic generative nodes. Their boundaries materialize in social interaction among humans and non-humans, including the machines and other instruments that mediate exchanges at crucial interfaces and that function as delegates for other actors' functions and purposes" <ref type="bibr">[67, p. 298]</ref>.</p><p>Finally, Karen Barad shows how every process of measurement in scientific practice is a device for the production of meaning. She writes: "the measurement apparatus is the condition of possibility for determinate meaning for the concept in question [e.g. 
gender], as well as the condition of possibility for the existence of determinately bounded and propertied" <ref type="bibr">[33, p. 128]</ref>.</p><p>Thus, performative theory, as used by these authors, shows that the phenomena of discrimination are complex and involve processes of symbolic meaning production, which emerge from the nexus of human, non-human and technological components <ref type="bibr" target="#b58">[59]</ref>.</p><p>Understanding discrimination, that is, the symbolic and material violence that materializes in the nodes shaping our societies, means working on the creation of those nodes. It means adopting a diffractive perspective, i.e. dislocating the spokes that connect the elements of a network in relationships of domination and oppression <ref type="bibr">[67, p. 302]</ref>. From these perspectives emerges an investigation of the union between the human and the technical. Rather than being conceived of as ontologically distinct, these two dimensions are understood as part of a processual continuum. It is precisely from this continuum that the process of symbolic production can be analyzed, thus enabling an assessment of the structure of human-AI relations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Examining the COMPAS Case Using Performative Theory</head><p>In this section, we briefly show how gender performativity theory can be used to re-read one of the most exemplary cases of bias in ML to have emerged in recent years: the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software developed by Northpointe. This case has not only attracted the attention of specialists but has also gained considerable traction in the public debate. It is precisely for this reason that we have chosen to analyze it: we believe that the perspective of gender performativity brings to light some novel aspects, thus enriching the debate on discrimination in AI. Furthermore, intersectionality, defined as the interaction between multiple forms of discrimination at the societal level, is a prominent factor in this case. The interconnection between the gender and race dimensions is particularly noteworthy: we show that they appear to reinforce each other in a dynamic that can be described as a form of performative reinforcement.</p><p>The COMPAS software, developed in the early 2000s, was designed to calculate the probability of recidivism for a given defendant. It has been adopted in several states, including New York, Wisconsin, and California <ref type="bibr" target="#b67">[68]</ref>.</p><p>The software employs a variety of indicators from the subject's past, including a history of violence, substance abuse, and social environment, to categorize them according to a criminal typology <ref type="bibr" target="#b68">[69]</ref>. This enables the prediction of the probability of future violent or law-breaking behavior. 
The program was trained on a dataset comprising over 30,000 samples, collected between 2004 and 2005 through a data-collection initiative involving prison, probation, and parole facilities across the United States <ref type="bibr" target="#b69">[70]</ref>. From this dataset, the developers identified two primary sets of criminal behavior typologies, differentiated by gender and further subdivided into distinct subcategories <ref type="bibr" target="#b68">[69]</ref>. This yields the following typologies, eight for each gender:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Male and Female Typologies</head><p>1. Male: chronic drug abusers, most non-violent. Female: drug problems and anti-social sub-cultural influences, some with relationship conflicts.</p><p>2. Male: low risk situational, fighting/domestic violence caution. Female: family disorganization and inadequate parenting, residential instability and minor non-violent offences.</p><p>3. Male: chronic alcohol problems, DUI, domestic violence. Female: chronic substance abusers, women with higher social resources than other groups.</p><p>4. Male: socially marginalized, poor, uneducated, stressed, habitual offenders. Female: marginalized, poor and isolated older women, economic survival crimes.</p><p>5. Male: criminally versatile, young marginalized persons, often gang affiliated. Female: young antisocial poorly educated women with some violent offences and early delinquency onset.</p><p>6. Male: socially isolated long-term substance abuse, multiple minor and mostly non-violent offenses. Female: chronic long-term criminal history A, multiple co-occurring social and psychological risk factors.</p><p>7. Male: serious versatile high-risk individuals. Female: chronic long-term criminal history B, multiple co-occurring problems and high risk.</p><p>8. Male: low risk situational accidental category. Female: late starters with multiple strengths and fewer risk factors, minor non-violent offence history.</p><p>In 2016, the independent newsroom ProPublica initiated an investigation to ascertain the reliability of COMPAS' predictions. The findings revealed that the overall accuracy of the results was approximately 63.3%. Of greater significance was the observation that individuals identified as Black were 77% more likely to be classified as at high risk of committing a future criminal act <ref type="bibr" target="#b70">[71]</ref>. These staggering findings revealed that the software was affected by biases that generated processes of racial discrimination. 
The program thus fails to meet the demand for fairness, as it cannot ensure fair outputs across all social groups <ref type="bibr" target="#b71">[72]</ref>, <ref type="bibr" target="#b72">[73]</ref>, <ref type="bibr" target="#b73">[74]</ref>. Statistical parity is not met owing to the inadequacy of the data used to train the program, which contributes to reinforcing and propagating racial stereotypes that render Black subjectivities the most susceptible to criminality.</p><p>From a performative perspective, the problem of fairness can be rephrased as follows: if, as shown previously through the application of causal and counterfactual approaches, fairness is guaranteed when certain sensitive features do not compromise the result produced by the algorithm, then why are certain features, such as gender and race, decisive in this specific context?</p><p>The COMPAS case illustrates the necessity of viewing gender and race not as individual attributes, but as inscribed in a broader institutional and judicial context. In this sense, COMPAS is configured precisely as an apparatus of bodily production, whereby bodies are materialized in accordance with specific codes. Here, the production of the body follows the reiteration and citation of the normative patterns isolated in the sixteen typologies that match male and female subjects. This indicates that the algorithm performs a process of constructing criminal subjectivity that is intimately connected to those normative criteria. The sixteen proposed typifications thus become the normative lenses through which the software produces its judgment. The concepts of gender and race do not exist in and of themselves; rather, they are constituted, and intersect, through a process of materialization that produces and naturalizes certain subjectivities on the basis of specific extrinsic characteristics. 
In this sense, the process of racialization is perpetrated and repeated within the framework of the long institutional and legal history of violence against subjectivities of color <ref type="bibr" target="#b74">[75]</ref>, <ref type="bibr" target="#b75">[76]</ref>. The COMPAS system establishes a norm based on the historical datasets on which it was trained, thereby reproducing the observed effects. It can thus be seen to contribute to an epistemic injustice that criminalizes minority subjectivities, such as those of women of color and immigrants, by automating this process and thereby making it more efficient. It reinforces the discriminatory social norms embedded in the American legal system, contributing to the creation of an inequitable and undemocratic network. It is therefore necessary to reconsider the social ontology in which these actors (the software, the U.S. institutional and legal apparatus, and so on) are connected in order to redefine the instance of fairness <ref type="bibr" target="#b76">[77]</ref>, <ref type="bibr" target="#b77">[78]</ref>. In this case, gender and race cannot simply be regarded as sensitive categories; rather, ways must be found through which the computational power of ML can be employed to modify the constituent elements of the network so as to break down processes of sexual and racial discrimination. This approach enables not only a diagnosis of the way disparate social actors are connected, but also a comprehension of the processes of signification attributed to bodies within this mechanism. As stated in the EDRi report: "Verifying that a system is fair with the current focus on models' outputs is, then, not enough, as we also need to analyze the negative impact the new system might have on the entire, original environment" <ref type="bibr">[25, p. 63]</ref>. 
For this reason, an intersectional approach to algorithmic fairness can be achieved only if we frame it within a prior consideration of how the network is established and of the processes of performative signification it implies.</p></div>
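The fairness criteria discussed above can be made concrete with a small audit sketch. The following Python fragment is purely illustrative and uses synthetic records (not the Broward County data ProPublica analyzed); the function names and the tiny dataset are our own assumptions. It shows the two group-level checks mentioned in this section: statistical parity (do groups receive the high-risk label at equal rates?) and the disparity in false positive rates (are non-reoffending members of one group flagged high-risk more often?).

```python
# Hypothetical sketch of a ProPublica-style group-fairness audit.
# All records below are synthetic and for illustration only.

def statistical_parity_difference(records, group_key):
    """Difference in P(high_risk) between the two groups (first key minus second)."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    rates = {g: sum(r["high_risk"] for r in rs) / len(rs) for g, rs in groups.items()}
    a, b = sorted(rates)  # deterministic group order
    return rates[a] - rates[b]

def false_positive_rate(records, group):
    """Share of non-reoffending defendants in `group` who were still flagged high-risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    return sum(r["high_risk"] for r in negatives) / len(negatives)

# Synthetic example reproducing the disparity pattern in miniature:
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]
spd = statistical_parity_difference(records, "group")           # 1/3: A labeled high-risk more often
fpr_gap = false_positive_rate(records, "A") - false_positive_rate(records, "B")  # 0.5
```

Note that, in line with the argument of this section, such output-level checks diagnose a disparity but say nothing about the network of institutions and data-collection practices that produced it.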
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>As the discussion above shows, it is difficult to confine the issue of gender, or indeed racial, discrimination to the technical sphere. AI in general, and ML in particular, fits squarely into the mechanism of meaning production described above <ref type="bibr" target="#b78">[79]</ref>. As Hoffmann writes: "algorithms do not merely shape distributive outcomes, but they are also intimately bound up in the production of particular kinds of meaning, reinforcing certain discursive frames over others" <ref type="bibr">[80, p. 908]</ref>. It is desirable to make software as free from bias as possible, but this alone is not enough to counter discrimination.</p><p>Discriminatory phenomena arise from the intertwining of human and technological components. This is why the causal and counterfactual reasoning used to guarantee the fairness of a program is not sufficient <ref type="bibr" target="#b80">[81]</ref>. Gender is not an attribute or a reality in itself, but the spectrum against which we measure the way relationships between humans, non-humans, technologies and the environment are structured according to power relations.</p><p>The discussion on fairness should therefore not only consider gender as an attribute or a sensitive characteristic to be managed, but must also consider the social ontology from which it emerges, becomes problematic and produces asymmetrical relationships between people. Kohler-Hausmann makes the same argument in relation to processes of racialization, writing: "We often lose sight of the practices and meanings that constitute the very categories of race because one of the properties of this social category is to appear as a natural fact about bodies instead of the effect of persistent social stratification and meaning-making" <ref type="bibr">[82, p. 
1225]</ref>.</p><p>ML has a powerful capacity for agency and can therefore prove an excellent tool for modifying the many social conditions that make the concept of gender relevant in a given context. However, this is an operation that must be carried out anew for each type of program that is developed <ref type="bibr" target="#b58">[59]</ref>.</p><p>It is therefore still necessary to understand how the gender dimension is constructed and used in each instance in ML, as Rode invites us to do with the concept of gender position <ref type="bibr" target="#b82">[83]</ref>. She shows how every technological innovation, such as the widespread diffusion of household appliances, is always accompanied by a redefinition of social roles.</p><p>In this sense, this study underscores the critical need to address gender bias in AI systems through a multifaceted approach that integrates technical rigor with a deep understanding of social dynamics. Based on these findings, three recommendations can be made for building more equitable AI systems:</p><p>1. Promote Interdisciplinary Collaboration: It is important to create interdisciplinary programs able to bridge computer science with the humanities. AI developers should work closely with social scientists, ethicists, and feminist scholars to ensure that ethical considerations, especially those relating to gender and intersectionality, are embedded in AI development processes. 2. Redefine AI Ethics to Incorporate Social Ontology: AI ethics should be expanded beyond technical fairness metrics to incorporate the discussion of how social ontology is constructed. This involves, as a first step, recognizing that gender is not a fixed attribute but a social construct that influences how AI systems are developed and deployed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Continuous Monitoring and Evaluation of AI Systems:</head><p>We recommend implementing continuous monitoring and post-implementation evaluation mechanisms for AI systems, in order to identify and correct any discriminatory effects that may emerge over time. This could be facilitated by establishing independent ethics committees that regularly assess the operation of AI systems, going beyond current auditing and debiasing procedures.</p><p>We aim to contribute to the field by bridging the gap between technical AI research and critical Gender Studies, offering a comprehensive framework for understanding and addressing intersectional biases. This contribution lays the groundwork for future research that further explores the intersections of AI, gender and social justice, advancing the development of more equitable AI technologies.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This work was partially supported by the project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by NextGenerationEU.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Machine Learning</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Mitchell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">McGraw-Hill series in computer science</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>McGraw-Hill</publisher>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<title level="m">The ethics of artificial intelligence: principles, challenges, and opportunities</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Inside the Mind of an AI: Materiality and the Crisis of Representation</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">K</forename><surname>Hayles</surname></persName>
		</author>
		<idno type="DOI">10.1353/nlh.2022.a898324</idno>
	</analytic>
	<monogr>
		<title level="j">nlh</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="635" to="666" />
			<date type="published" when="2022-09">Sep. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">K</forename><surname>Hayles</surname></persName>
		</author>
		<title level="m">Unthought: the power of the cognitive nonconscious</title>
				<meeting><address><addrLine>Chicago ; London</addrLine></address></meeting>
		<imprint>
			<publisher>The University of Chicago Press</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle</title>
		<author>
			<persName><forename type="first">H</forename><surname>Suresh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Guttag</surname></persName>
		</author>
		<idno type="DOI">10.1145/3465416.3483305</idno>
	</analytic>
	<monogr>
		<title level="m">Equity and Access in Algorithms, Mechanisms, and Optimization</title>
				<meeting><address><addrLine>--NY USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2021-10">Oct. 2021</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m">#BigData: Discrimination in data-supported decision making</title>
				<imprint>
			<publisher>European Union Agency for Fundamental Rights</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m">Bias in algorithms -Artificial intelligence and discrimination</title>
				<imprint>
			<publisher>European Union Agency for Fundamental Rights</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights" />
		<title level="m">Getting the future right -Artificial intelligence and fundamental rights</title>
				<imprint>
			<publisher>European Union Agency for Fundamental Rights</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Beyond identity: Feminism, identity and identity politics</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hekman</surname></persName>
		</author>
		<idno type="DOI">10.1177/14647000022229245</idno>
	</analytic>
	<monogr>
		<title level="j">Feminist Theory</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="289" to="308" />
			<date type="published" when="2000-12">Dec. 2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Bourdieu</surname></persName>
		</author>
		<title level="m">Masculine domination</title>
				<meeting><address><addrLine>Cambridge</addrLine></address></meeting>
		<imprint>
			<publisher>Polity Press</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Gilroy</surname></persName>
		</author>
		<title level="m">The black Atlantic: modernity and double consciousness</title>
				<meeting><address><addrLine>Cambridge, Mass</addrLine></address></meeting>
		<imprint>
			<publisher>Harvard Univ. Press</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
	<note>8. print</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Irigaray</surname></persName>
		</author>
		<title level="m">Speculum of the other woman</title>
				<meeting><address><addrLine>Ithaca, N.Y</addrLine></address></meeting>
		<imprint>
			<publisher>Cornell University Press</publisher>
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">The posthuman</title>
		<author>
			<persName><forename type="first">R</forename><surname>Braidotti</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>Polity Press</publisher>
			<pubPlace>Cambridge, UK</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Cavarero</surname></persName>
		</author>
		<title level="m">Inclinations: a critique of rectitude</title>
				<meeting><address><addrLine>Stanford (Calif</addrLine></address></meeting>
		<imprint>
			<publisher>Stanford university press</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note>Square one</note>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>De Beauvoir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Capisto-Borde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Malovany-Chevallier</surname></persName>
		</author>
		<title level="m">The second sex, First Vintage Books</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Vintage Books</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Butler</surname></persName>
		</author>
		<title level="m">Gender trouble: feminism and the subversion of identity</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
	<note>Routledge classics</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Bottici</surname></persName>
		</author>
		<title level="m">Anarchafeminism</title>
				<meeting><address><addrLine>London; New York</addrLine></address></meeting>
		<imprint>
			<publisher>Bloomsbury Academic</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Narayanan</surname></persName>
		</author>
		<title level="m">Fairness and machine learning: limitations and opportunities</title>
				<meeting><address><addrLine>Cambridge, Massachusetts</addrLine></address></meeting>
		<imprint>
			<publisher>The MIT Press</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Role of fairness, accountability, and transparency in algorithmic affordance</title>
		<author>
			<persName><forename type="first">D</forename><surname>Shin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">J</forename><surname>Park</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.chb.2019.04.019</idno>
	</analytic>
	<monogr>
		<title level="j">Computers in Human Behavior</title>
		<imprint>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="page" from="277" to="284" />
			<date type="published" when="2019-09">Sep. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Artificial intelligence and identity: the rise of the statistical individual</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Bjerring</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Busch</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00146-024-01877-4</idno>
	</analytic>
	<monogr>
		<title level="j">AI &amp; Soc</title>
		<imprint>
			<date type="published" when="2024-03">Mar. 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Stereotyping, prejudice, and discrimination at the intersection of race and gender: An intersectional theory primer</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">T J</forename><surname>Hudson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Myer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">C</forename><surname>Berney</surname></persName>
		</author>
		<idno type="DOI">10.1111/spc3.12939</idno>
	</analytic>
	<monogr>
		<title level="j">Social &amp; Personality Psych</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">2</biblScope>
			<date type="published" when="2024-02">Feb. 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Hill Collins</surname></persName>
		</author>
		<title level="m">Intersectionality</title>
		<title level="s">Key concepts</title>
				<meeting><address><addrLine>Cambridge; Medford, Mass</addrLine></address></meeting>
		<imprint>
			<publisher>Polity Press</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>Second edition</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics</title>
		<author>
			<persName><forename type="first">K</forename><surname>Crenshaw</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Feminist legal theories</title>
				<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="23" to="51" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Differential Fairness: An Intersectional Framework for Fair AI</title>
		<author>
			<persName><forename type="first">R</forename><surname>Islam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">N</forename><surname>Keya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Sarwate</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Foulds</surname></persName>
		</author>
		<idno type="DOI">10.3390/e25040660</idno>
	</analytic>
	<monogr>
		<title level="j">Entropy</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">660</biblScope>
			<date type="published" when="2023-04">Apr. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Beyond Debiasing: Regulating AI and its inequalities</title>
		<author>
			<persName><forename type="first">A</forename><surname>Balayn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gürses</surname></persName>
		</author>
		<ptr target="https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf" />
	</analytic>
	<monogr>
		<title level="m">European Digital Rights (EDRi)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Technology&apos;s In-Betweeness</title>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<idno type="DOI">10.1007/s13347-013-0106-y</idno>
	</analytic>
	<monogr>
		<title level="j">Philos. Technol</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="111" to="115" />
			<date type="published" when="2013-06">Jun. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A sociotechnical view of algorithmic fairness</title>
		<author>
			<persName><forename type="first">M</forename><surname>Dolata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Feuerriegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Schwabe</surname></persName>
		</author>
		<idno type="DOI">10.1111/isj.12370</idno>
	</analytic>
	<monogr>
		<title level="j">Information Systems Journal</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="754" to="818" />
			<date type="published" when="2022-07">Jul. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Suchman</surname></persName>
		</author>
		<title level="m">Plans and situated actions: the problem of human-machine communication</title>
		<title level="s">Learning in doing: social, cognitive, and computational perspectives</title>
				<meeting><address><addrLine>Cambridge</addrLine></address></meeting>
		<imprint>
			<publisher>Cambridge Univ. Press</publisher>
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
	<note>Reprint</note>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Latour</surname></persName>
		</author>
		<title level="m">Reassembling the social: an introduction to Actor-Network-Theory</title>
				<meeting><address><addrLine>Oxford</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford Univ. Press</publisher>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
	<note>Clarendon lectures in management studies</note>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">On technical mediation</title>
		<author>
			<persName><forename type="first">B</forename><surname>Latour</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Common knowledge</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="29" to="64" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter</title>
		<author>
			<persName><forename type="first">K</forename><surname>Barad</surname></persName>
		</author>
		<idno type="DOI">10.1086/345321</idno>
	</analytic>
	<monogr>
		<title level="j">Signs: Journal of Women in Culture and Society</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="801" to="831" />
			<date type="published" when="2003-03">Mar. 2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Sociomaterial Practices: Exploring Technology at Work</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">J</forename><surname>Orlikowski</surname></persName>
		</author>
		<idno type="DOI">10.1177/0170840607081138</idno>
	</analytic>
	<monogr>
		<title level="j">Organization Studies</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="1435" to="1448" />
			<date type="published" when="2007-09">Sep. 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Barad</surname></persName>
		</author>
		<title level="m">Meeting the universe halfway: quantum physics and the entanglement of matter and meaning</title>
				<meeting><address><addrLine>Durham</addrLine></address></meeting>
		<imprint>
			<publisher>Duke University Press</publisher>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">Why AI undermines democracy and what to do about it</title>
		<author>
			<persName><forename type="first">M</forename><surname>Coeckelbergh</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2024">2024</date>
			<publisher>Polity Press</publisher>
			<pubPlace>Medford</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Stiegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ross</surname></persName>
		</author>
		<title level="m">What makes life worth living: on pharmacology</title>
				<meeting><address><addrLine>Cambridge, UK</addrLine></address></meeting>
		<imprint>
			<publisher>Polity</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note>English edition</note>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<author>
			<persName><surname>Plato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Derrida</surname></persName>
		</author>
		<title level="m">Phèdre</title>
				<meeting><address><addrLine>Paris</addrLine></address></meeting>
		<imprint>
			<publisher>GF Flammarion</publisher>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
	<note>Corrected and updated new edition</note>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies</title>
		<author>
			<persName><forename type="first">E</forename><surname>Ferrara</surname></persName>
		</author>
		<idno type="DOI">10.3390/sci6010003</idno>
	</analytic>
	<monogr>
		<title level="j">Sci</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2023-12">Dec. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">The Deluge of Spurious Correlations in Big Data</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">S</forename><surname>Calude</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10699-016-9489-4</idno>
	</analytic>
	<monogr>
		<title level="j">Found Sci</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="595" to="612" />
			<date type="published" when="2017-09">Sep. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Gender Bias in Hiring: An Analysis of the Impact of Amazon&apos;s Recruiting Algorithm</title>
		<author>
			<persName><forename type="first">X</forename><surname>Chang</surname></persName>
		</author>
		<idno type="DOI">10.54254/2754-1169/23/20230367</idno>
	</analytic>
	<monogr>
		<title level="j">AEMPS</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="134" to="140" />
			<date type="published" when="2023-09">Sep. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<title level="m" type="main">The black box society: the secret algorithms that control money and information</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pasquale</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>Harvard University Press</publisher>
			<pubPlace>Cambridge, Massachusetts; London, England</pubPlace>
		</imprint>
	</monogr>
	<note>paperback edition</note>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">A General Black Box Theory</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bunge</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophy of Science</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="346" to="358" />
			<date type="published" when="1963">1963</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">H K</forename><surname>Chun</surname></persName>
		</author>
		<title level="m">Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">The Rise of AI Ethics</title>
		<author>
			<persName><forename type="first">P</forename><surname>Boddington</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-19-9382-4_2</idno>
	</analytic>
	<monogr>
		<title level="m">AI Ethics</title>
		<title level="s">Artificial Intelligence: Foundations, Theory, and Algorithms</title>
				<meeting><address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature Singapore</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="35" to="89" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Explainable AI (XAI): Core Ideas, Techniques, and Solutions</title>
		<author>
			<persName><forename type="first">R</forename><surname>Dwivedi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3561048</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="1" to="33" />
			<date type="published" when="2023-09">Sep. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Algorithmic fairness datasets: the story so far</title>
		<author>
			<persName><forename type="first">A</forename><surname>Fabris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Messina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Silvello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Susto</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10618-022-00854-z</idno>
	</analytic>
	<monogr>
		<title level="j">Data Min Knowl Disc</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="2074" to="2152" />
			<date type="published" when="2022-11">Nov. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">Explanation in artificial intelligence: Insights from the social sciences</title>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.artint.2018.07.007</idno>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">267</biblScope>
			<biblScope unit="page" from="1" to="38" />
			<date type="published" when="2019-02">Feb. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">Explainable AI: A Brief Survey on History, Research Areas, Approaches and Challenges</title>
		<author>
			<persName><forename type="first">F</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhu</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-32236-6_51</idno>
	</analytic>
	<monogr>
		<title level="m">Natural Language Processing and Chinese Computing</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">J</forename><surname>Tang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M.-Y</forename><surname>Kan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Zhao</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Zan</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">11839</biblScope>
			<biblScope unit="page" from="563" to="574" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Fairness and Philosophy</title>
		<author>
			<persName><forename type="first">A</forename><surname>Ryan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Social Research</title>
		<imprint>
			<biblScope unit="volume">73</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="597" to="606" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Rawls</surname></persName>
		</author>
		<title level="m">Justice as fairness: a restatement</title>
				<meeting><address><addrLine>Cambridge, Mass</addrLine></address></meeting>
		<imprint>
			<publisher>Belknap Press of Harvard University Press</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
	<note>3rd printing</note>
</biblStruct>

<biblStruct xml:id="b49">
	<analytic>
		<title level="a" type="main">Fairness</title>
		<author>
			<persName><forename type="first">J</forename><surname>Broome</surname></persName>
		</author>
		<idno type="DOI">10.1093/aristotelian/91.1.87</idno>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the Aristotelian Society</title>
		<imprint>
			<date type="published" when="1991-06">Jun. 1991</date>
			<biblScope unit="volume">91</biblScope>
			<biblScope unit="page" from="87" to="102" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">Fairness as Equal Concession: Critical Remarks on Fair AI</title>
		<author>
			<persName><forename type="first">R</forename><surname>Van Nood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Yeomans</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11948-021-00348-z</idno>
	</analytic>
	<monogr>
		<title level="j">Sci Eng Ethics</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">6</biblScope>
			<date type="published" when="2021-12">Dec. 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<monogr>
		<title level="m" type="main">Artificial Intelligence is stupid and causal reasoning won&apos;t fix it</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Bishop</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2008.07371</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b52">
	<analytic>
		<title level="a" type="main">Sex Bias in Graduate Admissions: Data from Berkeley: Measuring bias is harder than is usually assumed, and the evidence is sometimes contrary to expectation</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Bickel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Hammel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>O'Connell</surname></persName>
		</author>
		<idno type="DOI">10.1126/science.187.4175.398</idno>
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<biblScope unit="volume">187</biblScope>
			<biblScope unit="issue">4175</biblScope>
			<biblScope unit="page" from="398" to="404" />
			<date type="published" when="1975-02">Feb. 1975</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<analytic>
		<title level="a" type="main">Simpson&apos;s paradox: A statistician&apos;s case study</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">H</forename><surname>Chu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">J</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pelecanos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">F</forename><surname>Brown</surname></persName>
		</author>
		<idno type="DOI">10.1111/1742-6723.12943</idno>
	</analytic>
	<monogr>
		<title level="j">Emerg Medicine Australasia</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="431" to="433" />
			<date type="published" when="2018-06">Jun. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b54">
	<monogr>
		<title level="m" type="main">The book of why: the new science of cause and effect</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pearl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mackenzie</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Basic Books</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
	<note>First edition</note>
</biblStruct>

<biblStruct xml:id="b55">
	<analytic>
		<title level="a" type="main">Causality: Models, Reasoning, and Inference, by Judea Pearl</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pearl</surname></persName>
		</author>
		<idno type="DOI">10.1017/S0266466603004109</idno>
	</analytic>
	<monogr>
		<title level="j">Econ. Theory</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">04</biblScope>
			<date type="published" when="2003-08">Aug. 2003</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b56">
	<analytic>
		<title level="a" type="main">A Genealogical Approach to Algorithmic Bias</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ziosi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Watson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<idno type="DOI">10.2139/ssrn.4734082</idno>
	</analytic>
	<monogr>
		<title level="j">SSRN Journal</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b57">
	<analytic>
		<title level="a" type="main">What&apos;s sex got to do with machine learning?</title>
		<author>
			<persName><forename type="first">L</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kohler-Hausmann</surname></persName>
		</author>
		<idno type="DOI">10.1145/3351095.3375674</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</title>
				<meeting>the 2020 Conference on Fairness, Accountability, and Transparency<address><addrLine>Barcelona Spain</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2020-01">Jan. 2020</date>
			<biblScope unit="page" from="513" to="513" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b58">
	<analytic>
		<title level="a" type="main">AI that Matters: A Feminist Approach to the Study of Intelligent Machines</title>
		<author>
			<persName><forename type="first">E</forename><surname>Drage</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Frabetti</surname></persName>
		</author>
		<idno type="DOI">10.1093/oso/9780192889898.003.0016</idno>
	</analytic>
	<monogr>
		<title level="m">Feminist AI</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Browne</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Cave</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Drage</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Mcinerney</surname></persName>
		</editor>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<pubPlace>Oxford</pubPlace>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="274" to="289" />
		</imprint>
	</monogr>
	<note>1st ed</note>
</biblStruct>

<biblStruct xml:id="b59">
	<analytic>
		<title level="a" type="main">The Traffic in Women: Notes on the &apos;Political Economy&apos; of Sex</title>
		<author>
			<persName><forename type="first">G</forename><surname>Rubin</surname></persName>
		</author>
		<idno type="DOI">10.1215/9780822394068-002</idno>
	</analytic>
	<monogr>
		<title level="m">Deviations</title>
				<imprint>
			<publisher>Duke University Press</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="33" to="65" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b60">
	<analytic>
		<title level="a" type="main">Chapter II: The Performative Turn</title>
		<author>
			<persName><forename type="first">D</forename><surname>Bachmann-Medick</surname></persName>
		</author>
		<idno type="DOI">10.1515/9783110402988-004</idno>
	</analytic>
	<monogr>
		<title level="m">Cultural Turns</title>
				<imprint>
			<publisher>De Gruyter</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="73" to="102" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b61">
	<analytic>
		<title level="a" type="main">The &apos;performative turn&apos; in science and technology studies: Towards a linguistic anthropology of &apos;technology in action&apos;</title>
		<author>
			<persName><forename type="first">C</forename><surname>Licoppe</surname></persName>
		</author>
		<idno type="DOI">10.1080/17530350.2010.494122</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Cultural Economy</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="181" to="188" />
			<date type="published" when="2010-07">Jul. 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b62">
	<monogr>
		<title level="m" type="main">Handbook of Science and Technology Studies</title>
		<editor>
			<persName><forename type="first">S</forename><surname>Jasanoff</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Markle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Petersen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><forename type="middle">J</forename><surname>Pinch</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2001">2001</date>
			<publisher>SAGE Publications</publisher>
			<pubPlace>Thousand Oaks</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b63">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Latour</surname></persName>
		</author>
		<title level="m">We have never been modern</title>
				<meeting><address><addrLine>Cambridge, Massachusetts</addrLine></address></meeting>
		<imprint>
			<publisher>Harvard University Press</publisher>
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b64">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Butler</surname></persName>
		</author>
		<title level="m">Bodies that matter: on the discursive limits of &quot;sex&quot;</title>
				<meeting><address><addrLine>Abingdon, Oxon ; New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
	<note>Routledge classics</note>
</biblStruct>

<biblStruct xml:id="b65">
	<monogr>
		<title level="m" type="main">Undoing gender</title>
		<author>
			<persName><forename type="first">J</forename><surname>Butler</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Routledge</publisher>
			<pubPlace>New York; London</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b66">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Haraway</surname></persName>
		</author>
		<title level="m">Manifestly Haraway</title>
				<meeting><address><addrLine>Minneapolis</addrLine></address></meeting>
		<imprint>
			<publisher>University of Minnesota Press</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b67">
	<analytic>
		<title level="a" type="main">It&apos;s not the algorithm, it&apos;s the data</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kirkpatrick</surname></persName>
		</author>
		<idno type="DOI">10.1145/3022181</idno>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="21" to="23" />
			<date type="published" when="2017-01">Jan. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b68">
	<monogr>
		<title level="m" type="main">Practitioner&apos;s Guide to COMPAS Core</title>
		<author>
			<persName><surname>Northpointe</surname></persName>
		</author>
		<ptr target="https://s3.documentcloud.org/documents/2840784/Practitioner-s-Guide-to-COMPAS-Core.pdf" />
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b69">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Vaccaro</surname></persName>
		</author>
		<title level="m">Algorithms in human decision-making: A case study with the COMPAS risk assessment software</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b70">
	<monogr>
		<title level="m" type="main">How We Analyzed the COMPAS Recidivism Algorithm</title>
		<author>
			<persName><forename type="first">J</forename><surname>Angwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Larson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mattu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kirchner</surname></persName>
		</author>
		<imprint>
			<publisher>ProPublica</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b71">
	<analytic>
		<title level="a" type="main">The accuracy, fairness, and limits of predicting recidivism</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dressel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Farid</surname></persName>
		</author>
		<idno type="DOI">10.1126/sciadv.aao5580</idno>
	</analytic>
	<monogr>
		<title level="j">Sci. Adv</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">5580</biblScope>
			<date type="published" when="2018-01">Jan. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b72">
	<analytic>
		<title level="a" type="main">Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems</title>
		<author>
			<persName><forename type="first">F</forename><surname>Gursoy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Kakadiaris</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICDMW58026.2022.00027</idno>
	</analytic>
	<monogr>
		<title level="m">2022 IEEE International Conference on Data Mining Workshops (ICDMW)</title>
				<meeting><address><addrLine>Orlando, FL, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022-11">Nov. 2022</date>
			<biblScope unit="page" from="137" to="146" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b73">
	<analytic>
		<title level="a" type="main">Algorithmic fairness through group parities? The case of COMPAS-SAPMOC</title>
		<author>
			<persName><forename type="first">F</forename><surname>Lagioia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rovatti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sartor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI &amp; SOCIETY</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="459" to="478" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b74">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Browne</surname></persName>
		</author>
		<idno type="DOI">10.1215/9780822375302</idno>
		<title level="m">Dark Matters: On the Surveillance of Blackness</title>
				<imprint>
			<publisher>Duke University Press</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b75">
	<analytic>
		<title level="a" type="main">Reform predictive policing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Shapiro</surname></persName>
		</author>
		<idno type="DOI">10.1038/541458a</idno>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">541</biblScope>
			<biblScope unit="issue">7638</biblScope>
			<biblScope unit="page" from="458" to="460" />
			<date type="published" when="2017-01">Jan. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b76">
	<analytic>
		<title level="a" type="main">Blind injustice: The Supreme Court, implicit racial bias, and the racial disparity in the criminal justice system</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Clemons</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Am. Crim. L. Rev</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page">689</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b77">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">M</forename><surname>Young</surname></persName>
		</author>
		<title level="m">Justice and the politics of difference</title>
				<meeting><address><addrLine>Princeton, NJ</addrLine></address></meeting>
		<imprint>
			<publisher>Princeton Univ. Press</publisher>
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b78">
	<analytic>
		<title level="a" type="main">The Performativity of AI-powered Event Detection: How AI Creates a Racialized Protest and Why Looking for Bias Is Not a Solution</title>
		<author>
			<persName><forename type="first">E</forename><surname>Drage</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Frabetti</surname></persName>
		</author>
		<idno type="DOI">10.1177/01622439231164660</idno>
	</analytic>
	<monogr>
		<title level="j">Science, Technology, &amp; Human Values</title>
		<imprint>
			<date type="published" when="2023-03">Mar. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b79">
	<analytic>
		<title level="a" type="main">Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Hoffmann</surname></persName>
		</author>
		<idno type="DOI">10.1080/1369118X.2019.1573912</idno>
	</analytic>
	<monogr>
		<title level="j">Information, Communication &amp; Society</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="900" to="915" />
			<date type="published" when="2019-06">Jun. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b80">
	<analytic>
		<title level="a" type="main">Can We Trust Fair-AI?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Alvarez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pugnana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>State</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Turini</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v37i13.26798</idno>
	</analytic>
	<monogr>
		<title level="j">AAAI</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">13</biblScope>
			<biblScope unit="page" from="15421" to="15430" />
			<date type="published" when="2023-06">Jun. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b81">
	<analytic>
		<title level="a" type="main">The Dangers of Counterfactual Causal Thinking about Detecting Racial Discrimination</title>
		<author>
			<persName><forename type="first">I</forename><surname>Kohler-Hausmann</surname></persName>
		</author>
		<idno type="DOI">10.2139/ssrn.3050650</idno>
	</analytic>
	<monogr>
		<title level="j">SSRN Journal</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b82">
	<analytic>
		<title level="a" type="main">A theoretical agenda for feminist HCI</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Rode</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.intcom.2011.04.005</idno>
	</analytic>
	<monogr>
		<title level="j">Interacting with Computers</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="393" to="400" />
			<date type="published" when="2011-09">Sep. 2011</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
