<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">&quot;My Kind of Woman&quot;: Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Miriam</forename><surname>Doh</surname></persName>
							<email>miriam.doh@umons.ac.be</email>
							<affiliation key="aff0">
								<orgName type="laboratory">ISIA Lab - Université de Mons (UMONS)</orgName>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="laboratory">IRIDIA Lab</orgName>
								<orgName type="institution">Université Libre de Bruxelles (ULB)</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anastasia</forename><surname>Karagianni</surname></persName>
							<email>anastasia.karagianni@vub.be</email>
							<affiliation key="aff2">
								<orgName type="department">Law, Science, Technology &amp; Society (LSTS) Research Group</orgName>
								<orgName type="institution">Vrije Universiteit Brussels</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">&quot;My Kind of Woman&quot;: Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">A583EC6F7818933FF2EA39DA380F678D</idno>
					<note type="submission">with the support of the Brussels Capital Region (Innoviris and Paradigm)</note>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Gender Bias</term>
					<term>Facial Analysis</term>
					<term>Generative AI</term>
					<term>EU AI ACT</term>
					<term>GDPR</term>
					<term>Data Fairness</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This study delves into gender classification systems, shedding light on the interaction between social stereotypes and algorithmic determinations. Drawing on the "averageness theory," which suggests a relationship between a face's attractiveness and the human ability to ascertain its gender, we explore the potential propagation of human bias into artificial intelligence (AI) systems. Utilising the AI model Stable Diffusion 2.1, we have created a dataset containing various connotations of attractiveness to test whether the correlation between attractiveness and accuracy in gender classification observed in human cognition persists within AI. Our findings indicate that akin to human dynamics, AI systems exhibit variations in gender classification accuracy based on attractiveness, mirroring social prejudices and stereotypes in their algorithmic decisions. This discovery underscores the critical need to consider the impacts of human perceptions on data collection and highlights the necessity for a multidisciplinary and intersectional approach to AI development and AI data training. By incorporating cognitive psychology and feminist legal theory, we examine how data used for AI training can foster gender diversity and fairness under the scope of the AI Act 1 and GDPR 2 , reaffirming how psychological and feminist legal theories can offer valuable insights for ensuring the protection of gender equality and non-discrimination in AI systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Language, the cornerstone of human interaction, encapsulates our thoughts, knowledge, experiences, and creative endeavours. It reflects our societies' historical and cultural complexities, perpetuating values, structures, and, inevitably, stereotypes <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. Stereotypes, first dissected in the Social Sciences by Lippmann in 1922, delineate the "typical image" or representation conjured when referring to specific groups or situations <ref type="bibr" target="#b2">[3]</ref>. Despite their role in simplifying our understanding of the world, stereotypes often carry negative connotations, particularly in the realm of gender stereotypes, where gender identities are considered vastly different, echoing assumptions of biologically determined roles that dictate societal positions based on physical attributes or emotional capacities <ref type="bibr" target="#b3">[4]</ref>.</p><p>Feminist theory critically examines gender stereotypes, exploring how social norms and expectations shape cis-normative perceptions of femininity and masculinity <ref type="bibr" target="#b3">[4]</ref>. In particular, A. Oakley contributed to exploring gender as a social construct, challenging the reduction of gender to mere biological determinants and asserting its foundation in social, economic, and cultural constructs <ref type="bibr" target="#b4">[5]</ref>.</p><p>Given the deeply rooted nature of gender stereotypes in societal and cultural contexts, it is essential to examine how these biases are transferred into and expressed within digital technologies, particularly AI. While AI systems hold immense potential to revolutionise various aspects of our lives, they are not immune to embedding discrimination in subtle yet pervasive ways. 
Historically dominated by masculine perspectives, the technological field prompts concerns about inclusivity and the possibility that AI may perpetuate existing gender biases <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>.</p><p>For instance, in 2021, it was revealed that API image labelling services generated sexist labels <ref type="bibr" target="#b7">[8]</ref>. In a dataset where all individuals had visible hair and wore professional attire, the Google Cloud Vision image labelling algorithm consistently paid more attention to women's hairstyles and fashion than men's, even though both had the same occupation as members of Congress. These algorithms labelled men as "officials," "entrepreneurs," and "military officers," while women were often associated with labels like "hairstyle" or "beauty" <ref type="bibr" target="#b7">[8]</ref>. Additionally, Microsoft's NSFW service detected female subjects as adult content at a higher rate <ref type="bibr" target="#b8">[9]</ref>.</p><p>Such disparities underscore the urgency of our exploration into AI and gender bias, aiming to uncover whether these digital advancements are catering exclusively to male-dominated narratives or forging a path toward inclusivity.</p><p>Humans can introduce biases that become embedded in AI systems by determining which datasets, variables, and rules the algorithms learn from to make predictions. 
In this context, our work adopts a critical approach towards the data collection process, fully aware of how this process can be influenced by human perception, and acknowledging that, according to a 2019 report by the European Union Agency for Fundamental Rights <ref type="bibr" target="#b9">[10]</ref>, data quality is an important risk factor for bias in AI.</p><p>As demonstrated in the aforementioned cases, the datasets used in these systems and their filtering processes highlight the interaction between words, images, and stereotypical connotations, which can negatively influence AI. To understand these mechanisms, this work aims to identify potential human biases in gender classification systems.</p><p>To lead this exploration, we adopt an approach aligned with the field of "Artificial Cognition" <ref type="bibr" target="#b10">[11]</ref>. This area of research is based on the hypothesis that, just as cognitive psychology has been used to understand the human mind as a 'black box', its theories can serve as a starting point for understanding the mechanisms of AI systems in their opacity. Consequently, this field suggests that by applying cognitive psychology theories to AI, we can gain insights into how human biases might be reflected in AI's decision-making processes. Specifically, we explore the "averageness theory" <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b14">15]</ref> within cognitive psychology, examining how perceptions of attractiveness could influence AI's gender classification accuracy. Through a dataset generated by the AI model Stable Diffusion 2.1, theoretical insight and empirical analysis are blended to scrutinise the manifestations of human gender bias in AI classification systems.</p><p>To contextualise our empirical investigation, we examine how the AI Act and GDPR address cognitive gender bias in classification systems. 
Moreover, this examination underscores the importance of regulation in addressing the legal challenges posed by generative AI-based data augmentation, particularly those related to gender bias and data protection principles such as collection limitation, purpose specification, use limitation, data minimisation, transparency, data quality, access and correction, retention limitation, automated decision-making, and profiling <ref type="bibr" target="#b15">[16]</ref>.</p><p>This investigation, rooted in a deep understanding of gender dynamics based on feminist legal theory <ref type="bibr" target="#b16">[17]</ref> and averageness theory, seeks to shed light on how unconscious assumptions and biases can shape future technologies, highlighting the need for careful ethical and legal consideration in the design and implementation of AI.</p><p>The work is structured as follows: Section 1 introduces the work; Section 2 reviews state-of-the-art studies on bias in gender classification systems and text-to-image generation methods; Section 3 presents the core idea of the investigation; Section 4 describes the experimental setup; Section 5 reports the results of the experiments; Section 6 provides a legal analysis of the issue; and Section 7 concludes the work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Gender Bias in AI Classification Systems</head><p>In transitioning to a detailed analysis of gender bias within AI systems, particularly in image gender classification, it is evident that these biases extend and amplify existing societal imbalances. One of the critical works in the debate on gender bias in AI systems is "Gender Shades" <ref type="bibr" target="#b17">[18]</ref>, which revealed a trend of gender classification systems, such as those offered by APIs like Microsoft or IBM, showing more significant inaccuracies in classifications related to women and, in particular, women of colour. Furthermore, it emerged that such inaccuracies tend to increase with darker skin tones, highlighting a problematic combination of gender and racial biases. However, understanding the causes of these biases is a complex task. AI systems often operate as "black boxes," and the data used to train them are opaque, making it difficult to identify underlying behaviour.</p><p>Another consideration is that different gender classification services employ unique training data and have specific infrastructure requirements that influence gender classification <ref type="bibr" target="#b18">[19]</ref>. This diversity among services raises questions about the consistency and reliability of gender classifications produced by AI systems. One of the main theories explaining gender bias in AI systems suggests that the discriminated demographic group may be underrepresented within the training data sets used for model training, as supported by the authors of <ref type="bibr" target="#b17">[18]</ref>, who showed that the most common data sets used in gender classification lacked diversity in terms of skin colour. 
However, this theory has been debunked by studies showing that balancing the training dataset did not eliminate bias <ref type="bibr" target="#b19">[20]</ref>. In the search for further causes, an important discovery is that skin type does not seem to be the determining factor in the accuracy of gender classification. The problem seems more complex and is related to the persistence of stereotypes within this classification. This could make the issue even more intersectional, as it could harm multiple social classes. For example, <ref type="bibr" target="#b20">[21]</ref> highlighted that makeup and eye features are significant predictive factors for classifying a face as female, raising concerns about the perpetuation of gender stereotypes. Another study <ref type="bibr" target="#b21">[22]</ref> has suggested that besides makeup, features such as hairstyle, facial structure, and clothing could be more relevant than skin type in determining gender, justifying that women were more prone to false non-match rate (FNMR) than men in a face recognition context. <ref type="bibr" target="#b22">[23]</ref> reported that gender differences in FNMR are not universal. Gender social conventions related to hairstyle and makeup, by definition, can vary significantly among social groups, so it seems likely that they manifest in various ways. Social conventions related to hairstyle and makeup also change with a person's age, and therefore, they play a role in understanding how facial recognition accuracy varies across age groups.</p><p>Furthermore, an interesting study <ref type="bibr" target="#b18">[19]</ref> attempted to analyse these algorithms by considering the transgender community and trying to classify transwomen and transmen. 
One of the most striking results of this research is that transgender men had the lowest true-positive rate in gender classification, suggesting that there is a more cis-normative male representation for the male category, which is less subject to variety and diversity in the training data. Outside the world of research and computer science, the public has also begun to wonder about the causes of misclassifications or certain system decisions. In 2021, an algorithmic artist and transgender individual, Ada Ada Ada <ref type="bibr" target="#b23">[24]</ref>, attempted to test how algorithms perceive gender. By using her own transgender body, the artist found several methods for tricking gender recognition technology into seeing a gender different from the initial judgment. For example, she discovered how varying her emotional expression, head tilt, hair and beard, eyes, and nose could lead to being classified as a specific gender. This result comes after a series of machine tests and post-analysis of the results obtained. This project once again demonstrates how these classifications are based on social stereotypes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Bias in Image Generation</head><p>Regarding text-to-image (TTI) systems, academic inquiry has underscored a marked prevalence of demographic biases, especially those pertaining to gender and race. These biases are evidenced through stereotypical depictions across assorted domains, such as vocations and personal traits, underscoring an inclination towards the over-representation of attributes linked with whiteness and masculinity <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref>. Examinations have elucidated that a principal factor contributing to these biases is the instructional content utilised to train models, typically sourced from the internet, which mirrors the stereotypes and prejudices extant within society <ref type="bibr" target="#b24">[25]</ref>. Despite recognising their inherent biases, employing models like CLIP <ref type="bibr" target="#b26">[27]</ref> to steer the generative process in apparatuses such as Stable Diffusion further exacerbates the issue, as it amplifies the perpetuation of biases <ref type="bibr" target="#b27">[28]</ref>. Furthermore, the exploration of representations engendered by TTI models has divulged that biases and stereotypes are not confined to the portrayal of individuals but also extend to objects, clothes, and even national identities <ref type="bibr" target="#b28">[29]</ref>, reflecting a wide spectrum of demographic biases. Whilst endeavours have been undertaken to ameliorate these biases, for instance, through the analysis of models' latent spaces to render the generated images more representative, the efficacy of such measures remains in question <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b30">31]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Proposed idea: From Human Bias to Machine Bias</head><p>This work is inspired by the theories of cognitive psychology concerning gender perception and embarks upon a translational exploration from human to machine.</p><p>Gender discrimination within human cognitive processes has been extensively probed by cognitive psychology, with initial examinations focusing on how distinct facial features between males and females influence gender perception. Studies such as <ref type="bibr" target="#b31">[32,</ref><ref type="bibr" target="#b32">33]</ref> have illuminated that facial dimensions, nose shape, prominence of jaws and eyebrows, and the structure of cheekbones contribute to gender perception, yet it is acknowledged that no single trait definitively dictates it. Indeed, it is highlighted that the complexity of gender perception surpasses mere physiognomy, showcasing how generalisations can readily evolve into stereotypes. This nuance in defining gender underscores the necessity of critically evaluating stereotypical visual representations of men and women-a phenomenon that has been extensively documented across various media <ref type="bibr" target="#b33">[34,</ref><ref type="bibr" target="#b34">35,</ref><ref type="bibr" target="#b35">36</ref>], yet remains underexplored in the digital domain <ref type="bibr" target="#b36">[37]</ref>.</p><p>In this context, a pivotal role is played by the averageness theory, suggesting that faces deemed attractive, due to their prototypical nature, are more easily classified by gender <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b14">15]</ref>. This theory is supported by findings that facial attractiveness facilitates classification in adults. 
However, this correlation is observed to vary across genders, with attractive female faces generally perceived as highly feminine, unlike their male counterparts <ref type="bibr" target="#b37">[38,</ref><ref type="bibr" target="#b38">39,</ref><ref type="bibr" target="#b39">40]</ref>. These dynamics introduce the concept of "face space", where faces are categorised in a multidimensional space based on the variance of facial traits and their encoding <ref type="bibr" target="#b40">[41]</ref>, with attractive and prototypically feminine faces positioned at the centre of this space.</p><p>Building on these foundations, our study explores how cognitive biases manifest within AI systems, specifically focusing on gender classification mechanisms. We delve into the potential propagation of gender stereotypes and investigate the role of attractiveness in classification accuracy. Recent research has largely focused on identifying biases related to gender, ethnicity, makeup usage, and skin colour. However, our approach aims to innovate by emphasising the impact of attractiveness-a composite attribute influenced by multiple facial features-which plays a crucial role in the perception and classification of gender.</p><p>Our analysis of bias is approached from two dimensions:</p><p>• First Level of Representation <ref type="bibr" target="#b41">[42]</ref>: We question whether classification systems consistently achieve the same level of accuracy across all analysed groups (attractive/unattractive, women/men). • Conditional Demographic Parity <ref type="bibr" target="#b42">[43]</ref>: We consider scenarios where biases may arise if the system systematically produces only a subset of possible labels, even if the algorithm's output is correct. 
For example, if men and women in a sample are dressed similarly, an unbiased algorithm would be expected to return the "clothing" label with equal frequency for each gender.</p><p>To summarise, our primary research question is: RQ1: How does the averageness theory influence the performance of AI algorithms in gender classification?</p><p>Several secondary questions support this:</p><p>• RQ1.a: Is there a difference in classification accuracy between attractive and unattractive faces within gender groups in AI algorithms? • RQ1.b: Do the performances of AI algorithms in gender classification maintain uniformity across different demographic groups, particularly when considering attractiveness? • RQ1.c: Do gender classification algorithms exhibit gender stereotypes, and in what forms do these stereotypes manifest?</p></div>
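The two bias dimensions above can be operationalised as simple group-wise statistics. The following Python sketch is illustrative only: the `sample` records, group names, and helper functions are hypothetical, not taken from the paper's experimental code.

```python
from collections import Counter

def accuracy_by_group(records):
    """First Level of Representation: classification accuracy per group.

    records: iterable of (group, true_label, predicted_label) tuples.
    """
    correct, total = Counter(), Counter()
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += (truth == pred)
    return {g: correct[g] / total[g] for g in total}

def label_frequency(records, label):
    """Conditional Demographic Parity: how often each group receives `label`,
    regardless of whether the prediction is correct."""
    hits, total = Counter(), Counter()
    for group, _truth, pred in records:
        total[group] += 1
        hits[group] += (pred == label)
    return {g: hits[g] / total[g] for g in total}

# Hypothetical toy records: (group, true gender, predicted gender)
sample = [
    ("attractive_women", "W", "W"), ("attractive_women", "W", "W"),
    ("unattractive_women", "W", "M"), ("unattractive_women", "W", "W"),
    ("attractive_men", "M", "M"), ("unattractive_men", "M", "M"),
]
```

Under the first criterion, equal accuracies across groups would indicate parity; under the second, comparable samples should receive each label with comparable frequency.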
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experimental Setup</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Rationale for Synthetic Dataset Generation</head><p>When addressing the subject of attraction, a frequent critique is raised concerning its inherently subjective nature. However, it is noteworthy that datasets have been developed to tackle this aspect, exemplified by the HotOrNot dataset <ref type="bibr" target="#b43">[44,</ref><ref type="bibr" target="#b44">45,</ref><ref type="bibr" target="#b45">46]</ref>, the SCUT-FBP dataset <ref type="bibr" target="#b46">[47]</ref>, and the CelebA dataset <ref type="bibr" target="#b47">[48]</ref>. In particular, the HotOrNot dataset was created by collecting user ratings of attractiveness from the HotOrNot website, a site launched in the 2000s where users rated pictures of individuals on a scale from 1 to 10, generating a large dataset of images paired with attractiveness scores. For example, a dataset version was formally presented in <ref type="bibr" target="#b45">[46]</ref>, where researchers used it to improve image annotation in attractiveness task scenarios. The SCUT-FBP dataset was developed to provide a more systematic set of facial images for beauty prediction tasks. Like the previous dataset, SCUT-FBP contains facial images with beauty scores annotated by multiple human raters. This dataset aimed to create a more controlled environment for studying facial attractiveness by controlling scene and facial-expression conditions. Lastly, the CelebA dataset is a large-scale face attributes dataset with celebrity images, each annotated with 40 attribute labels, including one related to attractiveness. Unlike the previous two datasets, CelebA was designed to facilitate general research in face analysis (detection, attribute prediction, and other fields like beauty prediction). 
Despite these efforts, datasets in this domain exhibit significant variability, especially a lack of representation across various ethnicities or groups of people (e.g., SCUT-FBP is limited to 500 samples and focuses primarily on Asian and Caucasian faces, and CelebA contains only celebrity individuals). Moreover, HotOrNot is based on data collected from a website where people voluntarily uploaded photos of themselves to be rated; there was no control over the data type collected. This resulted in a lack of consistent distribution across ethnicity and gender, leading to an uncontrolled environment.</p><p>Consequently, for this study, a compact synthetic dataset has been compiled in light of these limitations, generated using Stable Diffusion <ref type="bibr" target="#b48">[49,</ref><ref type="bibr" target="#b49">50]</ref> as shown in Figure <ref type="figure" target="#fig_0">1</ref>. The choice of this approach is deeply rooted in Stable Diffusion's comprehensive training across diverse image and label datasets. This training equips the model with a grasp of the subjective nuances of human attractiveness, allowing it to reflect the varied interpretations of facial attractiveness found in datasets. Moreover, the rationale for using a synthetic dataset lies in the ability to control and vary the generated images' attributes systematically. This approach addresses the limitations of existing datasets by ensuring a balanced representation of different ethnicities and providing a consistent framework for studying the subjective aspects of attractiveness. Additionally, synthetic data is gaining traction in data augmentation <ref type="bibr">[51?</ref> ], making it intriguing to explore the type of representations these datasets can provide for this study. Finally, this method allows the investigation of the two dimensions of bias presented in Section 3. 
In particular, these dimensions are analysed by testing various gender classification models, observing the variation in accuracy (First Level of Representation), and qualitatively analysing the results generated by Stable Diffusion for the requested prompt (Conditional Demographic Parity). </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Dataset Creation and Processing</head><p>As mentioned, to create a balanced and diverse dataset, Stable Diffusion was tasked with generating images based on the following prompts: 'frontal photograph of an attractive/unattractive ethnicity man/woman'. Within these prompts, the descriptor ethnicity was systematically alternated with "White", "Black", and "Asian", ensuring a diverse representation of ethnicities within the resulting dataset.</p><p>Following this approach, the Stable Diffusion API was utilised to generate the dataset; specifically, version 2.1 of the stability/stable-diffusion model <ref type="bibr" target="#b49">[50]</ref>, accessible through the demo provided by Hugging Face, was employed. The default guidance scale value of 9 was kept. This process yielded 200 images per class, for a total of 2,400 images. The images were subsequently cropped to focus on the face, using the Multi-Task Cascaded CNN (MTCNN) <ref type="bibr" target="#b51">[52]</ref> face detection algorithm to standardise the size of the face portions in all images. Some of the generated samples were lost in this step because no face was detected in them. The final dataset contains 2,324 cropped images distributed according to specific proportions, as shown in Table <ref type="table" target="#tab_0">1</ref>.</p></div>
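The prompt grid described above can be reproduced with a few lines of Python. This is a sketch of the combinatorics only: the `build_prompts` helper is ours, and the actual calls to the Hugging Face Stable Diffusion 2.1 demo and the MTCNN cropping step are omitted.

```python
from itertools import product

ATTRACTIVENESS = ["attractive", "unattractive"]
ETHNICITIES = ["White", "Black", "Asian"]
GENDERS = ["man", "woman"]

def build_prompts():
    """Instantiate the template
    'frontal photograph of an attractive/unattractive <ethnicity> man/woman'."""
    return [
        f"frontal photograph of an {attr} {eth} {gen}"
        for attr, eth, gen in product(ATTRACTIVENESS, ETHNICITIES, GENDERS)
    ]

prompts = build_prompts()
# 2 attractiveness levels x 3 ethnicities x 2 genders = 12 classes;
# 200 generations per class gives the 2,400 raw images reported above.
assert len(prompts) == 12
```

Each of the twelve prompts would then be submitted 200 times to the generation API before face detection and cropping.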
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Gender Classification Models and Metrics</head><p>Regarding the accuracy analysis, the models considered are those offered by Amazon Rekognition from Amazon Web Services (AWS) <ref type="bibr" target="#b52">[53]</ref>, the DeepFace library <ref type="bibr" target="#b53">[54]</ref>, and InsightFace <ref type="bibr" target="#b54">[55]</ref>. Our selection included a commercial API, Amazon, and two projects widely recognised on GitHub: DeepFace (with 9.6k stars) and InsightFace (with 20.9k stars). Among these, DeepFace is the only one that provides data on the accuracy of its gender recognition model, achieving an accuracy of 97.44%.</p><p>To evaluate the model performances, we take into account the following metrics:</p><p>• PPV (Positive Predictive Value): a metric used to analyse diagnostic test results or predictive models. In this formula, "TP" stands for true positives, cases where the test or model correctly predicted membership in a category, while "FP" stands for false positives, cases where the test or model incorrectly predicted that an item belongs to a category when it does not. PPV measures the proportion of correct positive predictions relative to the total positive predictions and is expressed as a percentage.</p><formula xml:id="formula_0">PPV = TP / (TP + FP)<label>(1)</label></formula><p>• ER (Error Rate): a metric that measures the proportion of incorrect predictions relative to the total predictions, expressed as a percentage. Here "FN" stands for false negatives, cases where the test or model erroneously predicted that an item does not belong to a category when it does. A lower error rate indicates a higher accuracy of the test or model.</p><formula xml:id="formula_1">Error Rate = (FP + FN) / (TP + FP + FN)<label>(2)</label></formula><p>The metrics of interest spanned gender, attractiveness, and ethnicity, as shown in Tables 2 and 3, where "A" stands for attractive, "U" for unattractive, "M" for men, and "W" for women. An analysis of gender classification revealed a noticeable difference in model performance, particularly when examining the gradient of attractiveness. Whether deemed attractive or not, the models' accuracy showed little variation for male subjects. However, when classifying female subjects, the models displayed a change in accuracy between attractive and unattractive groups. InsightFace's Positive Predictive Value (PPV) decreased from 85.61% for attractive women to 62.74% for those considered unattractive. The disparity was even more pronounced for DeepFace, which saw a PPV drop from 67.5% for attractive women to 21.22% for unattractive women. Amazon Rekognition generally emerged as the most robust model; nonetheless, a slight performance difference was observed: from 100% PPV for attractive women to 88.96% for unattractive women. Regarding ethnicity, the most disadvantaged group for InsightFace and DeepFace was unattractive Black women, with an error rate of 44.44% for InsightFace and 85.86% for DeepFace. Conversely, for Amazon Rekognition, Black subjects were less disadvantaged, while unattractive Asian women showed a more significant performance degradation.</p></div>
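Both metrics can be computed directly from confusion counts. A minimal sketch follows, using the standard PPV definition (correct positive predictions over all positive predictions) and the error-rate expression that excludes true negatives; the function names are ours, not from the paper's code.

```python
def ppv(tp, fp):
    """Positive Predictive Value (%): TP / (TP + FP)."""
    if tp + fp == 0:
        raise ValueError("no positive predictions to evaluate")
    return 100.0 * tp / (tp + fp)

def error_rate(tp, fp, fn):
    """Error rate (%): incorrect predictions over all counted predictions,
    (FP + FN) / (TP + FP + FN); true negatives are not counted here."""
    return 100.0 * (fp + fn) / (tp + fp + fn)

# Example with hypothetical counts: 90 correct positives, 10 false positives.
print(ppv(90, 10))          # 90.0
print(error_rate(80, 10, 10))  # 20.0
```

In the study's setting, counts would be accumulated per group (e.g., attractive women) before applying these formulas.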
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experimental results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Analysis of Gender Classification</head><p>While InsightFace and DeepFace showed variable performance between attractive and unattractive females, Amazon Rekognition consistently maintained high accuracy across gender and attractiveness attributes. The calculated error rate disparities across the Amazon Rekognition, InsightFace, and DeepFace models underscore distinct biases concerning gender and perceived attractiveness, as documented in Figure <ref type="figure" target="#fig_2">2</ref>. InsightFace exhibited a minimal gap for men, suggesting uniform performance across levels of attractiveness. However, a substantial disparity was observed for women, with unattractive women experiencing significantly higher error rates. DeepFace revealed the starkest contrast in error rates among women, with unattractive women facing dramatically higher error rates, highlighting a potential bias towards attractiveness in female gender recognition. DeepFace also presents the largest difference between the male and female error-rate gaps, at 45.80 percentage points.</p><p>These findings indicate a persistent trend where women, especially those categorised as unattractive, are at a disadvantage due to higher error rates. </p></div>
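The attractiveness disparity for women can be recomputed from the PPV figures reported in Section 5 (values in percent). The dictionary layout and helper below are ours; note this gap is a PPV difference, a different quantity from the 45.80 error-rate gap difference cited for DeepFace.

```python
# PPVs for women as reported in the text (percent).
reported_ppv = {
    "InsightFace":        {"attractive_women": 85.61, "unattractive_women": 62.74},
    "DeepFace":           {"attractive_women": 67.50, "unattractive_women": 21.22},
    "Amazon Rekognition": {"attractive_women": 100.00, "unattractive_women": 88.96},
}

def attractiveness_gap(model):
    """PPV drop (percentage points) from attractive to unattractive women."""
    scores = reported_ppv[model]
    return round(scores["attractive_women"] - scores["unattractive_women"], 2)

for model in reported_ppv:
    print(model, attractiveness_gap(model))
```

DeepFace shows the widest PPV drop of the three models, consistent with the qualitative ranking in the text.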
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Quantitative Analysis of Physical Characteristics and Facial Expressions in a Stable Diffusion-Generated Face Dataset -"Conditional demographic parity" (RQ1.c)</head><p>To understand the kind of physiognomy generated by the Stable Diffusion model in response to the prompt, a qualitative analysis of the dataset was conducted by examining images of individuals, leading to several observations. Initially, average faces were created by overlaying images of various groups of attractive and non-attractive individuals, both women and men. This analysis revealed expressive differences between the average attractive and non-attractive faces. The average attractive face invariably appears smiling or calm, whereas the average non-attractive face tends to exhibit a more severe expression. Furthermore, attractive average faces appear younger than their non-attractive counterparts (fig. <ref type="figure" target="#fig_3">3</ref>). From this initial analysis of averages, with scrutiny applied to each image in turn, three interesting observations emerged:</p><p>• Differences in makeup: There is a marked difference in the incidence of makeup between the average attractive and non-attractive female faces. Attractive women show signs of makeup with a more pronounced application, especially on the lips. By contrast, non-attractive women have lighter or even absent makeup, especially Asian women. An exception is Black women, who are rarely generated without makeup regardless of attractiveness. • Similarities among Black subjects: It is interesting to note that there are no significant differences between the average attractive and non-attractive faces for Black men and women, except for a slightly more smiling expression in attractive Black subjects. Moreover, Black male subjects are never generated without a beard, as shown by both the attractive and non-attractive average faces. 
• Age Gap: Among women of all ethnicities, youth appears to be a distinctive trait of attractiveness.</p><p>White women deemed attractive tend to show youthful visual characteristics, with rare exceptions of white hair, suggesting a strong link between youthfulness and the perception of beauty. Conversely, non-attractive women of the same ethnicity appear older, with a greater frequency of white hair, suggesting that ageing may negatively impact their aesthetic perception. A similar trend is observed among Asian women, where youthful features prevail among those considered attractive, while their non-attractive counterparts show more marked signs of ageing. Black women follow the same trend, but with a less pronounced age gap. Regarding men, the link between age and attractiveness manifests less markedly than in women but remains significant. Attractive white men can vary in age, displaying both youthful characteristics and signs of ageing, such as white hair or wrinkles, indicating a broader range of attractive traits. Non-attractive men, however, tend to be characterised by a seemingly more advanced age. This pattern of age-related variation is also reflected among Asian men. As with women, the trend holds for Black men but with a smaller gap.</p><p>After conducting the primary analysis, it was decided to validate the observations concerning makeup differences and the age gap using attribute classification systems. Amazon Rekognition was utilised for the age attribute, as its API also provides age detection and, among all the evaluated models, it exhibited the fewest errors in gender classification. A lightened-moon MXNet model trained for facial-attribute classification was employed <ref type="bibr" target="#b55">[56]</ref>, pre-trained on the CelebA dataset's attribute labels <ref type="bibr" target="#b56">[57]</ref>.</p><p>• Age analysis: Amazon Rekognition allows the detection of an age class. 
Since this range is variable, to facilitate the analysis we adopted a set of fixed age bands ranging from 0 to "≥ 70", proceeding by decades, as reported in Table 4. • Attribute analysis: Some of the earlier observations can be validated by examining the dataset through attribute detection. First, the age gap is confirmed by the observation that, for both women and men, the percentage of the "Young" attribute decreases when moving from attractive to unattractive subjects, supporting the notion that unattractive individuals are typically depicted as older. The difference in makeup application is also evident: among attractive women, "Wearing Lipstick" and "Heavy Makeup" appear among the top four detected attributes, at 91.50% and 71.74% respectively. Conversely, for unattractive women, makeup is not a prominently detected attribute, with "Wearing Lipstick" observed at only 16.15% (Table 5).</p></div>
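Amazon Rekognition's `DetectFaces` response reports an `AgeRange` interval (`Low`/`High`) rather than a point estimate. One way to map those variable intervals onto the fixed decade bands used above is via the interval midpoint; this is a sketch under that assumption, as the paper does not specify its exact binning rule:

```python
def decade_bin(age_range: dict) -> str:
    """Map a Rekognition AgeRange ({'Low': ..., 'High': ...}) to a fixed decade band."""
    mid = (age_range["Low"] + age_range["High"]) / 2
    if mid >= 70:
        return ">=70"
    lo = int(mid // 10) * 10
    return f"{lo}-{lo + 9}"

print(decade_bin({"Low": 23, "High": 31}))  # midpoint 27.0 → "20-29"
print(decade_bin({"Low": 68, "High": 77}))  # midpoint 72.5 → ">=70"
```

Counting the resulting bands per group then yields the per-decade distributions compared across attractive and unattractive subjects in Table 4.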
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Controversial images</head><p>Some controversial images emerged during image generation with Stable Diffusion. Despite the explicit prompt instructing the model to display a face, the generated images often depicted bodies or body parts instead. One noteworthy observation is that body parts such as lips and prominently emphasised breasts were often generated for attractive women (fig. 4.B). Another remarkable case was the image generated in response to the prompt for unattractive Black women, shown in figure <ref type="figure" target="#fig_4">4</ref>.A. The image primarily depicts a partially nude pregnant abdomen, censored in intimate areas. Additionally, there is a broader incidence of chubby faces among the non-attractive groups (figure 4.C). For men, this is also evident in the attribute-detection results, which show "Chubby" among the top ten detected attributes. This last observation resonates with the stigmatisation of fat bodies, which are often seen as unproductive and inefficient in Western culture <ref type="bibr" target="#b57">[58]</ref>. This connection again underscores the role of societal biases in shaping AI outputs, where attributes such as "Chubby", linked to "unattractive", become markers of negative judgment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">A legal analysis of the essential requirements of gender classification systems' operation 6.1. Data (e)quality under the scope of the AI Act and GDPR</head><p>The analysis of the averageness theory revealed that specific physical attributes associated with attractiveness, such as facial features or perceived age, inadvertently influence gender classification within AI systems. Following the hypothesis of the Gender Shades study <ref type="bibr" target="#b17">[18]</ref>, it is possible that this result is linked to the descriptive features of the class most represented in the training data.</p><p>EU non-discrimination legislation is crucial for safeguarding a high level of (e)quality in AI development and implementation settings. The obligation to respect the principle of non-discrimination is enshrined in EU primary law: in Article 2 of the Treaty on European Union (TEU) 1 , Article 10 of the Treaty on the Functioning of the European Union (TFEU) 2 (requiring the Union to combat discrimination on several grounds) and Articles 20 and 21 of the EU Charter of Fundamental Rights (equality before the law and non-discrimination based on a non-exhaustive list of grounds) 3 . All prohibited grounds of discrimination listed in the Charter are relevant where algorithms are used.</p><p>In the AI context, algorithmic discrimination has become one of the critical points in the discussion about the consequences of an intensively datafied world <ref type="bibr" target="#b58">[59]</ref>. The quality of the datasets used to train machine learning algorithms is of prime importance to the performance of AI systems, as "an algorithm is only as good as the data it works with" <ref type="bibr" target="#b59">[60]</ref>. When data is gathered, it may contain socially constructed biases, inaccuracies or errors that must be tackled before any training based on this dataset <ref type="bibr" target="#b60">[61]</ref>. 
One of the reasons explaining the existence of bias in datasets is the "choice of subjects to the omission of certain characteristics or variables that properly capture the phenomenon we want to predict, to changes over time, place or situation, to the way training data is selected" <ref type="bibr" target="#b61">[62]</ref>. AI algorithms trained on poor-quality information -both from the quantitative and qualitative point of view-can negatively affect the outputs or decisions of these mechanisms, leading to "incorrect model predictions" <ref type="bibr" target="#b62">[63]</ref>. It is worth recalling that a dataset "is always a reflection of the society from which the information has been obtained. If the society contains discriminatory elements and structures, these are also in the training data set" <ref type="bibr" target="#b63">[64]</ref>.</p><p>More than that, datasets used for training AI systems "may suffer from the inclusion of inadvertent historical bias, incompleteness and bad governance models". The perpetuation of such biases could lead to inadvertent (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalisation <ref type="bibr" target="#b59">[60]</ref>. For instance, AI training datasets may exclude or under/misrepresent people from different geographical areas, neglecting or misconstruing their interests and needs. This exclusion can potentially exacerbate current inequalities and further marginalise these communities. 
<ref type="bibr" target="#b64">[65]</ref>.</p><p>The AI Act mandates specific requirements and restrictions regarding the use of data for the development, training and testing of AI systems, encompassing factors like the quality, relevance, accuracy, representativeness and diversity of the data (Recitals 14a, 28a, 38, 43, 44 and 45 AI Act, among others), as well as respect for the rights and interests of the data subjects and the data providers (Articles 9, 10, 54 and 55 AI Act). AI training and testing can be performed either on datasets of real data or on synthetic data, as in our case. Synthetic data is artificial data generated from original (real) data and a pre-trained model in order to reproduce the characteristics and structure of the original data (Recital 111 AI Act).</p><p>Whenever real data is used, Articles 9 (1) and 6 (1)(a), (b) and (f) GDPR apply. Yet, the AI Act lacks explicit guidance on the proper procedures and legal basis for processing such data, particularly concerning consent acquisition and the provision of information and transparency, which are enshrined in the GDPR. The GDPR provides the proper legal framework by imposing different or additional conditions and constraints on the use of data for AI purposes. In practical terms, however, the GDPR might hardly be invoked by data subjects. This absence of clarity raises questions about how data subjects could ask to restrict processing (Article 18 GDPR) and to delete and erase data (Article 17 GDPR).</p><p>At this point, we should highlight that the initial requirement outlined in Article 10 (5) (a) AI Act states that data processing under this article is permissible only when its goal, specifically bias detection and correction, "cannot effectively be achieved through processing synthetic or anonymised data." This means synthetic or anonymised data should first be used to identify and correct bias. 
In contrast, real data can be used only when the option of synthetic or anonymised data has been exhausted. Furthermore, when biases are based on sensitive data-like ethnicity data-the AI Act requires using anonymised data to process sensitive data as a primary bias detection and correction tool <ref type="bibr" target="#b65">[66]</ref>.<note place="foot" n="1">Treaty on European Union, 1992, https://www.cvce.eu/content/publication/2002/4/9/2c2f2b85-14bb-4488-9ded-13f3cd04de05/publishable_en.pdf.</note><note place="foot" n="2">Treaty on the Functioning of the European Union, 1958, https://eur-lex.europa.eu/resource.html?uri=cellar:2bf140bf-a3f8-4ab2-b506-fd71826e6da6.0023.02/DOC_1&amp;format=PDF.</note><note place="foot" n="3">European Charter of Fundamental Rights, 2012, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:12012P/TXT.</note> AI-friendly data anonymisation tools align with Recital 45 AI Act, which states, "Practices that are prohibited by Union law, including data protection law, non-discrimination law, consumer protection law, and competition law, should not be affected by this Regulation". It is worth mentioning <ref type="bibr" target="#b65">[66]</ref> that the inclusion of synthetic data in the AI Act was underlined by the European Commission's Joint Research Centre [67], which supported it as a means of rebalancing mis- or under-represented groups in terms of ethnicity, gender, etc.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">A gender-based risk assessment according to the AI Act and GDPR</head><p>Another important aspect of gender classifiers, from a legal point of view, is the extent to which they pose a risk to human rights (Articles 3 (2) and 27 AI Act). The AI Act adopts a risk-based approach (Article 9 AI Act) to oversee AI systems, categorising them into prohibited, high-risk and low-risk categories. However, the criteria and thresholds for determining the risk level of an AI system are not consistently clear. This lack of clarity may result in the exclusion of specific AI systems that pose significant risks to data protection rights, such as those that process sensitive personal data or involve large-scale processing of personal data. For this purpose, a gender-based risk assessment of our study case is needed.</p><p>To begin with, AI systems that profile individuals based on automated processing of personal data to assess various aspects of a person's life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location, or movement, are always considered high-risk AI systems. For instance, one area in which AI profiling is used is border management control by law enforcement agencies <ref type="bibr" target="#b66">[68,</ref><ref type="bibr" target="#b67">69]</ref>.</p><p>The classification rules for high-risk AI systems are enshrined in Article 6 (2) AI Act and Annex III AI Act. Remote biometric identification, biometric categorisation and emotion recognition systems are considered high-risk AI systems. Yet, the two requirements outlined in Article 6 (1) AI Act must be fulfilled for the above-mentioned AI systems to be considered high-risk. More particularly, the AI system should be intended for use as a safety component of a product, or should itself be a product, covered by the Union harmonisation legislation listed in Annex I. 
Additionally, the product whose safety component under point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or the putting into service of that product, according to the Union harmonisation legislation listed in Annex I.</p><p>Needless to say, these requirements are not fulfilled in our study case, which was an experiment for academic purposes (Article 2 (6) AI Act). Yet, for the above reasons, if this gender classifier were placed on the market, it would be considered a high-risk AI system. Moreover, solely for the legal analysis of our study case, we must stress that the data used is biometric data derived from facial images. Biometric data is personal data resulting from specific technical processing relating to a natural person's physical, physiological or behavioural characteristics, which allow or confirm the unique identification of that person. Biometric data, like data revealing racial or ethnic origin or someone's sexual behaviour or sexual orientation, is considered a special category of data (Articles 3 (35) AI Act and 9 (1) GDPR). However, gender data is not. This difference is important, as the GDPR does not offer extra protection for gender data processing, which signals that gender can be stored and processed without further constraints.</p><p>The legal analysis concludes with three key points. Firstly, it underscores the importance of enhancing data curation practices to improve the reliability and predictability of gender classifiers. This measure aims to mitigate the risk of generating biased outcomes that could result in arbitrary discrimination, unfair decisions, denial of services, or inappropriate interference with individuals' fundamental rights or freedoms. 
Data collection, curation and selection are essential components of AI risk management <ref type="bibr" target="#b68">[70]</ref> and a fundamental aspect of the AI data governance framework. This framework aims to establish robust procedures ensuring high-quality data availability, labelling, and use.</p><p>Secondly, the feminist theory of the social construction of gender offers insights into patriarchal power dynamics and the privileges of (cis-normative) male dominance in data. Through this lens, we can interrogate how to advance gender equality and prevent discrimination through data quality and curation. Given that "attractiveness" and beauty ideals differ across cultural and geographic boundaries, we endorse a feminist intersectional approach <ref type="bibr" target="#b69">[71]</ref> that acknowledges and accommodates these diversities. This approach aims to ensure that datasets used for AI training are not predominantly focused on white, middle-class, young, and heterosexual women.</p><p>Lastly, advocating for an interdisciplinary approach to developing AI applications becomes imperative in light of this background. This entails integrating the averageness theory from the field of psychology and analysing the legal safeguards to address gender bias in AI. The AI Act emphasises the necessity of such collaboration, as highlighted in Recital 142, to ensure that AI advancements yield socially advantageous results and address socio-economic disparities. This involves fostering cooperation among AI developers, experts in inequality and non-discrimination, academics, and other relevant stakeholders.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusion</head><p>This exploration of the complexities of gender perception and classification within artificial intelligence highlights the intricate interplay between technical capabilities and socio-psychological insights. Through the lens of the "averageness theory" and the creation of a synthetic dataset, the study illuminates the intrinsic biases associated with traditionally attractive features, demonstrating how attractiveness, understood as youthfulness, the use of make-up, or particular facial expressions, algorithmically influences representations, particularly those of women.</p><p>A qualitative and quantitative analysis of the generated dataset reveals that men are also subject to gender stereotypes, with strong conformity to cis-normative stereotypes (for instance, having a beard or short hair). Consequently, the issue concerns not only the binary categorisation of gender but also how, within this dichotomy, stereotypes of femininity and masculinity, often tied to stereotypical beauty standards, are perpetuated and reinforced by AI algorithms. Such distortions can have unintended and harmful consequences for individuals and groups, underscoring the urgent need for ethical and responsible AI development.</p><p>In this context, the introduction of legislative discussions, particularly the efforts of the European Union exemplified by the AI Act, emphasises the necessity of integrating ethical, legal, and technical considerations to ensure the development of artificial intelligence technologies that respect human rights and equality. Indeed, from a legal standpoint, ensuring the regulation of diverse and representative datasets is fundamental <ref type="bibr" target="#b70">[72]</ref>. 
However, adequate data curation requires a solid understanding of the various biases and discriminations that need attention and correction <ref type="bibr" target="#b71">[73,</ref><ref type="bibr" target="#b72">74]</ref>. For this reason, this work promotes the necessity of an interdisciplinary approach in the development phases of AI technologies. Beyond cognitive psychology, we highlight how studies such as the feminist theory of the social construction of gender offer insights into the dynamics of patriarchal power and the privileges of cis-normative male dominance in the data collection process.</p><p>Only through an integration of technological innovation, social analysis, and solid regulatory oversight can we achieve an AI that is not only advanced but also fair and inclusive. Artificial intelligence technologies that truly serve society require commitment from all social actors since implementing fairness measures within AI algorithms is indispensable despite their unavoidable non-neutrality <ref type="bibr" target="#b72">[74]</ref>. As we lay the groundwork for future explorations, this study advocates for a multidisciplinary strategy to identify and mitigate AI biases. The intertwining of technological innovation with robust regulatory oversight presents a promising avenue towards developing AI that is not only advanced but also aligned with the principles of fairness and inclusivity, thereby ensuring that AI serves the diverse needs of the global community.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Examples of sample images in the created dataset using Stable Diffusion 2.1 for the prompts 'front photograph of an unattractive/attractive ethnicity man/woman. ' for White, Black, and Asian groups</figDesc><graphic coords="6,117.13,65.60,361.03,211.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Based on Accuracy -"First level of representation" (RQ1.a. and RQ1.b)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Error Rate gap between Attractive and Unattractive Men/Women for the models analysed. The red line reports the inner gap between the female and male error gap for each model.</figDesc><graphic coords="8,115.62,453.76,351.71,155.69" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Composite faces and face samples of attractive/unattractive men/women with clear differences in makeup use, face expression and age gap.</figDesc><graphic coords="9,117.13,145.29,361.03,161.58" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Controversial output for prompts referring to an unattractive black woman in (A) and (B) for attractive women (black/white). (B) Cases of "chubby" face in the unattractive groups</figDesc><graphic coords="11,162.25,294.23,270.76,152.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Representation of class percentage in the dataset created.</figDesc><table><row><cell cols="3">Ethnicity Attractive Unattractive</cell></row><row><cell>Asian</cell><cell>16.59%</cell><cell>16.62%</cell></row><row><cell>Black</cell><cell>16.16%</cell><cell>16.54%</cell></row><row><cell>White</cell><cell>17.64%</cell><cell>16.11%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Gender classification performance for Amazon Rekognition, measured in terms of Positive Predictive Value (PPV) and error rate.</figDesc><table><row><cell></cell><cell></cell><cell></cell><cell cols="2">Amazon Rekognition</cell><cell></cell></row><row><cell cols="6">Group Metric (%) Asian Black White Avg.</cell></row><row><cell>A-M</cell><cell>PPV ER</cell><cell>100 -</cell><cell>99.46 0.53</cell><cell>95.07 4.93</cell><cell>98.18 1.82</cell></row><row><cell>U-M</cell><cell>PPV ER</cell><cell>99.45 0.54</cell><cell>100 -</cell><cell>99.00 1.00</cell><cell>99.48 0.52</cell></row><row><cell>A-W</cell><cell>PPV ER</cell><cell>100 -</cell><cell>100 -</cell><cell>100 -</cell><cell>100 -</cell></row><row><cell>U-W</cell><cell>PPV ER</cell><cell>75.77 24.23</cell><cell>99.49 0.51</cell><cell>91.63 8.37</cell><cell>88.96 11.04</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Comparison of gender classification performance between InsightFace and DeepFace, measured in terms of Positive Predictive Value (PPV) and Error Rate (ER).</figDesc><table><row><cell></cell><cell></cell><cell></cell><cell cols="2">InsightFace</cell><cell></cell><cell></cell><cell cols="2">DeepFace</cell><cell></cell></row><row><cell cols="6">Group Metric (%) Asian Black White Avg.</cell><cell cols="4">Asian Black White Avg.</cell></row><row><cell>A-M</cell><cell>PPV ER</cell><cell>79.45 20.54</cell><cell>93.54 6.45</cell><cell>91.00 9.00</cell><cell>87.98 11.99</cell><cell>98.95 1.04</cell><cell>99.46 0.53</cell><cell>100 -</cell><cell>99.47 0.52</cell></row><row><cell>U-M</cell><cell>PPV ER</cell><cell>85.86 14.13</cell><cell>91.39 8.60</cell><cell>87.19 12.80</cell><cell>88.14 11.84</cell><cell>100 -</cell><cell>100 -</cell><cell>100 -</cell><cell>100 -</cell></row><row><cell>A-W</cell><cell>PPV ER</cell><cell>96.41 3.58</cell><cell>75.77 24.22</cell><cell>84.65 15.34</cell><cell>85.61 14.38</cell><cell>72.82 27.18</cell><cell>53.09 46.91</cell><cell>76.72 23.28</cell><cell>67.54 32.46</cell></row><row><cell>U-W</cell><cell>PPV ER</cell><cell>70.61 29.38</cell><cell>55.55 44.44</cell><cell>62.06 37.93</cell><cell>62.74 37.25</cell><cell>17.01 82.99</cell><cell>14.14 85.86</cell><cell>32.51 67.49</cell><cell>21.22 78.78</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head></head><label></label><figDesc><ref type="bibr" target="#b3">4</ref>. With this age range set, a clear trend emerges regarding age between attractive and non-attractive women, as observed visually from a qualitative analysis. Non-attractive women tend to distribute in older age bands, with a significant presence among those aged 40-49 years (36.95%) and 50-59 years (37.93%) and some samples over 70 years. In contrast, attractive women are generally younger, with about 60% of the samples in the 20-29 year age range. This trend is consistent across all ethnicities. Furthermore, it is possible to see that generally, Asian women are depicted as younger compared to all other groups. At the same time, non-attractive women are older than the others. A similar pattern is observed for men, with non-attractive men slightly older than the attractive ones. The most attractive men belong to the age range of 20 to 39 years, while non-attractive men are predominantly in the age range of 30 to 59 years. Ethnic differences follow a similar trend, with slight variations in age range distributions. Once again, Asians are seen as younger by the model and white men as older. A strong connection between attractiveness and age is evident, with attractive individuals appearing younger than non-attractive ones, particularly among women.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>Consolidated Distribution of Samples Based on Age Ranges for Attractive (A) and Unattractive (U) Men (M) and Women (W) Across White, Black, and Asian ethnicities by AmazonRekognition</figDesc><table><row><cell>Age</cell><cell>Men</cell><cell></cell><cell></cell><cell>Women</cell><cell></cell></row><row><cell>White</cell><cell>Black</cell><cell>Asian</cell><cell>White</cell><cell>Black</cell><cell>Asian</cell></row><row><cell>A</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 5</head><label>5</label><figDesc>Average top-10 attributes detected for Attractive/Unattractive (A/U) Women (W) and Attractive/Unattractive Men (M). The attributes are generalised into 5 main groups: Hair Attributes (purple), Makeup and Accessories (orange), Facial Expression and Features (light blue), Beard Attributes (green) and Other Physical Attributes (yellow).</figDesc><table><row><cell>A-M</cell><cell></cell><cell>U-M</cell><cell></cell><cell>A-W</cell><cell></cell><cell>U-W</cell><cell></cell></row><row><cell>Attributes</cell><cell>(%)</cell><cell>Attributes</cell><cell>(%)</cell><cell>Attributes</cell><cell>(%)</cell><cell>Attributes</cell><cell>(%)</cell></row><row><cell>Big_Lips</cell><cell>96,77</cell><cell>Big_Lips</cell><cell>96,24</cell><cell>Young</cell><cell>100</cell><cell>No_Beard</cell><cell>95,79</cell></row><row><cell>Young</cell><cell>95,86</cell><cell>Big_Nose</cell><cell>64,00</cell><cell>No_Beard</cell><cell>98,80</cell><cell>Young</cell><cell>77,78</cell></row><row><cell cols="2">High_Cheekbones 73.08</cell><cell>Young</cell><cell cols="5">57,42 Wearing_Lipstick 91,50 High_Cheekbones 65,24</cell></row><row><cell>Smiling</cell><cell>77,62</cell><cell>No_Beard</cell><cell>54,51</cell><cell>Heavy_Makeup</cell><cell>71,74</cell><cell>Black_Hair</cell><cell>49,42</cell></row><row><cell>Black_Hair</cell><cell cols="5">54,63 High_Cheekbones 50,47 High_Cheekbones 63,64</cell><cell>Big_Lips</cell><cell>48,23</cell></row><row><cell>Big_Nose</cell><cell>51,54</cell><cell>Black_Hair</cell><cell>46,72</cell><cell>Smiling</cell><cell>57,56</cell><cell>Smiling</cell><cell>44,91</cell></row><row><cell>No_Beard</cell><cell>46,39</cell><cell>Goatee</cell><cell>38,20</cell><cell>Black_Hair</cell><cell>53,54</cell><cell>Big_Nose</cell><cell>41,09</cell></row><row><cell cols="2">5_o_Clock Shadow 44,83</cell><cell>Smiling</cell><cell>33,53</cell><cell>Big_Nose</cell><cell 
cols="3">40,47 Wearing_Lipstick 16,15</cell></row><row><cell>Goatee</cell><cell>30,49</cell><cell>Mustache</cell><cell>22,58</cell><cell>Big_Lips</cell><cell>34,41</cell><cell>Bangs</cell><cell>10,49</cell></row><row><cell>Bushy_Eyebrows</cell><cell>28,38</cell><cell>Chubby</cell><cell>14,08</cell><cell>Wavy_Hair</cell><cell>21,09</cell><cell>Brown_Hair</cell><cell>9,89</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679 HHAI-WS 2024: Workshops at the Third International Conference on Hybrid Human-Artificial Intelligence (HHAI), June 10-14, 2024, Malmö, Sweden † This work was supported by the ARIAC project (No. 2010235), funded by the Service Public de Wallonie (SPW Recherche). ‡ This work was supported by the FARI -AI for the Common Good Institute (ULB-VUB), financed by the European Union,</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Language, gesture, and emotional communication: An embodied view of social interaction</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">De</forename><surname>Stefani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">De</forename><surname>Marco</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Psychology</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">465649</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Linguistic intergroup bias: Stereotype perpetuation through language</title>
		<author>
			<persName><forename type="first">A</forename><surname>Maass</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Experimental Social Psychology</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="79" to="121" />
			<date type="published" when="1999">1999</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">&quot;The casual cruelty of our prejudices&quot;: On Walter Lippmann&apos;s theory of stereotype and its &quot;obliteration&quot; in psychology and social science</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">P</forename><surname>Bottom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">T</forename><surname>Kong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the History of the Behavioral Sciences</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="363" to="394" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title/>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Lefton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Brannon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychology</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Oakley</surname></persName>
		</author>
		<title level="m">Sex, gender and society</title>
				<imprint>
			<pubPlace>London</pubPlace>
			<publisher>Maurice Temple Smith</publisher>
			<date type="published" when="1972">1972</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Women, men and programming: Knowledge, metaphors and masculinity</title>
		<author>
			<persName><forename type="first">I</forename><surname>Boivie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Gender issues in learning and working with information technology: Social constructs and cultural contexts</title>
				<imprint>
			<publisher>IGI Global</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="1" to="24" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">AI in education: An opportunity riddled with challenges</title>
		<author>
			<persName><forename type="first">I</forename><surname>Bartoletti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The ethics of artificial intelligence in education</title>
				<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="74" to="90" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Diagnosing gender bias in image recognition systems</title>
		<author>
			<persName><forename type="first">C</forename><surname>Schwemmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Knight</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bello-Pardo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Oklobdzija</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schoonvelde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lockhart</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Socius</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page">2378023120967171</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Mauro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schellmann</surname></persName>
		</author>
		<ptr target="https://www.theguardian.com/technology/2023/feb/08/biased-ai-algorithms-racy-women-bodies" />
		<title level="m">&apos;There is no standard&apos;: investigation finds AI algorithms objectify women&apos;s bodies</title>
				<imprint>
			<publisher>The Guardian</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>AI tools rate photos of women as more sexually suggestive than those of men, especially if nipples, pregnant bellies or exercise is involved. This story was produced in partnership with the Pulitzer Center&apos;s AI Accountability Network.</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<orgName>European Union Agency for Fundamental Rights</orgName>
		</author>
		<ptr target="https://fra.europa.eu/en/publication/2019/fundamental-rights-report-2019" />
		<title level="m">Fundamental Rights Report 2019</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Artificial cognition: How experimental psychology can help generate explainable artificial intelligence</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E T</forename><surname>Taylor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">W</forename><surname>Taylor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychonomic Bulletin &amp; Review</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="454" to="475" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Attractive faces are only average</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Langlois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Roggman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Science</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="115" to="121" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">What is average and what is not average about attractive faces?</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Langlois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Roggman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Musselman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Science</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="214" to="220" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Fitting the mind to the world: Face adaptation and attractiveness aftereffects</title>
		<author>
			<persName><forename type="first">G</forename><surname>Rhodes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jeffery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">L</forename><surname>Watson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">W</forename><surname>Clifford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Nakayama</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Science</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="558" to="566" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Are average facial configurations attractive only because of their symmetry?</title>
		<author>
			<persName><forename type="first">G</forename><surname>Rhodes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sumich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Byatt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Science</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="52" to="58" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Artificial intelligence and data protection: Observations on a growing conflict</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">H</forename><surname>Cate</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Dockery</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Economic regulations and law</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="107" to="130" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Feminist legal theory: A primer</title>
		<author>
			<persName><forename type="first">N</forename><surname>Levit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Verchick</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>NYU Press</publisher>
			<biblScope unit="volume">74</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Gender shades: Intersectional accuracy disparities in commercial gender classification</title>
		<author>
			<persName><forename type="first">J</forename><surname>Buolamwini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference on fairness, accountability and transparency</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="77" to="91" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">How computers see gender: An evaluation of gender classification in commercial facial analysis services</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Scheuerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Paul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Brubaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the ACM on Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1" to="33" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yatskar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ordonez</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1811.08489</idno>
		<title level="m">Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">V</forename><surname>Muthukumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pedapati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ratha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sattigeri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-W</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kingsbury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Thomas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mojsilovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">R</forename><surname>Varshney</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1812.00099</idno>
		<title level="m">Understanding unequal gender classification accuracy from face images</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Gendered differences in face recognition accuracy explained by hairstyles, makeup, and facial morphology</title>
		<author>
			<persName><forename type="first">V</forename><surname>Albiero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>King</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">W</forename><surname>Bowyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Information Forensics and Security</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="127" to="137" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Grother</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ngan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hanaoka</surname></persName>
		</author>
		<title level="m">Face recognition vendor test (FRVT): Part 3, demographic effects</title>
				<meeting><address><addrLine>Gaithersburg, MD</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
		<respStmt>
			<orgName>National Institute of Standards and Technology</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><surname>Ada Ada Ada</surname></persName>
		</author>
		<ptr target="https://ada-ada-ada.art/projects/in-transitu" />
		<title level="m">In transitu</title>
				<imprint>
			<date type="published" when="2023-09-25">2024. 2023-09-25</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Luccioni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Akiki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jernite</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.11408</idno>
		<title level="m">Stable bias: Analyzing societal representations in diffusion models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Social biases through the text-to-image generation lens</title>
		<author>
			<persName><forename type="first">R</forename><surname>Naik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Nushi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society</title>
				<meeting>the 2023 AAAI/ACM Conference on AI, Ethics, and Society</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="786" to="808" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krueger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brundage</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2108.02818</idno>
		<title level="m">Evaluating CLIP: Towards characterization of broader capabilities and downstream implications</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Learning transferable visual models from natural language supervision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hallacy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Goh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mishkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="8748" to="8763" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Nakashima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Garcia</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2312.03027</idno>
		<title level="m">Stable diffusion exposed: Gender bias from prompt to image</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Brack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Schramowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Friedrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hintersdorf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kersting</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2212.06013</idno>
		<title level="m">The stable artist: Steering semantics in diffusion latent space</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models</title>
		<author>
			<persName><forename type="first">P</forename><surname>Schramowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Deiseroth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kersting</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE/CVF Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="22522" to="22531" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">The human face</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sanderson</surname></persName>
		</author>
		<idno type="DOI">10.1017/S0021932000000377</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Biosocial Science</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="201" to="204" />
			<date type="published" when="1974">1974. 1975</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">The physiognomic basis of sexual stereotyping</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Nakdimen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The American Journal of Psychiatry</title>
		<imprint>
			<biblScope unit="volume">141</biblScope>
			<biblScope unit="page" from="499" to="503" />
			<date type="published" when="1984">1984</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Gender stereotypes in science education resources: A visual content analysis</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">H</forename><surname>Kerkhoven</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Land-Zandstra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Rodenburg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PLoS ONE</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">e0165037</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Gender bias in the Iranian high school EFL textbooks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Amini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Birjandi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">English Language Teaching</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="134" to="147" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Quantification of gender representation bias in commercial films based on image analysis</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Jang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the ACM on Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1" to="29" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Female librarians and male computer programmers? gender bias in occupational images on digital media platforms</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">K</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chayko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Inamdar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Floegel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Association for Information Science and Technology</title>
		<imprint>
			<biblScope unit="volume">71</biblScope>
			<biblScope unit="page" from="1281" to="1294" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">The perception of face gender: The role of stimulus structure in recognition and classification</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>O&apos;Toole</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Deffenbacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Valentin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mckee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Huff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Abdi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Memory &amp; Cognition</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="146" to="160" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Sex-typicality and attractiveness: Are supermale and superfemale faces super-attractive?</title>
		<author>
			<persName><forename type="first">G</forename><surname>Rhodes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hickford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jeffery</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">British Journal of Psychology</title>
		<imprint>
			<biblScope unit="volume">91</biblScope>
			<biblScope unit="page" from="125" to="140" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Maxims or myths of beauty? a meta-analytic and theoretical review</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Langlois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kalakanis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Rubenstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Larson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hallam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Smoot</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Bulletin</title>
		<imprint>
			<biblScope unit="volume">126</biblScope>
			<biblScope unit="page">390</biblScope>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">A unified account of the effects of distinctiveness, inversion, and race in face recognition</title>
		<author>
			<persName><forename type="first">T</forename><surname>Valentine</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Quarterly Journal of Experimental Psychology</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="page" from="161" to="204" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">U</forename><surname>Noble</surname></persName>
		</author>
		<title level="m">Algorithms of Oppression: How Search Engines Reinforce Racism</title>
				<imprint>
			<publisher>New York University Press</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Algorithmic decision making and the cost of fairness</title>
		<author>
			<persName><forename type="first">S</forename><surname>Corbett-Davies</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Pierson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Feller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Goel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Huq</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</title>
				<meeting>the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining<address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="797" to="806" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<monogr>
		<title level="m" type="main">Automatic prediction of human attractiveness</title>
		<author>
			<persName><forename type="first">R</forename><surname>White</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Eden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Maire</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
		<respStmt>
			<orgName>UC Berkeley CS280A Proj</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Predicting facial beauty without landmarks</title>
		<author>
			<persName><forename type="first">D</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Gong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. European Conference on Computer Vision (ECCV)</title>
				<meeting>European Conference on Computer Vision (ECCV)</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">Annotator rationales for visual recognition</title>
		<author>
			<persName><forename type="first">J</forename><surname>Donahue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Grauman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2011 International Conference on Computer Vision</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="1395" to="1402" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">SCUT-FBP: A benchmark dataset for facial beauty perception</title>
		<author>
			<persName><forename type="first">D</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Systems, Man, and Cybernetics</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1821" to="1826" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">CelebA-Spoof: Large-scale face anti-spoofing dataset with rich annotations</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer Vision-ECCV 2020: 16th European Conference</title>
				<meeting><address><addrLine>Glasgow, UK</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">August 23-28, 2020</date>
			<biblScope unit="page" from="70" to="85" />
		</imprint>
	</monogr>
	<note>Proceedings, Part XII</note>
</biblStruct>

<biblStruct xml:id="b48">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Rombach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Blattmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lorenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Esser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ommer</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2112.10752</idno>
		<title level="m">High-resolution image synthesis with latent diffusion models</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<monogr>
		<author>
			<persName><surname>Stability AI</surname></persName>
		</author>
		<ptr target="https://huggingface.co/spaces/stabilityai/stable-diffusion" />
		<title level="m">Stable Diffusion</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>Accessed 2024-04-10</note>
</biblStruct>

<biblStruct xml:id="b50">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2310.00277</idno>
		<title level="m">A unified framework for generative data augmentation: A comprehensive survey</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b51">
	<analytic>
		<title level="a" type="main">Joint face detection and facial expression recognition with MTCNN</title>
		<author>
			<persName><forename type="first">J</forename><surname>Xiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2017 4th International Conference on Information Science and Control Engineering (ICISCE)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="424" to="427" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b52">
	<monogr>
		<title level="m" type="main">Amazon Rekognition</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
		<respStmt>
			<orgName>Amazon Web Services</orgName>
		</respStmt>
	</monogr>
	<note>Accessed 2024-03-26</note>
</biblStruct>

<biblStruct xml:id="b53">
	<monogr>
		<title level="m" type="main">DeepFace: A lightweight face recognition and facial attribute analysis (age, gender, emotion and race) library for Python</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">I</forename><surname>Serengil</surname></persName>
		</author>
		<ptr target="https://github.com/serengil/deepface" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>Accessed 2024-03-26</note>
</biblStruct>

<biblStruct xml:id="b54">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Deng</surname></persName>
		</author>
		<ptr target="https://github.com/deepinsight/insightface" />
		<title level="m">InsightFace: 2D and 3D face analysis project</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>Accessed 2024-03-26</note>
</biblStruct>

<biblStruct xml:id="b55">
	<monogr>
		<author>
			<persName><surname>Tornadomeet</surname></persName>
		</author>
		<ptr target="https://github.com/tornadomeet/mxnet-face" />
		<title level="m">MXNet-face: A deep learning face recognition implementation using MXNet</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>Accessed 2024-04-10</note>
</biblStruct>

<biblStruct xml:id="b56">
	<analytic>
		<title level="a" type="main">Deep learning face attributes in the wild</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Tang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of International Conference on Computer Vision (ICCV)</title>
				<meeting>International Conference on Computer Vision (ICCV)</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b57">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Harjunen</surname></persName>
		</author>
		<title level="m">Neoliberal bodies and the gendered fat body</title>
				<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b58">
	<analytic>
		<title level="a" type="main">Decentering technology in discourse on discrimination</title>
		<author>
			<persName><forename type="first">S</forename><surname>Peña Gangadharan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Niklas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information, Communication &amp; Society</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="882" to="899" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b59">
	<analytic>
		<title level="a" type="main">Big data&apos;s disparate impact</title>
		<author>
			<persName><forename type="first">S</forename><surname>Barocas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Selbst</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Calif. L. Rev</title>
		<imprint>
			<biblScope unit="volume">104</biblScope>
			<biblScope unit="page">671</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b60">
	<monogr>
		<ptr target="https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai" />
		<title level="m">Ethics guidelines for trustworthy AI</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
		<respStmt>
			<orgName>European Commission</orgName>
		</respStmt>
	</monogr>
	<note>Accessed 2024-04-08</note>
</biblStruct>

<biblStruct xml:id="b61">
	<analytic>
		<title level="a" type="main">The role and challenges of education for responsible AI</title>
		<author>
			<persName><forename type="first">V</forename><surname>Dignum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">London Review of Education</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="1" to="11" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b62">
	<analytic>
		<title level="a" type="main">A legal framework for AI training data: from first principles to the Artificial Intelligence Act</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hacker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Law, Innovation and Technology</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="257" to="301" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b63">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Hoffmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Borgeaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mensch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Buchatskaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Rutherford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">D L</forename><surname>Casas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Hendricks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Welbl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Clark</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2203.15556</idno>
		<title level="m">Training compute-optimal large language models</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b64">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Hagerty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Rubinov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1907.07892</idno>
		<title level="m">Global AI ethics: a review of the social impacts and ethical implications of artificial intelligence</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b65">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Dominguez-Catena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Paternain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Galar</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2205.10049</idno>
		<title level="m">Assessing demographic bias transfer from dataset to model: A case study in facial expression recognition</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b66">
	<analytic>
		<title level="a" type="main">Robots and refugees: the human rights impacts of artificial intelligence and automated decision-making in migration</title>
		<author>
			<persName><forename type="first">P</forename><surname>Molnar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Research Handbook on International Migration and Digital Technology</title>
				<imprint>
			<publisher>Edward Elgar Publishing</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="134" to="151" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b67">
	<analytic>
		<title level="a" type="main">Artificial intelligence (AI) at Schengen borders: automated processing, algorithmic profiling and facial recognition in the era of techno-solutionism</title>
		<author>
			<persName><forename type="first">N</forename><surname>Vavoula</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">European Journal of Migration and Law</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="457" to="484" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b68">
	<monogr>
		<ptr target="https://www.nist.gov/itl/ai-risk-management-framework" />
		<title level="m">AI Risk Management Framework</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
		<respStmt>
			<orgName>National Institute of Standards and Technology</orgName>
		</respStmt>
	</monogr>
	<note>Accessed 2024-04-08</note>
</biblStruct>

<biblStruct xml:id="b69">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Lutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T H</forename><surname>Vivar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Supik</surname></persName>
		</author>
		<title level="m">Framing intersectionality: Debates on a multi-faceted concept in gender studies</title>
				<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b70">
	<analytic>
		<title level="a" type="main">Oppressive AI: Feminist categories to understand its political effects</title>
		<author>
			<persName><forename type="first">P</forename><surname>Pena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Varon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Why Is AI a Feminist Issue</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b71">
	<analytic>
		<title level="a" type="main">AI and the quest for diversity and inclusion: a systematic literature review</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Shams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zowghi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI and Ethics</title>
		<imprint>
			<biblScope unit="page" from="1" to="28" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b72">
	<analytic>
		<title level="a" type="main">Diversity in sociotechnical machine learning systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Fazelpour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De-Arteaga</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Big Data &amp; Society</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page">20539517221082027</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
