<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Mitigating Biases in Deep Learning Models: A Path Towards Fairness and Inclusivity</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Ismael</forename><surname>Garrido-Muñoz</surname></persName>
							<email>igmunoz@ujaen.es</email>
							<affiliation key="aff0">
								<orgName type="institution">Universidad de Jaén</orgName>
								<address>
									<addrLine>Campus Las Lagunillas s/n</addrLine>
									<postCode>23071</postCode>
									<settlement>Jaén</settlement>
									<country key="ES">España</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Mitigating Biases in Deep Learning Models: A Path Towards Fairness and Inclusivity</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">47C6BE9E23CD7389F2862B51530FBD19</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:05+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>bias</term>
					<term>deep learning</term>
					<term>nlp</term>
					<term>fairness</term>
					<term>mitigation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The emergence of large language models (LLMs) has revolutionized the field of natural language processing, facilitating remarkable progress across various domains. However, the inherent opaqueness of these models, which function as black boxes, presents significant challenges. The lack of transparency obstructs our comprehension of their internal mechanisms and decision-making processes, raising concerns about their reliability and fairness. Various forms of bias have already been identified within these models. It is crucial to identify the location and encoding of these biases within LLMs to enable the modifications necessary to ensure their safe and equitable application, free of social biases, in all kinds of areas. Given the extensive deployment of LLMs in real-world applications, their impact on individuals' lives is magnified. Thus, the subsequent phase of this thesis will focus on effectively mitigating biases in deep learning models.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The advent of GPT-3 <ref type="bibr" target="#b0">[1]</ref> has sparked a massive adoption of this model, with predictions of its profound impact on the labor market, as outlined by <ref type="bibr" target="#b1">[2]</ref>. This remarkable influence stems from the diverse range of capabilities that these models possess, including question answering, text generation, translation, summarization, information retrieval, acting as conversational agents, programming assistance, educational support, storytelling, and more.</p><p>However, despite the tremendous utility of LLMs, they also pose an emerging challenge: their tendency to operate as black boxes. While they exhibit impressive performance, their internal mechanisms and decision-making processes often remain opaque, making them difficult to comprehend and explain. This lack of transparency gives rise to concerns regarding their trustworthiness, fairness, and the potential biases that may be embedded within their models.</p><p>The concept of a black box refers to a system or model where the inputs and outputs are known, but the inner mechanisms and algorithms that generate those outputs remain concealed or poorly understood. LLMs, with their complex neural networks and millions, or even billions, of parameters, are intricate black boxes that often surpass human comprehension. This opaqueness hampers our ability to fully grasp the decision-making processes of these models, making it difficult to tackle biases, recognize potential vulnerabilities, and guarantee ethical and responsible utilization. Consequently, there is a pressing need to enhance transparency and develop techniques that shed light on the inner workings of LLMs.</p><p>In recent years, artificial intelligence has made significant advances, and a substantial portion of this progress can be attributed to neural network models. 
These models, trained on extensive datasets, have showcased remarkable capabilities in capturing various aspects of reality. However, while their ability to capture reality with precision is commendable, it can also have negative implications. One such concern arises from their propensity to inadvertently perpetuate and replicate undesirable stereotypes.</p><p>These models are already being used in multiple production systems such as medical systems <ref type="bibr" target="#b2">[3]</ref>, legal systems <ref type="bibr" target="#b3">[4]</ref>, hiring <ref type="bibr" target="#b4">[5]</ref>, content moderation <ref type="bibr" target="#b5">[6]</ref>, CRM <ref type="bibr" target="#b6">[7]</ref>, marketing <ref type="bibr" target="#b7">[8]</ref>, virtual assistants, harmful content detection <ref type="bibr" target="#b8">[9]</ref>, chatbots, etc.</p><p>These systems are used in products despite having been shown, at times, to be unsafe. It is well known that these black boxes sometimes cause unintended harm. One example is the COMPAS criminal risk assessment system, which assigned inaccurate recidivism risk scores to both white and black defendants: white individuals were systematically scored lower than their actual recidivism rates warranted, while black individuals were scored higher <ref type="bibr" target="#b9">[10]</ref>. Another example is the medical system Optum <ref type="bibr" target="#b10">[11]</ref>, which systematically allocated fewer resources to the treatment of black patients than to white patients with the same level of need.</p><p>This realization raises concerns about the fairness and potential harm that may arise from the application of non-explainable models in certain situations. For instance, Amazon discontinued the use of a recruitment tool <ref type="bibr" target="#b11">[12]</ref> after it was discovered to be biased against women. 
These examples highlight the presence of biases not only in language models but also in systems employing computer vision <ref type="bibr" target="#b12">[13]</ref>, audio processing <ref type="bibr" target="#b13">[14]</ref>, and linguistic corpora <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b15">[16]</ref>. It is crucial to address these biases, as they can perpetuate inequality and have real-world consequences. Understanding and mitigating biases in such systems is a pressing concern.</p><p>In the case of GPT-3 <ref type="bibr" target="#b0">[1]</ref> (or its frontends like ChatGPT or Bing GPT) or Google's alternative, Bard <ref type="bibr" target="#b16">[17]</ref>, studying these models is not feasible because they are provided as services through APIs or web interfaces. However, there have been releases of models with numbers of parameters and capabilities similar to the aforementioned ones. For instance, models like Llama <ref type="bibr" target="#b17">[18]</ref>, Vicuna <ref type="bibr" target="#b18">[19]</ref>, Bloom <ref type="bibr" target="#b19">[20]</ref>, OPT <ref type="bibr" target="#b20">[21]</ref>, XGLM <ref type="bibr" target="#b21">[22]</ref>, and the recent Falcon <ref type="bibr" target="#b22">[23]</ref> do provide access to the trained model weights. This access enables us to review, correct, or mitigate any biases present in them.</p><p>This will be the next step of the thesis. In the following section, we provide a brief overview of evaluation techniques, followed by a collection of the most relevant techniques for bias mitigation. A broader summary of the state of the art in studying bias in language models can be found in previous work <ref type="bibr" target="#b23">[24]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Bias in NLP with deep learning</head><p>When we talk about bias in language models, we can approach it as a representational problem <ref type="bibr" target="#b24">[25]</ref>. This refers to the bias that certain demographic groups face in terms of misrepresentation, including negative associations or even their absence in the data and consequently in the model. On the other hand, we can approach it as an allocation problem, which refers to issues of opportunities or resource distribution for individuals belonging to specific demographic groups.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Bias evaluation</head><p>There is extensive work on evaluating language models for bias, starting with the work of Bolukbasi et al. <ref type="bibr" target="#b25">[26]</ref> on simple word embeddings. Later studies approached the bias issue from the perspective of coreference resolution, such as <ref type="bibr" target="#b26">[27]</ref> with GloVe embeddings. Bias is also examined by measuring the association between concepts and protected attributes. Caliskan et al. <ref type="bibr" target="#b27">[28]</ref> created the Word Embedding Association Test (WEAT) for this purpose. This test was extended by Dev et al. <ref type="bibr" target="#b28">[29]</ref> and Manzini et al. <ref type="bibr" target="#b29">[30]</ref>, and by Lauscher et al. <ref type="bibr" target="#b30">[31]</ref>, who added more protected attributes and applied it to languages other than English. It was later adapted to more complex models like BERT, under the name SEAT, by May et al. <ref type="bibr" target="#b31">[32]</ref> and Tan and Celis <ref type="bibr" target="#b32">[33]</ref>.</p><p>There are other approaches for more complex models like BERT or GPT-2. Vig <ref type="bibr" target="#b33">[34]</ref> introduced visualization tools to understand where these models capture unwanted biases by examining their attention. SEAT, the adaptation of WEAT mentioned above, tests the protected attribute against a sentence instead of a word and is specifically designed for contextual models like BERT; this line of work was further extended to consider the full context instead of just the sentence level. The latest evaluation methods are applied to models like GPT-2, BERT, and ELMo, among others.</p><p>More complex models also make serious errors. A compendium of errors discovered in ChatGPT is presented in the work of Borji <ref type="bibr" target="#b34">[35]</ref>. 
The paper explains that this model is unable to successfully complete tasks that require spatial, temporal, or physical reasoning unless it has been specifically trained for those tasks.</p></div>
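To make the association tests above concrete, the WEAT effect size can be sketched in a few lines; the toy embedding vectors used below are illustrative assumptions, not the original test data or word lists:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: how differently two target word sets (X, Y)
    associate with two attribute word sets (A, B).
    Each argument is a list of embedding vectors."""
    def s(w):
        # differential association of one word with the two attribute sets
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sX = [s(x) for x in X]
    sY = [s(y) for y in Y]
    pooled = np.std(sX + sY, ddof=1)  # pooled standard deviation over all targets
    return (np.mean(sX) - np.mean(sY)) / pooled
```

Positive values indicate that the first target set associates more strongly with the first attribute set; the original test additionally reports a permutation-based p-value, omitted in this sketch.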
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Bias correction</head><p>The main approaches to address bias in language models consist of the following: fine-tuning the model <ref type="bibr" target="#b35">[36]</ref>, data augmentation to balance categories and avoid distortions towards one category <ref type="bibr" target="#b26">[27]</ref>, protecting the attribute during model training to prevent bias capture <ref type="bibr" target="#b36">[37]</ref>, or correcting the vector space of the model as presented in the works of Manzini et al. <ref type="bibr" target="#b29">[30]</ref>, Zhou et al. <ref type="bibr" target="#b37">[38]</ref>, and Dev and Phillips <ref type="bibr" target="#b36">[37]</ref>. Among these techniques, fine-tuning and model editing are considered the most realistic, especially in the case of large-scale models, since retraining a model from scratch would be very costly in terms of time, hardware resources, money, and the effort required to perform the pre-processing and tuning of training data.</p><p>One of the most promising techniques for model editing involves identifying how the model encodes certain knowledge and then making edits accordingly. The proposal of Meng et al. <ref type="bibr" target="#b38">[39]</ref> focuses on editing factual knowledge and serves as a foundation for further adaptation. This technique first uses causal mediation analysis to identify the influential parts of the model that contribute the most weight to choosing the last token. From there, the model's weights are edited to guide it towards the desired token. For example, if the model answers Obama to the question "What is the surname of the U.S. president?", the relevant weights can be located and corrected to select the desired token Biden, since this would be the updated and accurate answer. Similarly, this method can be generalized to make broader corrections. 
In fact, in a subsequent work <ref type="bibr" target="#b39">[40]</ref> they adapt this method to perform mass corrections across the model weights. They then evaluate whether the edit only changes the knowledge for the specific context given in the prompt or whether it generalizes, by asking about the same fact using different questions and contexts.</p><p>These techniques hold great potential in tackling bias, enhancing the accuracy, and bolstering the reliability of language models. By facilitating targeted edits that align with desired outcomes, these approaches enable the mitigation of unwanted biases in the models' responses. As a result, they contribute to an improved understanding of fairness and ensure more reliable and unbiased outputs from language models.</p></div>
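A minimal linear-algebra sketch of the kind of targeted weight edit described above (an illustrative simplification under toy assumptions, not the full causal-tracing and optimization pipeline of Meng et al. [39]):

```python
import numpy as np

def rank_one_edit(W, k, v_new):
    """Rank-one editing sketch: treat a linear layer W as an associative
    memory and add a rank-one update so that the key vector k now maps
    to v_new, while inputs orthogonal to k are left unchanged."""
    v_old = W @ k                  # value the layer currently returns for key k
    delta = v_new - v_old          # required change in the output
    u = k / np.dot(k, k)           # scaled so that np.dot(u, k) equals 1
    return W + np.outer(delta, u)  # edited weights: the new W maps k to v_new
```

The update changes the response for the edited key while leaving directions orthogonal to k untouched, which is the intuition behind correcting one association without disturbing unrelated knowledge.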
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Relevance of the problem</head><p>Every day, these enormous models are increasingly integrated into various products and production systems. However, this integration comes with its own set of challenges. From an economic standpoint, utilizing a biased system can lead to significant disadvantages, as it may not function effectively for all users. On the other hand, the impact of these models on people's lives cannot be overlooked. There are specific contexts, such as systems for resource distribution, employment, or bank credit, where it is crucial to avoid using models that may contain any form of bias. Therefore, it is imperative to thoroughly study bias in data models and understand its underlying causes. This knowledge will enable us to either avoid deploying biased models altogether or develop strategies to mitigate harmful biases when they arise.</p><p>Furthermore, when a language model is identified as not performing adequately in a production system, such as a commercial product, companies face important decisions. Given the immense size and cost associated with training these models, some proposed solutions may be difficult to justify from an economic perspective. For instance, training the model from scratch with revised, filtered, or corrected training data would entail significant expenses. Another option, albeit costly, could involve discontinuing the use of the model, as a poorly performing model is unsuitable for deployment in production systems. This proposition gains some relevance considering the potential non-compliance of such models with new European AI regulations <ref type="bibr" target="#b40">[41]</ref>. 
Alternatively, more practical approaches could involve retraining the model or leveraging state-of-the-art bias mitigation techniques to address the identified issues.</p><p>The choice of approach will depend on various factors, including the severity of the bias, the feasibility of retraining or mitigating the model, and the legal and ethical obligations that must be met. Regardless of the chosen course of action, it is essential to proactively address and rectify bias issues to ensure responsible and fair deployment of language models in real-world applications. By doing so, we can foster inclusivity, promote equitable outcomes, and uphold the principles of fairness and ethical AI.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Hypotheses and objectives</head><p>The following hypothesis is assumed: Given a language model based on deep learning, it will be possible to discern whether it contains biases, and characterize, measure, and mitigate them.</p><p>The following objectives are established:</p><p>• At this phase of the thesis, our primary focus is on the last point.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Methodology and the proposed experiments</head><p>As we move forward with the use of large language models, our next step will involve adapting and evaluating the previous work <ref type="bibr" target="#b41">[42]</ref> in the context of LLMs. Specifically, the previous study shed light on how models tend to perceive women based on their physical appearance, while men are assessed primarily based on their behavior. This pattern was observed across the majority of the models investigated.</p><p>To proceed, we will replicate the aforementioned experiment using large language models (LLMs) and analyze to what extent increasing the model size affects bias, i.e., whether it exacerbates or reduces it. Once this evaluation is completed, our focus will shift towards bias mitigation strategies.</p><p>To mitigate bias, we will construct a corpus of prompts that elicit biased responses from the models. This corpus will serve as a foundation for our work in two main areas. First, we will develop methods to detect and identify biased terms produced by the model in its responses. Second, we will explore the previously discussed fact-editing techniques to modify the behavior of the model with respect to the detected biases in order to reduce or eliminate them. This will require adapting the causal mediation analysis mechanism to our problem, since editing a specific fact is not the same as making an edit that causes a trade-off between different classes of a protected attribute. 
After the editing process, we will evaluate the performance of the model with the same set of prompts to check the effectiveness of the mitigation method.</p><p>By undertaking these steps, we aim to gain insights into the behavior of large language models regarding bias and work towards developing effective strategies for bias mitigation.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Conduct an intensive study of the state of the art regarding detection, evaluation, or mitigation of biases in deep learning models. • Analyze and characterize biases present in existing models. • Development of techniques and algorithms for unsupervised or semi-supervised detection and characterization of bias in existing models. • Development of techniques and algorithms for the mitigation or correction of bias in existing models.</figDesc></figure>
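The planned before-and-after evaluation over the prompt corpus could be sketched as follows; the generate callable and the biased-term list are placeholders for the actual model under study and the corpus still under construction:

```python
def bias_rate(generate, prompts, biased_terms):
    """Fraction of prompts whose generated response contains at least
    one term from biased_terms. `generate` is any callable mapping a
    prompt string to a response string (e.g. a wrapped LLM call)."""
    hits = 0
    for prompt in prompts:
        response = generate(prompt).lower()
        # count the prompt as biased if any flagged term appears
        if any(term in response for term in biased_terms):
            hits += 1
    return hits / len(prompts)
```

Comparing bias_rate on the same prompt set before and after an edit gives a first, coarse measure of the effectiveness of a mitigation method; finer-grained analyses would look at which terms appear and for which protected-attribute classes.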
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Language models are few-shot learners</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">B</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ryder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subbiah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neelakantan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shyam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Herbert-Voss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krueger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Henighan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Ziegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Winter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hesse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sigler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Litwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Berner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mccandlish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
		<idno>CoRR abs/2005.14165</idno>
		<ptr target="https://arxiv.org/abs/2005.14165" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">GPTs are GPTs: An early look at the labor market impact potential of large language models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Eloundou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Manning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mishkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Rock</surname></persName>
		</author>
		<idno>ArXiv abs/2303.10130</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Using clinical natural language processing for health outcomes research: Overview and actionable suggestions for future advances</title>
		<author>
			<persName><forename type="first">S</forename><surname>Velupillai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Suominen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Liakata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roberts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Shah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Morley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Osborn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hayes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Stewart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Downs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chapman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Dutta</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J Biomed Inform</title>
		<imprint>
			<biblScope unit="volume">88</biblScope>
			<biblScope unit="page" from="11" to="19" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Law and word order: Nlp in legal tech</title>
		<author>
			<persName><forename type="first">R</forename><surname>Dale</surname></persName>
		</author>
		<idno type="DOI">10.1017/S1351324918000475</idno>
	</analytic>
	<monogr>
		<title level="j">Natural Language Engineering</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="211" to="217" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Help wanted: an examination of hiring algorithms, equity, and bias</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bogen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rieke</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Gillespie</surname></persName>
		</author>
		<idno type="DOI">10.12987/9780300235029</idno>
		<title level="m">Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<ptr target="https://investor.salesforce.com/press-releases/press-release-details/2023/Salesforce-Announces-AI-Cloud--Bringing-Trusted-Generative-AI-to-the-Enterprise/default.aspx" />
		<title level="m">Salesforce Announces AI Cloud -Bringing Trusted Generative AI to the Enterprise -investor</title>
				<imprint>
			<date type="published" when="2023-06-18">2023. Accessed 18-Jun-2023</date>
		</imprint>
	</monogr>
	<note type="report_type">salesforce</note>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://news.adobe.com/news/news-details/2023/Adobe-Announces-New-Sensei-GenAI-Services-to-Reimagine-End-to-End-Marketing-Workflows/default.aspx" />
		<title level="m">Adobe Announces New Sensei GenAI Services to Reimagine End-to-End Marketing Workflows -news</title>
				<imprint>
			<date type="published" when="2023-06-18">2023. Accessed 18-Jun-2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Tabahriti</surname></persName>
		</author>
		<ptr target="https://www.businessinsider.com/twitter-now-relying-more-ai-identify-harmful-content-2022-12" />
		<title level="m">Twitter is now relying more on AI to identify harmful content, says its new trust and safety chief -businessinsider</title>
				<imprint>
			<date type="published" when="2022-06-18">2022. Accessed 18-Jun-2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Machine bias: There&apos;s software used across the country to predict future criminals. And it&apos;s biased against blacks</title>
		<author>
			<persName><forename type="first">J</forename><surname>Angwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Larson</surname></persName>
		</author>
		<ptr target="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Dissecting racial bias in an algorithm that guides health decisions for 70 million people</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Obermeyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mullainathan</surname></persName>
		</author>
		<idno type="DOI">10.1145/3287560.3287593</idno>
		<ptr target="https://dl.acm.org/doi/10.1145/3287560.3287593" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the conference on fairness, accountability, and transparency</title>
				<meeting>the conference on fairness, accountability, and transparency</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Amazon scraps secret ai recruiting tool that showed bias against women</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dastin</surname></persName>
		</author>
		<ptr target="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Howard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Borenstein</surname></persName>
		</author>
		<ptr target="https://www.americanscientist.org/article/trust-and-bias-in-robots" />
		<title level="m">Trust and bias in robots</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A field study of the impact of gender and user&apos;s technical experience on the performance of voice-activated medical tracking application</title>
		<author>
			<persName><forename type="first">J</forename><surname>Rodger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pendharkar</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ijhcs.2003.09.005</idno>
	</analytic>
	<monogr>
		<title level="j">Int. J. Hum.-Comput. Stud</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="page" from="529" to="544" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Extracting semantic representations from word co-occurrence statistics: A computational study</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Bullinaria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Levy</surname></persName>
		</author>
		<idno type="DOI">10.3758/BF03193020</idno>
		<ptr target="https://doi.org/10.3758/BF03193020" />
	</analytic>
	<monogr>
		<title level="j">Behavior Research Methods</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="510" to="526" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Text and corpus analysis: Computer-assisted studies of language and culture</title>
		<author>
			<persName><forename type="first">M</forename><surname>Barlow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Stubbs</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Corpus Linguistics</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="319" to="327" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Anil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Firat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lepikhin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">T</forename><surname>Passos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shakeri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Taropa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bailey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Chen</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2305.10403</idno>
		<title level="m">Palm 2 technical report</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Touvron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lavril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Izacard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Martinet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-A</forename><surname>Lachaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lacroix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Rozière</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hambro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Azhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lample</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2302.13971</idno>
		<title level="m">LLaMA: Open and Efficient Foundation Language Models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">W.-L</forename><surname>Chiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Sheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhuang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhuang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Gonzalez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Stoica</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">P</forename><surname>Xing</surname></persName>
		</author>
		<ptr target="https://lmsys.org/blog/2023-03-30-vicuna/" />
		<title level="m">Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">L</forename><surname>Scao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Akiki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E.-J</forename><surname>Pavlick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ilić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hesslow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Castagné</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Luccioni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Yvon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gallé</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2211.05100</idno>
		<title level="m">Bloom: A 176b-parameter open-access multilingual language model</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Roller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Artetxe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dewan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Diab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">V</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mihaylov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shleifer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Shuster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Simig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Koura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sridhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2205.01068</idno>
		<title level="m">Opt: Open pre-trained transformer language models</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Few-shot learning with multilingual language models</title>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">V</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mihaylov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Artetxe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Simig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bhosale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pasunuru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shleifer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Koura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Chaudhary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>O'horo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Kozareva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Diab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2112.10668</idno>
		<ptr target="https://arxiv.org/abs/2112.10668" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Almazrouei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Alobeidli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Alshamsi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Cappelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cojocaru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Debbah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Goffinet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Heslow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Launay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Malartic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Noune</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Pannier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Penedo</surname></persName>
		</author>
		<title level="m">Falcon-40B: an open large language model with state-of-the-art performance</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">A survey on bias in deep nlp</title>
		<author>
			<persName><forename type="first">I</forename><surname>Garrido-Muñoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Montejo-Ráez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Martínez-Santiago</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Ureña-López</surname></persName>
		</author>
		<idno type="DOI">10.3390/app11073184</idno>
		<ptr target="https://www.mdpi.com/2076-3417/11/7/3184" />
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Fairness in Language Models Beyond English: Gaps and Challenges</title>
		<author>
			<persName><forename type="first">K</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sitaram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Choudhury</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/2023.findings-eacl.157" />
	</analytic>
	<monogr>
		<title level="m">Findings of the Association for Computational Linguistics: EACL 2023</title>
				<meeting><address><addrLine>Dubrovnik, Croatia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="2106" to="2119" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings</title>
		<author>
			<persName><forename type="first">T</forename><surname>Bolukbasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Zou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Saligrama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kalai</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1607.06520</idno>
		<ptr target="http://arxiv.org/abs/1607.06520" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yatskar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ordonez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-W</forename><surname>Chang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1804.06876</idno>
		<title level="m">Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Semantics derived automatically from language corpora contain human-like biases</title>
		<author>
			<persName><forename type="first">A</forename><surname>Caliskan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Bryson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Narayanan</surname></persName>
		</author>
		<idno type="DOI">10.1126/science.aal4230</idno>
		<ptr target="https://www.science.org/doi/pdf/10.1126/science.aal4230" />
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<biblScope unit="volume">356</biblScope>
			<biblScope unit="page" from="183" to="186" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">OSCaR: Orthogonal subspace correction and rectification of biases in word embeddings</title>
		<author>
			<persName><forename type="first">S</forename><surname>Dev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Phillips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Srikumar</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.emnlp-main.411</idno>
		<ptr target="https://aclanthology.org/2021.emnlp-main.411" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2021 Conference on Empirical Methods in Natural Language Processing<address><addrLine>Online and Punta Cana, Dominican Republic</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="5034" to="5050" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings</title>
		<author>
			<persName><forename type="first">T</forename><surname>Manzini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">Yao</forename><surname>Chong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Black</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tsvetkov</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N19-1062</idno>
		<ptr target="https://aclanthology.org/N19-1062" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
		<title level="s">Long and Short Papers</title>
		<meeting>the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>Minneapolis, Minnesota</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="615" to="621" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">A general framework for implicit and explicit debiasing of distributional word vector spaces</title>
		<author>
			<persName><forename type="first">A</forename><surname>Lauscher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Glavas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">P</forename><surname>Ponzetto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Vulic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">On measuring social biases in sentence encoders</title>
		<author>
			<persName><forename type="first">C</forename><surname>May</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bordia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Bowman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rudinger</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N19-1063</idno>
		<ptr target="https://aclanthology.org/N19-1063" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
		<title level="s">Long and Short Papers</title>
		<meeting>the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>Minneapolis, Minnesota</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="622" to="628" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">C</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">E</forename><surname>Celis</surname></persName>
		</author>
		<title level="m">Assessing social and intersectional biases in contextualized word representations</title>
				<imprint>
			<publisher>NeurIPS</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">A multiscale visualization of attention in the transformer model</title>
		<author>
			<persName><forename type="first">J</forename><surname>Vig</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P19-3007</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="37" to="42" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<title level="m" type="main">A categorical archive of chatgpt failures</title>
		<author>
			<persName><forename type="first">A</forename><surname>Borji</surname></persName>
		</author>
		<idno type="DOI">10.21203/rs.3.rs-2895792/v1</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<title level="m" type="main">It&apos;s all in the name: Mitigating gender bias with name-based counterfactual data substitution</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Maudslay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gonen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cotterell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Teufel</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1909.00871</idno>
		<ptr target="http://arxiv.org/abs/1909.00871" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<title level="m" type="main">Attenuating bias in word vectors</title>
		<author>
			<persName><forename type="first">S</forename><surname>Dev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Phillips</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1901.07656</idno>
		<ptr target="http://arxiv.org/abs/1901.07656" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Analyzing and mitigating gender bias in languages with grammatical gender and bilingual word embeddings</title>
		<author>
			<persName><forename type="first">P</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-H</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-W</forename><surname>Chang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ACL 2019</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Locating and editing factual associations in GPT</title>
		<author>
			<persName><forename type="first">K</forename><surname>Meng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Andonian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Belinkov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Meng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Andonian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Belinkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bau</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2210.07229</idno>
		<title level="m">Mass-Editing Memory in a Transformer</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b40">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Ziady</surname></persName>
		</author>
		<ptr target="https://edition.cnn.com/2023/06/15/tech/ai-act-europe-key-takeaways/index.html" />
		<title level="m">Europe is leading the race to regulate AI. Here&apos;s what you need to know</title>
				<imprint>
			<date type="published" when="2023-06-18">18 June 2023</date>
		</imprint>
	</monogr>
	<note>CNN Business -edition</note>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">MarIA and BETO are sexist: evaluating gender bias in large language models for Spanish</title>
		<author>
			<persName><forename type="first">I</forename><surname>Garrido-Muñoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Montejo-Ráez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Martínez-Santiago</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Language Resources and Evaluation</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
