<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">From Inclusive Language to Inclusive AI: A Proof-of-Concept Study into Pre-Trained Models</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Marion</forename><surname>Bartl</surname></persName>
							<email>marion.bartl@insight-centre.org</email>
<idno type="ORCID">0000-0002-8893-4961</idno>
							<affiliation key="aff0">
								<orgName type="department">Insight SFI Research Centre for Data Analytics</orgName>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">School of Information and Communication Studies</orgName>
								<orgName type="institution">University College Dublin</orgName>
								<address>
									<settlement>Belfield, Dublin</settlement>
<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Susan</forename><surname>Leavy</surname></persName>
							<email>susan.leavy@ucd.ie</email>
<idno type="ORCID">0000-0002-3679-2279</idno>
							<affiliation key="aff0">
								<orgName type="department">Insight SFI Research Centre for Data Analytics</orgName>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">School of Information and Communication Studies</orgName>
								<orgName type="institution">University College Dublin</orgName>
								<address>
									<settlement>Belfield, Dublin</settlement>
<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">From Inclusive Language to Inclusive AI: A Proof-of-Concept Study into Pre-Trained Models</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C6F9132DB691215EEE4EBC81C394CEC3</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>gender-inclusive language</term>
					<term>feminist AI</term>
					<term>gender bias</term>
<term>pre-trained models</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Pre-trained language models are central to today's AI landscape. However, harmful and outdated gender stereotypes can be learned from training data and ingrained into these models. Since pre-trained models are used in many everyday language-based technologies, the deployment of unchecked systems risks the perpetuation of stereotypical and heteronormative conceptualizations of gender in society and could result in biased AI-driven decisions. In this work, we present a study into the effects of data curation to mitigate such gender bias. We use language that counteracts male-centric expressions and structures in favor of inclusivity across all gender identities. This line of interdisciplinary research has received little attention in NLP in the past, despite the fact that gender-inclusive language has been a central tenet of feminist linguistics for over five decades. For this study, we rewrite gender-specific pronouns using the gender-neutral pronoun they and replace gendered role nouns with gender-inclusive variants. Our findings show a reduction in gender stereotyping for English word embedding models and a disruption of latent gender associations of gender-neutral words. This work demonstrates how incorporating principles of gender-inclusive language can mitigate risks of bias in AI.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Language models significantly impact society. They are ubiquitous in applications ranging from search engines to hiring systems. State-of-the-art models like GPT-4 <ref type="bibr" target="#b0">[1]</ref> and LLama2 <ref type="bibr" target="#b1">[2]</ref> dominate current research due to their high performance. However, earlier models, such as classic pre-trained embeddings (Word2Vec <ref type="bibr" target="#b2">[3]</ref>, GloVe <ref type="bibr" target="#b3">[4]</ref>) and smaller-scale language models (BERT <ref type="bibr" target="#b4">[5]</ref>), remain in industrial use for their cost-effectiveness due to fast computation and memory efficiency <ref type="bibr" target="#b5">[6]</ref>.</p><p>All these pre-trained representations present a significant issue: they encode concepts of gender derived from training data that mirror existing patterns of inequality and discrimination <ref type="bibr" target="#b6">[7]</ref>. These biased representations can reinforce discriminatory patterns through generated language or influence hiring decisions that perpetuate gender imbalances based on stereotypes <ref type="bibr" target="#b7">[8]</ref>. To build trustworthy and fair language technologies, we must therefore ensure that the training language is fair from the outset.</p><p>One approach to achieving this is through training with gender-inclusive language, which has three main aims: <ref type="bibr" target="#b0">(1)</ref> Avoiding the use of masculine terms generically to refer to people of unknown gender or groups of mixed gender (e.g. 
mankind→humankind, to man→to staff), <ref type="bibr" target="#b1">(2)</ref> eliminating irrelevant gender distinctions such as in headmistress/headmaster→headteacher <ref type="bibr" target="#b8">[9]</ref>, and (3) establishing a trans-inclusive model of gender that includes references beyond binary categories, such as the use of singular they or neopronouns <ref type="bibr" target="#b9">[10]</ref>. Gender-inclusive language has a long research tradition in feminist linguistics <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref> and has recently become a focus in research on gender bias in NLP. Examples include gender-neutral rewriting models <ref type="bibr" target="#b14">[14,</ref><ref type="bibr" target="#b15">15]</ref> and gender-inclusive language as a means of combating misgendering in translation <ref type="bibr" target="#b16">[16]</ref> or as a fine-tuning strategy for reducing gender stereotyping in Large Language Models (LLMs) <ref type="bibr" target="#b17">[17,</ref><ref type="bibr" target="#b18">18]</ref>.</p><p>Most of these works, however, assume that the equality-promoting effects of gender-inclusive language, such as a reduction in gender stereotyping and discrimination, can be directly picked</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>original text</head><p>As a fireman, Zachary is always ready to help people, but since his parents' relationship was marked by conflict, he is opposed to commitments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>after rewriting</head><p>As a firefighter, Zachary is always ready to help people, but since their parents' relationship was marked by conflict, they are opposed to commitments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Example sentence from the Wikipedia subcorpus before and after gender-neutral rewriting.</p><p>up by LLMs through fine-tuning <ref type="bibr" target="#b17">[17]</ref>. However, fine-tuning an LLM necessarily invites interference from the pre-trained model, which might obscure conclusions on how gender-inclusive language is incorporated into model representations of gender. This work therefore presents a foundation-level proof-of-concept study with classic pre-trained embeddings, which allow us to train a model on gender-neutral text from scratch. By contrast, training an LLM from scratch goes beyond our and many other institutions' resources <ref type="bibr" target="#b7">[8]</ref>. Further, word embedding models are still used in small-scale industry settings due to their low computational costs, which keeps them relevant <ref type="bibr" target="#b5">[6]</ref>.</p><p>We train two Word2Vec embedding models <ref type="bibr" target="#b2">[3]</ref> on unchanged vs. gender-neutral English text, additionally comparing against a common post-hoc debiasing technique <ref type="bibr" target="#b19">[19]</ref>. The code for our experiments is openly accessible<ref type="foot" target="#foot_0">1</ref>. In the experiments we find that the use of gender-neutral terminology reduces gender stereotyping as measured by the Word Embedding Association Test <ref type="bibr" target="#b20">[20]</ref> and the Embedding Coherence Test <ref type="bibr" target="#b21">[21]</ref>, as well as reducing latent gender information in the embeddings of gender-neutral words. These results demonstrate how incorporating principles of gender-inclusive language, which were designed to help people avoid bias or discrimination in how they speak or write, can have the same effect on how gender is represented in word embedding models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methodology</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Data Collection</head><p>Our experiments were conducted on a corpus introduced by <ref type="bibr" target="#b22">[22]</ref>, the Small Heap. The corpus is made up of random subsets of three popular LLM training corpora: OpenWebText2 <ref type="bibr" target="#b23">[23]</ref> (50%), CC-News <ref type="bibr" target="#b24">[24]</ref> (30%) and English Wikipedia (20%). The final sub-corpus contains ~250 million tokens, or 1.5 GB of text.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Gender-neutral Rewriting</head><p>The corpus was edited using the NeuTral Rewriter <ref type="bibr" target="#b14">[14]</ref>. This involved replacing gender-specific pronouns (he, she, him, etc.) with the corresponding variant of the gender-neutral pronoun they. Additionally, 91 gender-specific nouns (headmaster, mankind, etc.) including plural and spelling variants were replaced by neutral versions (principal, humankind, etc.; for the full set, see <ref type="bibr" target="#b14">14)</ref>. Table <ref type="table">1</ref> shows an example of rewritten text.</p><p>There are two implementations of the NeuTral Rewriter, a rule-based version and a neural, machine translation-based model. While the neural model performed better in the original experiments <ref type="bibr" target="#b14">[14]</ref>, it proved to be very susceptible to noise in our data (email addresses, digits, etc.) as well as low-frequency words, often translating them into unintelligible text. We therefore used the rule-based implementation, which uses a combination of word, part-of-speech and dependency information to derive the correct replacement of pronouns.</p></div>
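<div xmlns="http://www.tei-c.org/ns/1.0"><p>The replacement logic described above can be sketched in a few lines. The snippet below is an illustrative toy, not the NeuTral Rewriter itself: it uses hypothetical lookup tables and omits the part-of-speech and dependency-based disambiguation (e.g. possessive vs. nominal "her") and the verb-agreement repair ("he is" to "they are") that the actual tool performs.</p><p>
```python
import re

# Toy lookup tables (hypothetical, abridged). The real rule-based rewriter
# disambiguates e.g. "her" (them vs. their) via POS tags and dependency
# parses, and also repairs verb agreement; this sketch does neither.
PRONOUNS = {
    "he": "they", "she": "they",
    "him": "them", "her": "their",  # "her" is ambiguous (them/their)
    "his": "their", "hers": "theirs",
    "himself": "themselves", "herself": "themselves",
}
NOUNS = {"fireman": "firefighter", "mankind": "humankind",
         "headmaster": "headteacher"}

def neutralize(sentence):
    def sub(match):
        word = match.group(0)
        repl = PRONOUNS.get(word.lower()) or NOUNS.get(word.lower())
        if repl is None:
            return word  # leave all other words untouched
        # preserve sentence-initial capitalization
        return repl.capitalize() if word[0].isupper() else repl
    return re.sub(r"[A-Za-z]+", sub, sentence)

print(neutralize("As a fireman, he helps people."))
# prints: As a firefighter, they helps people.
# (note the unrepaired verb agreement, which the real tool handles)
```
</p></div>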
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Embedding Models</head><p>In order to evaluate the effects of gender-neutral language on representations of gender within the corpus, we built three different Word2Vec models <ref type="bibr" target="#b2">[3]</ref>. The first was trained on the original and the second on the rewritten corpus. Each Word2Vec model was trained using the Continuous Bag of Words (CBOW) algorithm with the default hyperparameters of the gensim library's Word2Vec class <ref type="bibr" target="#b25">[25]</ref>. The third model was created by performing hard debiasing <ref type="bibr" target="#b19">[19]</ref> on our original model in order to compare our method to an existing, model-based debiasing method. Hard debiasing modifies embeddings in such a way that gender-neutral words (e.g. babysit) are equidistant to gendered word pairs (e.g. grandfather -grandmother). Additionally, the gendered component of embeddings of gender-neutral words, as defined by what is termed the 'gender subspace', is set to zero.</p></div>
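<div xmlns="http://www.tei-c.org/ns/1.0"><p>The neutralization step of hard debiasing can be illustrated as follows. This is a minimal numpy sketch with toy 4-dimensional vectors and a single definitional pair; the original method derives the gender subspace from several pairs via PCA and additionally equalizes gendered word pairs.</p><p>
```python
import numpy as np

def gender_direction(pairs):
    # Average the difference vectors of definitional pairs (the original
    # method uses PCA over several pairs; one pair suffices for the sketch).
    diffs = np.array([a - b for a, b in pairs])
    g = diffs.mean(axis=0)
    return g / np.linalg.norm(g)

def neutralize(v, g):
    # Remove the component of v along the gender direction g.
    return v - np.dot(v, g) * g

# Toy vectors standing in for trained Word2Vec embeddings.
he = np.array([1.0, 0.2, 0.0, 0.1])
she = np.array([-1.0, 0.2, 0.0, 0.1])
g = gender_direction([(he, she)])

babysit = np.array([0.5, 0.3, 0.7, 0.1])  # a gender-neutral word
print(np.dot(neutralize(babysit, g), g))  # ~0: gender component removed
```
</p></div>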
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Bias Evaluation</head><p>The three trained embedding models were analyzed for underlying gender bias using three methods. Previous research found that bias measures are not always consistent <ref type="bibr" target="#b26">[26,</ref><ref type="bibr" target="#b27">27]</ref>. Using a combination of metrics therefore allows for a more comprehensive evaluation.</p><p>The Word Embedding Association Test (WEAT) is one of the most commonly applied bias measures for word embeddings <ref type="bibr" target="#b20">[20]</ref>. The test is modelled after a psychological assessment, the Implicit Association Test <ref type="bibr" target="#b28">[28]</ref>, and measures bias by computing the mean association between two sets of target and attribute words. We used a WEAT implementation by Lauscher et al. <ref type="bibr" target="#b29">[29]</ref>. Each WEAT test (i.e. the specific combination of target and attribute terms) is identified as W𝑁, with 𝑁 corresponding to its position in the original WEAT paper. W9 and W10 were added by Lauscher et al. <ref type="bibr" target="#b29">[29]</ref>. We added two further tests using attribute words related to male- and female-stereotypical professions (W𝐴) as well as words related to computer science and childcare (W𝐵, cf. Table <ref type="table">4</ref>).</p><p>Clustering and Classification into two groups was used by Gonen and Goldberg <ref type="bibr" target="#b26">[26]</ref> to show that embedding spaces retain gender information despite the application of debiasing. We measured cluster integrity after K-Means clustering (averaged over 50 runs) as well as classification accuracy with an SVM (trained for 20 epochs) to determine how well gender information can be recovered from the embedding space. We used the original model's 500 most male-/female-biased words according to their similarity to the element-wise mean of the male/female attribute embeddings 𝐴1 and 𝐴2 of W8 (cf. 
Table <ref type="table">5</ref>).</p><p>The Embedding Coherence Test (ECT) calculates distances between two sets of gendered target words 𝑇1, 𝑇2 and a set of attribute words 𝐴 that relate to a societal gender imbalance (e.g. captain, football) <ref type="bibr" target="#b21">[21]</ref>. Instead of relying on the absolute distances, the ECT calculates the Spearman coefficient between the ranked distances for 𝑇1 and 𝐴 vs. 𝑇2 and 𝐴. A high coefficient indicates similar ranks between the two gendered sets, signifying reduced bias. We used the ECT implementation by <ref type="bibr" target="#b29">[29]</ref>.</p><p>Semantic Quality is evaluated following Lauscher et al. <ref type="bibr" target="#b29">[29]</ref> by using the similarity benchmarks SimLex-999 <ref type="bibr" target="#b30">[30]</ref> and WordSim-353 <ref type="bibr" target="#b31">[31]</ref> and computing the Pearson and Spearman correlation coefficients between the benchmark term-pair similarities and the cosine similarities of the corresponding embedding pairs of the respective models.</p></div>
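<div xmlns="http://www.tei-c.org/ns/1.0"><p>The WEAT statistic can be sketched as follows. The vectors below are hypothetical 2-dimensional toys, not embeddings from our models: for target sets X, Y and attribute sets A, B, each word's association s(w) is the difference of its mean cosine similarity to A and to B, and the effect size is Cohen's d over s(w) for X vs. Y.</p><p>
```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def s(w, A, B):
    # mean association of word w with attribute set A minus set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    sx = [s(x, A, B) for x in X]
    sy = [s(y, A, B) for y in Y]
    pooled = np.std(sx + sy, ddof=1)  # std over the pooled targets
    return (np.mean(sx) - np.mean(sy)) / pooled

# Hypothetical toy vectors standing in for real embeddings.
X = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]  # e.g. male-associated targets
Y = [np.array([0.1, 1.0]), np.array([0.0, 0.9])]  # e.g. female-associated targets
A = [np.array([1.0, 0.0])]                        # e.g. career attributes
B = [np.array([0.0, 1.0])]                        # e.g. family attributes
print(weat_effect_size(X, Y, A, B))  # positive d: stereotypical association
```
</p></div>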
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Findings and Discussion</head><p>We will discuss the results for our chosen gender bias metrics: WEAT <ref type="bibr" target="#b20">[20]</ref>, ECT <ref type="bibr" target="#b21">[21]</ref> and Clustering and Classification <ref type="bibr" target="#b26">[26]</ref>. Finally, we contextualize these findings with the performance of our models on two semantic quality benchmarks.</p><p>WEAT: Results indicated a reduction in gender bias related to stereotypical associations of women with arts, domestic work, and childcare, and men with (computer) science, maths, and careers, respectively. All five tests measuring gender bias, W6, W7, W8, W𝐴, and W𝐵, showed a reduction in the statistic after rewriting (Table <ref type="table" target="#tab_0">2</ref>). W𝐴 additionally shows that there is a reduction in the association of feminine/masculine words with traditionally gendered professions. Comparing the WEAT scores after rewriting to the hard-debiased embeddings, one can see that the scores for the hard-debiased embeddings approach zero, which indicates equal association of male/female attributes with the respective targets. Thus, on the one hand, training with gender-neutral language generally leads to a reduction in WEAT bias scores, indicating that this change in the language can lead to a reduction in associations based on stereotypes. On the other hand, stereotyped associations can be specifically targeted and mostly removed post-hoc. ECT: A reduction in bias was demonstrated in relation to gendered associations with professions. Table <ref type="table" target="#tab_1">3a</ref> shows that for the arts vs. science and profession attributes, the ECT scores increase, both after rewriting and hard debiasing. However, the ECT scores show a higher increase for the hard-debiased model, suggesting an advantage of this method over neutral rewriting. For the computer science vs. 
childcare attributes, however, both methods show a reduction in ECT scores, which could indicate that neither neutral rewriting nor hard debiasing sufficiently affects words in these semantic fields.</p><p>Clustering and Classification: Our results demonstrated that while the embeddings clearly encode gender information that is very salient to a binary classifier, rewriting with gender-neutral terminology has a more comprehensive effect than focusing on removing a limited 'gender subspace', as done by hard debiasing. In fact, hard debiasing improves the clustering accuracy, indicating that gender in word embeddings is more intricately encoded than can be captured by a gender subspace. These results mirror the findings of Gonen and Goldberg <ref type="bibr" target="#b26">[26]</ref>. Both clustering and classification accuracy marginally decline in the model trained on gender-neutral text, as shown in Table <ref type="table" target="#tab_1">3a</ref>. The SVM shows a very high accuracy of 98% in separating words with male and female direct bias in the unchanged and hard-debiased models, which is reduced to 96% in the gender-neutral model.</p><p>Semantic Quality: The semantic quality of the word embeddings as measured by the SimLex-999 <ref type="bibr" target="#b30">[30]</ref> and WordSim-353 <ref type="bibr" target="#b31">[31]</ref> benchmarks dropped only minimally, by at most 0.01 points, after rewriting (Table <ref type="table" target="#tab_1">3b</ref>). Overall, these results fall only slightly behind larger embedding models. According to the SimLex-999 leaderboard, a Word2Vec model trained on one billion words of Wikipedia text reached a Spearman correlation of 0.37<ref type="foot" target="#foot_1">2</ref>, which is similar to that of our model.</p></div>
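<div xmlns="http://www.tei-c.org/ns/1.0"><p>The ECT computation described in Section 2.4 can be sketched as follows, again with hypothetical toy vectors: rank the attribute words by cosine similarity to the mean of each gendered target set, then correlate the two rankings (Spearman, here computed as Pearson correlation of the ranks).</p><p>
```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def ranks(x):
    # rank of each element (no tie handling; enough for the toy example)
    order = np.argsort(x)
    r = np.empty_like(order)
    r[order] = np.arange(len(x))
    return r

def ect(T1, T2, A):
    m1, m2 = np.mean(T1, axis=0), np.mean(T2, axis=0)
    s1 = np.array([cos(m1, a) for a in A])
    s2 = np.array([cos(m2, a) for a in A])
    # Spearman coefficient via Pearson correlation of the rank vectors
    return np.corrcoef(ranks(s1), ranks(s2))[0, 1]

# Hypothetical toy vectors: gendered target sets and profession attributes.
T1 = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]
T2 = [np.array([0.1, 1.0]), np.array([0.0, 0.9])]
A = [np.array([1.0, 0.0]), np.array([0.8, 0.6]), np.array([0.0, 1.0])]
print(ect(T1, T2, A))  # -1.0 here: rankings fully reversed (maximal bias)
```
A score near 1 would indicate near-identical attribute rankings for the two gendered sets, i.e. reduced bias.</p></div>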
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Limitations and Future Work</head><p>The first limitation pertains to the size of the data used. Our corpus contains 250 million tokens, which is less than 25% of the training data size for a common embedding model <ref type="bibr" target="#b2">[3]</ref>. We will explore in future work whether our findings hold for larger datasets and whether the measured reduction in gender stereotyping in the embedding model can translate to LLMs if fine-tuned on gender-inclusive text. Secondly, our research is focused on gender-inclusive language in English and is not directly applicable to other languages. The NeuTral Rewriter <ref type="bibr" target="#b14">[14]</ref> was specifically developed for English, and since the specific characteristics of gender-neutral terminology are language-dependent, applying the method to other languages would require the development of a language-specific version of the Rewriter. We leave this to future research. A third limitation of our research lies in the erasure of word embeddings for he/she pronouns due to their replacement with they. However, since we are presenting a proof-of-concept study and the measurement of gender stereotyping is not dependent on these pronouns, we accepted this. Future work could rewrite pronouns only in a fraction of cases or only in cases where masculine/feminine pronouns are used generically. Lastly, our research is limited by a narrow focus on binary male and female genders when assessing model bias. There is a significant gap in NLP research regarding the incorporation of non-binary gender identities in both measuring and mitigating bias <ref type="bibr" target="#b32">[32]</ref>. Due to the nature of this proof-of-concept study, we adhered to commonly employed binary metrics. Future work will need to examine progress made regarding the integration of non-binary gender identities in embedding models through inclusive terminology.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>This research explored the effects of gender-neutral language on gender stereotyping and latent gender information in classic embedding models. We found that training on text with gender-neutral singular pronouns and role nouns effected a reduction in stereotyping as measured by WEAT <ref type="bibr" target="#b20">[20]</ref> and ECT <ref type="bibr" target="#b21">[21]</ref>. These reductions do not surpass those that can be achieved by targeted, post-hoc debiasing <ref type="bibr" target="#b19">[19]</ref>. However, gender-neutral training data showed an advantage when measuring latent gender information in embeddings through classification and clustering. This demonstrates a more comprehensive effect of gender-neutral language in the removal of unnecessarily gendered associations, which is in line with the aims of gender-inclusive language.</p><p>While future work will need to investigate whether our results hold at scale and can be transferred to LLMs, our exploratory findings suggest that adjusting training data to be more gender-inclusive can improve gender representations in pre-trained models toward a more equitable conceptualization of gender. This research presents a promising approach to the incorporation of principles of gender-inclusive language to ensure fairness and inclusivity in AI systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2</head><label>2</label><figDesc>Results for WEAT before and after rewriting. Results marked * significant with 𝑝 &lt; 0.05. Zero values indicate 𝑑 ≈ 0. h.d. = hard debiased; CS = computer science.</figDesc><table><row><cell>Cohen's d</cell><cell>effect size</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3</head><label>3</label><figDesc>Results</figDesc><table /><note>for ECT (Spearman's rank correlation 𝑟), Clustering &amp; Classification accuracy, and semantic quality. h.d. = hard debiased.</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/marionbartl/ILIA</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://fh295.github.io/simlex.html</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 12/RC/2289_P2. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. WEAT Target and Attribute Terms</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 5</head><p>Attribute words used in WEAT 7, WEAT 8 and in the present study. We replaced the original lists because gender-specific pronouns were removed by the rewriting process, and we kept the original length of seven attribute words.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<idno type="DOI">10.48550/arXiv.2303.08774</idno>
		<idno type="arXiv">arXiv:2303.08774</idno>
		<ptr target="http://arxiv.org/abs/2303.08774" />
		<title level="m">GPT-4</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
		<respStmt>
			<orgName>OpenAI</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">LLaMA: Open and Efficient Foundation Language Models</title>
		<author>
			<persName><forename type="first">H</forename><surname>Touvron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lavril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Izacard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Martinet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-A</forename><surname>Lachaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lacroix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Rozière</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hambro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Azhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lample</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2302.13971</idno>
		<idno type="arXiv">arXiv:2302.13971</idno>
		<ptr target="http://arxiv.org/abs/2302.13971" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Efficient Estimation of Word Representations in Vector Space</title>
		<author>
			<persName><forename type="first">T</forename><surname>Mikolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Corrado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dean</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1301.3781</idno>
		<ptr target="http://arxiv.org/abs/1301.3781" />
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Glove: Global Vectors for Word Representation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pennington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Socher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Manning</surname></persName>
		</author>
		<idno type="DOI">10.3115/v1/D14-1162</idno>
		<ptr target="http://aclweb.org/anthology/D14-1162" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics</title>
				<meeting>the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics<address><addrLine>Doha, Qatar</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1532" to="1543" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of NAACL-HLT</title>
				<meeting>NAACL-HLT</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="4171" to="4186" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Contextual Embeddings: When Are They Worth It?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Arora</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>May</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ré</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2020.acl-main.236</idno>
		<ptr target="https://aclanthology.org/2020.acl-main.236" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Jurafsky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Chai</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Schluter</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Tetreault</surname></persName>
		</editor>
		<meeting>the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="2650" to="2663" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Bias and Fairness in Large Language Models: A Survey</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">O</forename><surname>Gallegos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Rossi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Barrow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Tanjim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dernoncourt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">K</forename><surname>Ahmed</surname></persName>
		</author>
		<idno type="DOI">10.1162/coli_a_00524</idno>
		<ptr target="https://doi.org/10.1162/coli_a_00524" />
	</analytic>
	<monogr>
		<title level="j">Computational Linguistics</title>
		<imprint>
			<biblScope unit="page" from="1" to="79" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">On the dangers of stochastic parrots: Can language models be too big?</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Bender</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>McMillan-Major</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shmitchell</surname></persName>
		</author>
		<idno type="DOI">10.1145/3442188.3445922</idno>
	</analytic>
	<monogr>
		<title level="m">FAccT 2021 - Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="610" to="623" />
		</imprint>
	</monogr>
	<note>Conference proceedings</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Feminist Linguistics and Linguistic Feminisms</title>
		<author>
			<persName><forename type="first">E</forename><surname>Kramer</surname></persName>
		</author>
		<ptr target="https://go.exlibris.link/J2p0HbgK" />
	</analytic>
	<monogr>
		<title level="m">Mapping Feminist Anthropology in the Twenty-First Century</title>
				<editor>
			<persName><forename type="first">Ellen</forename><surname>Lewin</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Leni</forename><forename type="middle">M</forename><surname>Silverstein</surname></persName>
		</editor>
		<imprint>
			<publisher>Rutgers University Press</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page">65</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A Little Word That Means A Lot: A Reassessment of Singular They in a New Era of Gender Politics</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Saguy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Williams</surname></persName>
		</author>
		<idno type="DOI">10.1177/08912432211057921</idno>
		<ptr target="http://journals.sagepub.com/doi/10.1177/08912432211057921" />
	</analytic>
	<monogr>
		<title level="j">Gender &amp; Society</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="5" to="31" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Language and Woman&apos;s Place</title>
		<author>
			<persName><forename type="first">R</forename><surname>Lakoff</surname></persName>
		</author>
		<ptr target="http://www.jstor.org/stable/4166707" />
	</analytic>
	<monogr>
		<title level="j">Language in Society</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="45" to="80" />
			<date type="published" when="1973">1973</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Linguistic Sexism and Feminist Linguistic Activism</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pauwels</surname></persName>
		</author>
		<idno type="DOI">10.1002/9780470756942.ch24</idno>
	</analytic>
	<monogr>
		<title level="m">The Handbook of Language and Gender</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Holmes</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Meyerhoff</surname></persName>
		</editor>
		<meeting><address><addrLine>Oxford, UK</addrLine></address></meeting>
		<imprint>
			<publisher>Blackwell Publishing Ltd</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="550" to="570" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Language, gender, and sexuality: an introduction</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">F</forename><surname>Kiesling</surname></persName>
		</author>
		<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1</biblScope>
			<pubPlace>London</pubPlace>
		</imprint>
	</monogr>
	<note>1st</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Language, Gender, and Sexuality: An Introduction</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">F</forename><surname>Kiesling</surname></persName>
		</author>
		<idno type="DOI">10.4324/9781351042420</idno>
		<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2019">2019</date>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic Rewriting into Gender Neutral Alternatives</title>
		<author>
			<persName><forename type="first">E</forename><surname>Vanmassenhove</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Emmery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Shterionov</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/2021.emnlp-main.704" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2021 Conference on Empirical Methods in Natural Language Processing<address><addrLine>Online and Punta Cana, Dominican Republic</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="8940" to="8948" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Exploiting Biased Models to De-bias Text: A Gender-Fair Rewriting Model</title>
		<author>
			<persName><forename type="first">C</forename><surname>Amrhein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Schottmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sennrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Läubli</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.acl-long.246</idno>
		<ptr target="https://aclanthology.org/2023.acl-long.246" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Rogers</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Boyd-Graber</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Okazaki</surname></persName>
		</editor>
		<meeting>the 61st Annual Meeting of the Association for Computational Linguistics<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="4486" to="4506" />
		</imprint>
	</monogr>
	<note>Volume 1: Long Papers, Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Gender Neutralization for an Inclusive Machine Translation: from Theoretical Foundations to Open Challenges</title>
		<author>
			<persName><forename type="first">A</forename><surname>Piergentili</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Fucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Savoldi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bentivogli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Negri</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2301.10075</idno>
		<idno type="arXiv">arXiv:2301.10075</idno>
		<ptr target="http://arxiv.org/abs/2301.10075" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions</title>
		<author>
			<persName><forename type="first">H</forename><surname>Thakur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vaddamanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">P</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L.-P</forename><surname>Morency</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/2023.acl-short.30" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics</title>
				<meeting>the 61st Annual Meeting of the Association for Computational Linguistics<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="340" to="351" />
		</imprint>
	</monogr>
	<note>Volume 2: Short Papers, Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Fatemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Xing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Xiong</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/2023.acl-short.108" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics</title>
				<meeting>the 61st Annual Meeting of the Association for Computational Linguistics<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1249" to="1262" />
		</imprint>
	</monogr>
	<note>Volume 2: Short Papers, Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings</title>
		<author>
			<persName><forename type="first">T</forename><surname>Bolukbasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Zou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Saligrama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">T</forename><surname>Kalai</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper_files/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">29</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Semantics derived automatically from language corpora contain human-like biases</title>
		<author>
			<persName><forename type="first">A</forename><surname>Caliskan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Bryson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Narayanan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<biblScope unit="volume">356</biblScope>
			<biblScope unit="page" from="183" to="186" />
			<date type="published" when="2017">2017</date>
			<publisher>American Association for the Advancement of Science</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Attenuating Bias in Word vectors</title>
		<author>
			<persName><forename type="first">S</forename><surname>Dev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Phillips</surname></persName>
		</author>
		<idno type="ISSN">2640-3498</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics</title>
				<meeting>the Twenty-Second International Conference on Artificial Intelligence and Statistics<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="879" to="887" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">From &apos;Showgirls&apos; to &apos;Performers&apos;: Fine-tuning with Gender-inclusive Language for Bias Reduction in LLMs</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bartl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Leavy</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/2024.gebnlp-1.18" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Faleńska</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Basta</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Costa-Jussà</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Goldfarb-Tarrant</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Nozza</surname></persName>
		</editor>
		<meeting>the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)<address><addrLine>Bangkok, Thailand</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="280" to="294" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">The Pile: An 800GB Dataset of Diverse Text for Language Modeling</title>
		<author>
			<persName><forename type="first">L</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Biderman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Black</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Golding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hoppe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Foster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Phang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Thite</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Nabeshima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Presser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Leahy</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2101.00027</idno>
		<idno type="arXiv">arXiv:2101.00027</idno>
		<ptr target="http://arxiv.org/abs/2101.00027" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">CC-News-En: A Large English News Corpus</title>
		<author>
			<persName><forename type="first">J</forename><surname>Mackenzie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Benham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Petri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Trippas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Culpepper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moffat</surname></persName>
		</author>
		<idno type="DOI">10.1145/3340531.3412762</idno>
		<ptr target="https://dl.acm.org/doi/10.1145/3340531.3412762" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 29th ACM International Conference on Information &amp; Knowledge Management</title>
				<meeting>the 29th ACM International Conference on Information &amp; Knowledge Management<address><addrLine>Virtual Event, Ireland</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="3077" to="3084" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Software Framework for Topic Modelling with Large Corpora</title>
		<author>
			<persName><forename type="first">R</forename><surname>Řehůřek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sojka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks</title>
				<meeting>the LREC 2010 Workshop on New Challenges for NLP Frameworks<address><addrLine>Valletta, Malta</addrLine></address></meeting>
		<imprint>
			<publisher>ELRA</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="45" to="50" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them</title>
		<author>
			<persName><forename type="first">H</forename><surname>Gonen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Goldberg</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N19-1061</idno>
		<ptr target="https://aclanthology.org/N19-1061" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
		<title level="s">Long and Short Papers</title>
		<editor>
			<persName><forename type="first">J</forename><surname>Burstein</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Doran</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Solorio</surname></persName>
		</editor>
		<meeting>the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>Minneapolis, Minnesota</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="609" to="614" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Intrinsic Bias Metrics Do Not Correlate with Application Bias</title>
		<author>
			<persName><forename type="first">S</forename><surname>Goldfarb-Tarrant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Marchant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Muñoz Sánchez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pandya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lopez</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.acl-long.150</idno>
		<ptr target="https://aclanthology.org/2021.acl-long.150" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing</title>
		<title level="s">Long Papers</title>
		<meeting>the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1926" to="1940" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Measuring individual differences in implicit cognition: the implicit association test</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Greenwald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>McGhee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Schwartz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of personality and social psychology</title>
		<imprint>
			<biblScope unit="volume">74</biblScope>
			<biblScope unit="page">1464</biblScope>
			<date type="published" when="1998">1998</date>
			<publisher>American Psychological Association</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces</title>
		<author>
			<persName><forename type="first">A</forename><surname>Lauscher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Glavaš</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">P</forename><surname>Ponzetto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Vulić</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v34i05.6325</idno>
		<ptr target="https://ojs.aaai.org/index.php/AAAI/article/view/6325" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">5</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><surname>Hill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Reichart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Korhonen</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1408.3456</idno>
		<idno type="arXiv">arXiv:1408.3456</idno>
		<ptr target="http://arxiv.org/abs/1408.3456" />
		<title level="m">SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Placing search in context: the concept revisited</title>
		<author>
			<persName><forename type="first">L</forename><surname>Finkelstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gabrilovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Matias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Rivlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Solan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Wolfman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ruppin</surname></persName>
		</author>
		<idno type="DOI">10.1145/503104.503110</idno>
		<ptr target="https://doi.org/10.1145/503104.503110" />
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Information Systems</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="116" to="131" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Theories of &quot;Gender&quot; in NLP Bias Research</title>
		<author>
			<persName><forename type="first">H</forename><surname>Devinney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Björklund</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Björklund</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">FAccT 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency</title>
				<meeting><address><addrLine>Seoul, South Korea (Hybrid)</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">June 21-24, 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings</title>
		<author>
			<persName><forename type="first">T</forename><surname>Manzini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">Yao</forename><surname>Chong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Black</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tsvetkov</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N19-1062</idno>
		<ptr target="https://aclanthology.org/N19-1062" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
		<title level="s">Long and Short Papers</title>
		<meeting>the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>Minneapolis, Minnesota</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="615" to="621" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
