<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Women&apos;s Professions and Targeted Misogyny Online</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alessio</forename><surname>Cascione</surname></persName>
							<email>cascione@studenti.unipi.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università di Pisa</orgName>
								<address>
									<addrLine>Largo B. Pontecorvo 3</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Aldo</forename><surname>Cerulli</surname></persName>
							<email>a.cerulli1@studenti.unipi.it</email>
							<affiliation key="aff1">
								<orgName type="department" key="dep1">Dipartimento di Filologia</orgName>
								<orgName type="department" key="dep2">Letteratura e Linguistica</orgName>
								<orgName type="institution">Università di Pisa</orgName>
								<address>
									<addrLine>Via Santa Maria 36</addrLine>
									<postCode>56126</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marta</forename><forename type="middle">Marchiori</forename><surname>Manerba</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università di Pisa</orgName>
								<address>
									<addrLine>Largo B. Pontecorvo 3</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lucia</forename><forename type="middle">C</forename><surname>Passaro</surname></persName>
							<email>lucia.passaro@unipi.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università di Pisa</orgName>
								<address>
									<addrLine>Largo B. Pontecorvo 3</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="department">Tenth Italian Conference on Computational Linguistics</orgName>
								<address>
									<addrLine>Dec 04 -06</addrLine>
									<postCode>2024</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Women&apos;s Professions and Targeted Misogyny Online</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E38B546611638B163CCB7D63BD7D60EE</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Abusive Language</term>
					<term>Online Misogyny</term>
					<term>Hurtfulness</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>With the increasing popularity of social media platforms, the dissemination of misogynistic content has become more prevalent and challenging to address. In this paper, we investigate the phenomenon of online misogyny on Twitter through the lens of hurtfulness, qualifying its different manifestations in English tweets according to the profession of the targets of misogynistic attacks. By leveraging manual annotation and a BERTweet model trained for fine-grained misogyny identification, we find that specific types of misogynistic speech are more intensely directed towards particular professions. For example, derailing discourse predominantly targets authors and cultural figures, while dominance-oriented speech and sexual harassment are mainly directed at politicians and athletes. Additionally, we use the HurtLex lexicon and ItEM to assign hurtfulness scores to tweets based on different hate speech categories. Our analysis reveals that these scores align with the profession-based distribution of misogynistic speech, highlighting the targeted nature of such attacks.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Misogyny is a radical manifestation of sexism directed toward the female gender, which becomes the subject of hatred. Its effects are widespread and systematic, bearing severe consequences, both social and individual, such as verbal and physical violence, rape, and femicide. Indeed, misogyny, prejudice, and contempt towards women continue to persist in various forms in our society. While overt acts of discrimination and sexism have received attention, it is crucial to acknowledge that misogyny often manifests in subtle and nuanced ways <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. Moreover, with the increasing popularity of social media platforms, the dissemination of misogynistic content has become more prevalent and challenging to address <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>From a socio-historical perspective, women have faced numerous barriers that limited their access to certain professions, hindered their career progression, and subjected them to belittlement and offense related to their work <ref type="bibr" target="#b4">[5]</ref>. These gendered biases not only perpetuate inequality but also serve as breeding grounds for misogyny.</p><p>In this paper, we focus on automated misogyny detection, specifically investigating whether different professional roles trigger varying degrees of hurtfulness across social media posts. By examining the correlation between the profession of the offended women and the prevalence of misogynistic attitudes, we aim to shed light on the extent to which misogyny is perpetuated within specific professional domains.</p><p>Fontanella et al. <ref type="bibr" target="#b5">[6]</ref> highlight how research focusing on the automatic detection of misogyny tends to show weak connections with other conceptual areas addressing different aspects of the phenomenon. 
This finding suggests that current research has not yet adequately addressed the fine-grained manifestations of online misogynistic attacks. We conduct novel analyses to uncover and measure misogynistic attitudes within different professional fields. Specifically, we examine how different types of misogyny are distributed across various women's professions and how the language used in misogynistic posts varies across them. To explore this relationship, we expand the English misogyny identification dataset introduced by Fersini et al. <ref type="bibr" target="#b6">[7]</ref>, known as AMI, by incorporating the professions of the women targeted. By adding professional categories to AMI, we enable novel analyses of how misogynistic attacks against women differ based on their profession. Our research is driven by the following research questions:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>RQ1 How does misogyny distribute across professions?</head><p>We analyze women's professions according to the type of misogyny directed towards them. RQ2 How does the language used in misogynistic tweets vary across different professions? We investigate whether specific hurtful expressions are directed at particular professions more frequently than at others.</p><p>To address our RQs, we proceed following the workflow depicted in Figure <ref type="figure">1</ref>. We begin by utilizing a subset of the AMI dataset, which contains ground-truth annotations for misogyny. This subset is manually labeled with the professions of the victims of misogynistic attacks, as detailed in Section 3.2. We then employ a misogyny classifier to automatically annotate a novel collection, the Profession (PRF) dataset, which comprises 760 tweets labeled with professions, with various types of misogyny. The final step involves combining the manually annotated AMI subset with the automatically annotated PRF dataset, resulting in the AMI-PRF dataset<ref type="foot" target="#foot_0">1</ref>. This enriched dataset provides a resource that enables a thorough investigation of the phenomenon.</p><p>The remainder of this paper is organized as follows. Section 2 discusses previous works closely related to ours, while Section 3 details the enrichment of the AMI dataset with professional categories. Section 4 reports the experiments conducted to answer our RQs, whereas Section 5 outlines the conclusions, limitations, and future directions of the work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Figure <ref type="figure">1</ref>: A subset of the AMI dataset, containing ground-truth misogyny annotations, is manually labeled with the professions of victims of misogynistic attacks, as detailed in Section 3. The PRF dataset, featuring professions by design, is extracted and automatically annotated with misogyny types using a BERTweet model trained on the AMI dataset. The manually annotated AMI subset and the automatically annotated PRF dataset are then combined to form the AMI-PRF dataset. The label distributions of each dataset are displayed in the workflow.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>In recent years, the field of NLP has witnessed a growing interest in detecting misogynistic and sexist content on social media platforms. Various works have significantly contributed to this area by publicly introducing diverse datasets and evaluation tasks tailored for misogyny detection <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9]</ref>. Indeed, there is a pressing need to develop emotive <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref> and offensive word lexicons <ref type="bibr" target="#b11">[12]</ref> for harassment research, as highlighted by Rezvan et al. <ref type="bibr" target="#b12">[13]</ref>. Contributing to the field of sexism categorization, Parikh et al. <ref type="bibr" target="#b13">[14]</ref> provide a large dataset for the multi-label classification of sexism. Chiril et al. <ref type="bibr" target="#b14">[15]</ref> explore the detection of sexist hate speech, examining the relationship between gender stereotype detection and sexism classification. Similarly, Felmlee et al. <ref type="bibr" target="#b15">[16]</ref> investigate online aggression towards women on social media platforms, focusing on the strategic nature of sexist tweets and the reinforcement of stereotypes.</p><p>Emphasizing the interaction and co-influence of social dimensions, such as gender and profession, can help capture complex social dynamics and inform the development of norms that promote equity and justice, as outlined by Hancock <ref type="bibr" target="#b16">[17]</ref> and Dhamoon <ref type="bibr" target="#b17">[18]</ref>. Specifically, previous social science research has examined hate discourse directed at specific groups of women, such as politicians and celebrities. 
For example, Silva-Paredes and Ibarra Herrera <ref type="bibr" target="#b18">[19]</ref> offer a corpus-based analysis of gender-based aggression towards a Chilean right-wing female politician, while Phipps and Montgomery <ref type="bibr" target="#b19">[20]</ref> and Ritchie <ref type="bibr" target="#b20">[21]</ref> focus on forms of hate speech in media campaigns against Nancy Pelosi and Hillary Clinton, respectively. Specifically for tweets, Saluja and Thilaka <ref type="bibr" target="#b21">[22]</ref> employ Feminist Critical Discourse Theory to draw gender-specific inferences about Twitter discourse concerning Indian political leaders. On the other hand, Ghaffari <ref type="bibr" target="#b22">[23]</ref> analyzes 2000 user-generated posts focusing on the American celebrity Lena Dunham, examining manifestations of hate and stereotypes. To the best of our knowledge, this is the first data-driven work that examines the relationship between women's professional categories and types of misogynistic attacks on online platforms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Data Exploration and Enrichment</head><p>In this section, we detail the construction of our novel AMI-PRF dataset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">AMI Dataset</head><p>We address the lack of misogynistic data annotated w.r.t. victims' professions by enriching the AMI dataset<ref type="foot" target="#foot_1">2</ref> <ref type="bibr" target="#b6">[7]</ref>.</p><p>The dataset includes a coarse-grained distinction between misogynistic and non-misogynistic tweets, as well as a fine-grained labeling of misogynistic tweets, categorizing them into five types of misogynistic hate speech: derailing (justifying the abuse of women), discredit (general slurring), dominance (asserting men's superiority), sexual harassment (sexual advances and violence), and stereotype (oversimplification and objectification).</p><p>We enrich AMI by adding information about the professions of the victims. This enrichment is performed by retrieving from Wikidata<ref type="foot" target="#foot_2">3</ref> professional figures that are subclasses of the person class.</p><p>Our annotation of professions includes four categories, namely 'artist', 'author', 'athlete', and 'politician (and activist)'. We focus on these professions as they are represented in the AMI dataset, based on the popular women referenced. Although the first two are both subclasses of creator, which is an immediate subclass of person, we keep them separate due to their different natures: the former encompasses visual and performing arts, the latter intellectual activities. On the other hand, we choose to group politicians and activists together to highlight their shared involvement in public social activities, even though they are not directly related according to the Wikidata taxonomy.</p><p>As shown in Fig. <ref type="figure">4</ref> (Appendix A), each macro-profession gives rise to a potentially large set of nested sub-professions based on Wikidata's subclass of relationship.</p><p>We leverage these professions to manually label the AMI misogynistic tweets that actually refer to women. 
In order to produce a consistent labeling, we establish the following conventions: if the tweet refers to a famous woman, we choose the first (or unique) occupation among those appearing on her Wikidata page, tracing it back to the appropriate macro-category. This approach mitigates annotation inconsistencies by leveraging an established external resource for labeling. When such information is unavailable, we determine the professional category by examining relevant job details in the tweet content or on the profile page of the victim, if mentioned. For such cases, a collaborative approach was taken during group meetings to share general insights, ensuring that any disagreements were addressed through discussion and ultimately resolved through consensus. In the absence of clues regarding the profession, the tweet is simply labeled as 'generic'.</p><p>Finally, we point out that not all tweets in the AMI dataset have women as victims. In several cases, misogynistic language is used to insult men, companies, or political parties. Out of the 5000 AMI tweets, we initially filtered out those that were not directed at women. Among the remaining tweets, 2187 were labeled as misogynistic. However, we were able to obtain professional categories for only a subset of 380 of these tweets, highlighting the need for additional data collection.</p></div>
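The convention of tracing a fine-grained Wikidata occupation back to a macro-category can be sketched as follows. The `SUBCLASS_OF` edges and the `macro_profession` helper are illustrative stand-ins for Wikidata's actual "subclass of" (P279) hierarchy, not the resource used in the paper:

```python
# Toy subclass map; real edges would come from Wikidata's P279 relation.
SUBCLASS_OF = {
    "novelist": "writer",
    "writer": "author",
    "painter": "artist",
    "senator": "politician",
}
MACROS = {"artist", "author", "athlete", "politician"}

def macro_profession(occupation: str) -> str:
    """Walk up the subclass chain until one of the four macro-categories is hit."""
    seen = set()
    while occupation not in MACROS:
        if occupation in seen or occupation not in SUBCLASS_OF:
            return "generic"  # no clue about the profession (Sec. 3.1)
        seen.add(occupation)
        occupation = SUBCLASS_OF[occupation]
    return occupation
```

The `seen` set guards against cycles that can occur in crowd-sourced taxonomies.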
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">PRF Dataset</head><p>To address the issue of having only a small number of tweets annotated for both misogyny and profession, we crawl additional tweets. From the most common expressions in the misogynistic tweets of AMI, we derive a list of misogynistic keywords. For each of our target professions, we choose five representative popular women, collecting tweets that reference them in the form of a hashtag, mention, and/or explicit name and surname. As a result, we extract 760 tweets labeled with professions, all posted before the beginning of February 2023: we refer to this collection as the Profession (PRF) dataset. Since these tweets are filtered using specific keywords and are directed at popular women, we consider them inherently misogynistic, as a woman is the primary target of the hate speech.</p><p>To identify the type of misogyny in PRF, we leverage BERTweet<ref type="foot" target="#foot_3">4</ref>, a transformer-based <ref type="bibr" target="#b23">[24]</ref> model trained on the AMI multi-classification dataset. We opt for this model since it is pre-trained on Twitter and achieves state-of-the-art performance in Twitter sentiment analysis tasks <ref type="bibr" target="#b24">[25]</ref>. Before training, the AMI tweets are preprocessed with a TweetNormalizer function<ref type="foot" target="#foot_4">5</ref>, which maps emojis into text strings and substitutes user mentions and web/URL links with the @USER and HTTPURL placeholders. For model selection, we perform a stratified cross-validation with k = 5, searching for the best weight decay and learning rate in [1e-2, 1e-5] and [1e-5, 3e-5], respectively. For each configuration, we set 10 epochs, 500 warm-up steps, and train/validation batch sizes of 16 and 8. The optimal performance is achieved with a learning rate of 3e-5 and a weight decay of 1e-2. Tab. 
1 shows BERTweet's performance on the multi-class misogyny detection task on the AMI test set, which comprises 1000 tweets (460 misogynistic). For the multi-classification task, we focus only on misogynistic tweets. The evaluation metrics include Accuracy, as well as weighted and unweighted average Precision, Recall, and F1-score. We adopt this model to label our PRF dataset with types of misogyny.</p></div>
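A minimal sketch of the placeholder substitution performed by the TweetNormalizer step described above; the real script also translates emojis into text strings via a dedicated package, which we omit here, and `normalize_tweet` is an illustrative name rather than the original function:

```python
import re

def normalize_tweet(text: str) -> str:
    """Replace web/URL links and user mentions with HTTPURL / @USER placeholders."""
    text = re.sub(r"https?://\S+|www\.\S+", "HTTPURL", text)  # links first
    text = re.sub(r"@\w+", "@USER", text)                     # then mentions
    return text
```

Replacing links before mentions avoids mangling URLs that happen to contain an `@` character.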
<div xmlns="http://www.tei-c.org/ns/1.0"><head>AMI-PRF Dataset</head><p>By combining the 380 tweets from AMI, which carry ground-truth information on the type of misogyny, with the PRF dataset, labeled with our trained model, we obtain 1140 tweets featuring both misogyny type and profession. This dataset, named AMI-PRF, is leveraged to investigate the relation between misogyny and professions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments and Data Analyses</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Misogyny Type by Profession (RQ1)</head><p>To address RQ1, we examine how different types of misogynistic speech are distributed across professions in AMI-PRF. For each type of misogyny, we count how many tweets of that class are directed towards each profession and qualitatively compare the results in Fig. <ref type="figure" target="#fig_0">2</ref>.</p></div>
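The per-class counts underlying this analysis amount to a simple cross-tabulation, sketched below; the dict keys `'type'` and `'profession'` are hypothetical field names, not the actual AMI-PRF schema:

```python
from collections import Counter

def misogyny_by_profession(tweets):
    """Count tweets for each (misogyny type, profession) pair."""
    return Counter((t["type"], t["profession"]) for t in tweets)

# Toy records standing in for AMI-PRF rows.
toy = [
    {"type": "derailing", "profession": "author"},
    {"type": "derailing", "profession": "author"},
    {"type": "dominance", "profession": "politician"},
]
counts = misogyny_by_profession(toy)
```

Normalizing each count by the column total then gives the per-profession proportions visualized in the alluvial plot.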
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Discussion</head><p>We observe distinct patterns in the use of misogynistic speech across professions: derailing discourse, which focuses on justifying the abuse of women and rejecting male responsibility, primarily targets authors compared to the other professions. This aligns with the nature of derailing speech, which seeks to rationalize the mistreatment of women and deflect male accountability; such discourse can therefore be expected to be commonly directed at public intellectuals and cultural figures. In contrast, dominance-oriented misogynistic discourse, aimed at asserting male superiority, along with stereotypical negative speech, is predominantly directed at powerful figures such as politicians. This prevalence could be explained as an attempt to undermine the legitimacy and value of women holding relevant public roles. Sexual harassment is notably prevalent towards politicians and athletes, as an expression of intent to assert power over women through threats of violence.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Hurtfulness by Profession (RQ2)</head><p>To address RQ2 -whether specific hurtful expressions target women in certain professions -we define a quantitative lexicon-based measure for assessing the hurtfulness of tweets.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Hurtfulness Evaluation</head><p>To define a hurtfulness measure for tweets, we leverage the HurtLex lexicon, which compiles offensive words and stereotyped expressions aimed at insulting and degrading marginalized individuals and groups <ref type="bibr" target="#b25">[26]</ref>. HurtLex organizes words into 17 fine-grained categories, each identifying a specific target or form of offense.</p><p>Inspired by the work of Nozza et al. <ref type="bibr" target="#b11">[12]</ref>, where an indicator of harmful sentence completions is defined for generative language models, we employ a subset of 9 HurtLex categories for our purposes: animals, prostitution, professions, negative connotations, homosexuality, male genitalia, female genitalia, derogatory terms, and crime<ref type="foot" target="#foot_5">6</ref>. The hurtfulness score of a tweet w.r.t. one of the 9 categories could be computed as the ratio of HurtLex lemmas<ref type="foot" target="#foot_6">7</ref> from that category to the total HurtLex lemmas, from any category, present in the tweet. However, an approach relying solely on the HurtLex lexicon would not provide a sufficiently comprehensive analysis, as HurtLex has low coverage of the vocabulary of the AMI-PRF dataset, with only 15.42% of the lemmas in a tweet occurring in HurtLex on average. To enhance our reference vocabulary, we leverage ItEM<ref type="foot" target="#foot_7">8</ref>, a methodology proposed by Passaro and Lenci <ref type="bibr" target="#b9">[10]</ref>. For each lemma in the HurtLex subset, we obtain its vectorial representation using ItEM and the Word2vec Twitter embeddings<ref type="foot" target="#foot_8">9</ref>, following Godin <ref type="bibr" target="#b26">[27]</ref>. For each category, we compute a centroid embedding by averaging the vectors associated with each lemma in that category. This allows us to represent each category through a unique embedding. Tab. 
2 reports the average cosine similarity between the lemmas of a specific category and the respective centroid. Finally, we compute the cosine similarity between each word embedding in the Word2vec Twitter vocabulary and each centroid, thus creating a new lexicon with a coverage of 76.51% w.r.t. the AMI-PRF dataset.</p><p>We leverage the similarity scores to define a hurtful emotive score for each tweet as follows: let t be a lemmatized tweet, 𝑤 a lemma in t, 𝑘 one of the 9 HurtLex categories, 𝑘̃ the centroid of category 𝑘, 𝑠 the cosine similarity function, and 𝑉 the set of vocabulary items, i.e. the words for which we have a Twitter embedding. For each 𝑤 ∈ 𝑉, we define the 𝐼𝑡𝐸𝑀 function as:</p><formula xml:id="formula_0">𝐼𝑡𝐸𝑀(𝑤, 𝑘̃, 𝑡ℎ𝑟) = {︃ 𝑠(𝑤, 𝑘̃) if 𝑠(𝑤, 𝑘̃) ≥ 𝑡ℎ𝑟; 0 if 𝑠(𝑤, 𝑘̃) &lt; 𝑡ℎ𝑟<label>(1)</label></formula><p>where 𝑡ℎ𝑟 designates a threshold in the [0, 1] range. In other words, the 𝐼𝑡𝐸𝑀 function outputs the cosine similarity between 𝑤 and the centroid of 𝑘 if this value is greater than or equal to 𝑡ℎ𝑟, and 0 otherwise. Additionally, if 𝑤 is not found in the vocabulary, its 𝐼𝑡𝐸𝑀 value is also considered 0.</p><p>The Emotive score for a tweet t w.r.t. a category 𝑘 and a threshold 𝑡ℎ𝑟 is then computed as:</p><formula xml:id="formula_1">Emotive(t, 𝑘) = (∑︀ 𝑤∈t 𝐼𝑡𝐸𝑀(𝑤, 𝑘̃, 𝑡ℎ𝑟)) / 𝑞<label>(2)</label></formula><p>where 𝑞 is the number of lemmas in t that occur in 𝑉. This yields, for each tweet-category pair, a score in [0, 1] indicating the tweet's hurtfulness tendency.</p><p>Discussion. Fig. <ref type="figure" target="#fig_1">3</ref> provides a visual analysis of the results. The Emotive score is computed category-wise as the average of the scores of each tweet, after standardizing the values with a z-score approach. We keep a 𝑡ℎ𝑟 of 0.2 in terms of cosine similarity to filter out excessively noisy category associations, while still allowing low values to contribute to the average score. 
This provides a general overview of the hurtful language used across the different professions. According to the Emotive analysis, politicians are mainly targeted with insults related to crime, homosexuality, and male genitalia. This is consistent with what is observed in Fig. <ref type="figure" target="#fig_0">2</ref>, where forms of sexual harassment discourse are mainly directed toward political figures. For artists, we notice a peak w.r.t. female genitalia, while for athletes we register a more balanced trend, except for a peak in negative connotations. On the other hand, authors seem to be mainly targeted with crime- and profession-related topics, consistent with the fact that the types of misogyny most frequently directed at this profession are derailing and stereotypes.</p></div>
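Equations (1) and (2) can be sketched as follows, assuming the lemma vectors are available as a plain dict from string to NumPy array; the function and variable names are ours, not from the authors' code:

```python
import numpy as np

def item(w_vec, centroid, thr=0.2):
    """Eq. (1): cosine similarity to the category centroid, zeroed below `thr`."""
    s = float(np.dot(w_vec, centroid)
              / (np.linalg.norm(w_vec) * np.linalg.norm(centroid)))
    return s if s >= thr else 0.0

def emotive(tweet_lemmas, embeddings, centroid, thr=0.2):
    """Eq. (2): average ItEM score over the q in-vocabulary lemmas of the tweet."""
    in_vocab = [w for w in tweet_lemmas if w in embeddings]  # OOV lemmas score 0
    if not in_vocab:
        return 0.0
    return sum(item(embeddings[w], centroid, thr) for w in in_vocab) / len(in_vocab)

# Toy 2-d example: one lemma aligned with the centroid, one orthogonal to it.
emb = {"slur": np.array([1.0, 0.0]), "hello": np.array([0.0, 1.0])}
centroid = np.array([1.0, 0.0])
score = emotive(["slur", "hello", "oov"], emb, centroid)  # (1.0 + 0.0) / 2
```

In the toy example the orthogonal lemma falls below the 0.2 threshold and contributes 0, while the out-of-vocabulary lemma is excluded from the denominator 𝑞.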
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this paper, we investigated the phenomenon of misogyny on Twitter through the lens of hurtfulness, qualifying its different manifestations according to the profession of the targets of the misogynistic attacks.</p><p>Specifically, we examined how different types of misogyny are distributed across various professions, unveiling how derailing discourse is mostly used to attack authors, while dominance and sexual harassment speech especially targets politicians.</p><p>Additionally, through a hurtfulness score we studied how the language used in misogynistic tweets varies across professions: politicians tend to be targeted with hate speech revolving around sexuality (female/male genitalia, homosexuality) and crime, while artists seem to be insulted mainly through general derogatory terms. Less heterogeneous results were obtained for athletes and authors, except for peaks in hurtful topics regarding crime and professions.</p><p>We acknowledge two potential limitations of our contribution: the incomplete coverage of our dataset's vocabulary by the HurtLex-based ItEM lexicon, and our decision to focus on just four professions, which, as motivated above, was guided by the representation of those professions in the AMI dataset. We therefore plan to extend the approach by adopting a vocabulary with richer coverage of the datasets, as well as by expanding the set of professions. As a further future investigation, it could be assessed how the hurtfulness dimensions change when using different lexicons or automatic approaches. 
We also intend to investigate the distribution of both textual and multi-modal misogynistic language, as well as the broader expression of emotions in posts associated with different professions.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Alluvial plot depicting the relationship between misogyny types and professions. Thicker streams indicate a higher number of tweets corresponding to the misogyny type originating from the respective block.</figDesc><graphic coords="4,302.62,84.19,203.35,115.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Emotive z-scores for HurtLex categories with respect to professions.</figDesc><graphic coords="5,328.04,84.19,152.53,145.34" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Label distributions extracted from the Figure 1 workflow (AMI subset): der 4.61%, dis 51.35%, dom 12.30%, sex 17.28%, ste 14.44%; not mis 56.10%, mis 43.90%.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>Label distributions and workflow steps extracted from the Figure 1 workflow: politician 30.10% 19.03% artist athlete author 21.49% 29.38%; der 4.91%, dis 54.30%, dom 9.74%, sex 11.84%, ste 19.21%; PRF professions: politician 21.84%, artist 28.69%, athlete 31.05%, author 18.42%. Steps: tweets filtering; manual annotation of profession; automatic annotation of misogyny type.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1</head><label>1</label><figDesc>BERTweet multi-classification results on the AMI test set.</figDesc><table><row><cell></cell><cell>support%</cell><cell>Precision</cell><cell>Recall</cell><cell>F1-score</cell></row><row><cell>der</cell><cell>2.391%</cell><cell>0.250</cell><cell>0.273</cell><cell>0.261</cell></row><row><cell>dis</cell><cell>30.65%</cell><cell>0.626</cell><cell>0.794</cell><cell>0.700</cell></row><row><cell>dom</cell><cell>26.95%</cell><cell>0.811</cell><cell>0.484</cell><cell>0.606</cell></row><row><cell>sex</cell><cell>9.565%</cell><cell>0.500</cell><cell>0.773</cell><cell>0.607</cell></row><row><cell>ste</cell><cell>30.43%</cell><cell>0.906</cell><cell>0.821</cell><cell>0.861</cell></row><row><cell>Macro Avg.</cell><cell>-</cell><cell>0.618</cell><cell>0.629</cell><cell>0.607</cell></row><row><cell>Wtd. Avg.</cell><cell>-</cell><cell>0.740</cell><cell>0.704</cell><cell>0.704</cell></row><row><cell>Accuracy</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>0.704</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2</head><label>2</label><figDesc>Average cosine similarity between HurtLex lemmas and ItEM centroids using Word2vec Twitter embeddings.</figDesc><table><row><cell>HurtLex Category</cell><cell>Centroid similarity</cell></row><row><cell>animals</cell><cell>0.57</cell></row><row><cell>prostitution</cell><cell>0.60</cell></row><row><cell>professions</cell><cell>0.60</cell></row><row><cell>negative connotations</cell><cell>0.55</cell></row><row><cell>homosexuality</cell><cell>0.59</cell></row><row><cell>male genitalia</cell><cell>0.52</cell></row><row><cell>female genitalia</cell><cell>0.56</cell></row><row><cell>derogatory</cell><cell>0.56</cell></row><row><cell>crime</cell><cell>0.57</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The dataset is accessible for research purposes by requesting it by email from the authors. To protect the identities of the affected women, we chose to omit explicit references to profiles and original tweet IDs from the dataset.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://live.european-language-grid.eu/catalogue/corpus/7272</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://www.wikidata.org/wiki/Wikidata:Main_Page</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://github.com/VinAIResearch/BERTweet</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">For detailed descriptions of each category, we refer to Bassignana et al.<ref type="bibr" target="#b25">[26]</ref>.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_6">We retain only conservative-level lemmas.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_7">https://github.com/Unipisa/ItEM/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_8">https://github.com/FredericGodin/TwitterEmbeddings</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Research partially funded by PNRR-PE00000013 "FAIR - Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI" under NextGeneration EU, ERC-2018-ADG G.A. 834756 XAI: Science and technology for the eXplanation of AI decision making under Horizon 2020, and the PRIN 2022 PIANO (Personalized Interventions Against Online Toxicity) project, CUP B53D23013290006.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Supplementary Material</head><p>In Figure <ref type="figure">4</ref>, we display the tree of nested professions based on the Wikidata taxonomy for the popular women selected to collect the PRF dataset (§3.2). Branches identify Wikidata subclass of relationships, while dashed lines mark the connections between women and the first (or only) occupation appearing on their Wikidata pages. We avoid reporting women's names to maintain anonymity. </p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Reclaiming feminism: Challenging everyday misogyny</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>David</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>Policy Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Communicating misogyny: An interdisciplinary research agenda for social psychology</title>
		<author>
			<persName><forename type="first">C</forename><surname>Tileagă</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Social and Personality Psychology Compass</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">e12491</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">&apos;Back to the kitchen, cunt&apos;: Speaking the unspeakable about online misogyny</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Jane</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Continuum</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="558" to="570" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Special issue on online misogyny</title>
		<author>
			<persName><forename type="first">D</forename><surname>Ging</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Siapera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Feminist Media Studies</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page" from="515" to="524" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Exploring gender at work</title>
		<author>
			<persName><forename type="first">J</forename><surname>Marques</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">How do we study misogyny in the digital age? A systematic literature review using a computational linguistic approach</title>
		<author>
			<persName><forename type="first">L</forename><surname>Fontanella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ignazzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sarra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tontodimamma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Humanities and Social Sciences Communications</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="1" to="15" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Overview of the evalita 2018 task on automatic misogyny identification (AMI)</title>
		<author>
			<persName><forename type="first">E</forename><surname>Fersini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nozza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-2263/paper009.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) colocated with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">Tommaso</forename><surname>Caselli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Nicole</forename><surname>Novielli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
		</editor>
		<meeting>the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) colocated with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)<address><addrLine>Turin, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">December 12-13, 2018. 2018</date>
			<biblScope unit="volume">2263</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter</title>
		<author>
			<persName><forename type="first">V</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bosco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Fersini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nozza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Patti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Rangel Pardo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sanguinetti</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/S19-2007</idno>
		<ptr target="https://aclanthology.org/S19-2007" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th International Workshop on Semantic Evaluation, Association for Computational Linguistics</title>
				<meeting>the 13th International Workshop on Semantic Evaluation, Association for Computational Linguistics<address><addrLine>Minneapolis, Minnesota, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="54" to="63" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">SemEval-2020 task 12: Multilingual offensive language identification in social media (OffensEval 2020)</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zampieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rosenthal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Atanasova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Karadzhov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Mubarak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Derczynski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Pitenis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ç</forename><surname>Çöltekin</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2020.semeval-1.188</idno>
		<ptr target="https://aclanthology.org/2020.semeval-1.188" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourteenth Workshop on Semantic Evaluation, International Committee for Computational Linguistics</title>
				<meeting>the Fourteenth Workshop on Semantic Evaluation, International Committee for Computational Linguistics<address><addrLine>Barcelona (online)</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020. 2020</date>
			<biblScope unit="page" from="1425" to="1447" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Evaluating context selection strategies to build emotive vector space models</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Passaro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lenci</surname></persName>
		</author>
		<ptr target="http://www.lrec-conf.org/proceedings/lrec2016/summaries/637.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016</title>
				<editor>
			<persName><forename type="first">N</forename><surname>Calzolari</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Choukri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Declerck</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Goggi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Grobelnik</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Maegaard</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Mariani</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Mazo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Moreno</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Odijk</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Piperidis</surname></persName>
		</editor>
		<meeting>the Tenth International Conference on Language Resources and Evaluation LREC 2016<address><addrLine>Portorož, Slovenia</addrLine></address></meeting>
		<imprint>
			<publisher>European Language Resources Association (ELRA)</publisher>
			<date type="published" when="2016">May 23-28, 2016. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Leveraging CLIP for image emotion recognition</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bondielli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Passaro</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-3015/paper172.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifth Workshop on Natural Language for Artificial Intelligence (NL4AI 2021) co-located with 20th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2021), Online event</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">E</forename><surname>Cabrio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Croce</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Passaro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Sprugnoli</surname></persName>
		</editor>
		<meeting>the Fifth Workshop on Natural Language for Artificial Intelligence (NL4AI 2021) co-located with 20th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2021), Online event</meeting>
		<imprint>
			<date type="published" when="2021-11-29">November 29, 2021. 2021</date>
			<biblScope unit="volume">3015</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">HONEST: measuring hurtful sentence completion in language models</title>
		<author>
			<persName><forename type="first">D</forename><surname>Nozza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bianchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hovy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Rumshisky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Hakkani-Tür</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Beltagy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bethard</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Cotterell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Chakraborty</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</editor>
		<meeting>the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2021">June 6-11, 2021. 2021</date>
			<biblScope unit="page" from="2398" to="2406" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A quality type-aware annotated corpus and lexicon for harassment research</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rezvan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shekarpour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Balasuriya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Thirunarayan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">L</forename><surname>Shalin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Sheth</surname></persName>
		</author>
		<idno type="DOI">10.1145/3201064.3201103</idno>
		<ptr target="https://doi.org/10.1145/3201064.3201103" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th ACM Conference on Web Science, WebSci 2018</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Akkermans</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Fontaine</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><forename type="middle">E</forename><surname>Vermeulen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Houben</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Weber</surname></persName>
		</editor>
		<meeting>the 10th ACM Conference on Web Science, WebSci 2018<address><addrLine>Amsterdam, The Netherlands</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">May 27-30, 2018. 2018</date>
			<biblScope unit="page" from="33" to="36" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Multi-label categorization of accounts of sexism using a neural framework</title>
		<author>
			<persName><forename type="first">P</forename><surname>Parikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Abburi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Badjatiya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Krishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Chhaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Varma</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/D19-1174</idno>
		<ptr target="https://aclanthology.org/D19-1174" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics</title>
				<meeting>the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics<address><addrLine>Hong Kong, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1642" to="1652" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">&quot;Be nice to your wife! The restaurants are closed&quot;: Can gender stereotype detection improve sexism classification?</title>
		<author>
			<persName><forename type="first">P</forename><surname>Chiril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Benamara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Moriceau</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.findings-emnlp.242</idno>
		<ptr target="https://aclanthology.org/2021.findings-emnlp.242" />
	</analytic>
	<monogr>
		<title level="m">Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics</title>
				<meeting><address><addrLine>Punta Cana, Dominican Republic</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="2833" to="2844" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Sexist slurs: Reinforcing feminine stereotypes online</title>
		<author>
			<persName><forename type="first">D</forename><surname>Felmlee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Inara Rodis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sex Roles</title>
		<imprint>
			<biblScope unit="volume">83</biblScope>
			<biblScope unit="page" from="16" to="28" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">When multiplication doesn&apos;t equal quick addition: Examining intersectionality as a research paradigm</title>
		<author>
			<persName><forename type="first">A.-M</forename><surname>Hancock</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Perspectives on Politics</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="63" to="79" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Considerations on mainstreaming intersectionality</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Dhamoon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Political Research Quarterly</title>
		<imprint>
			<biblScope unit="volume">64</biblScope>
			<biblScope unit="page" from="230" to="243" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Resisting antidemocratic values with misogynistic abuse against a Chilean right-wing politician on Twitter: The #camilapeluche incident</title>
		<author>
			<persName><forename type="first">D</forename><surname>Silva-Paredes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">Ibarra</forename><surname>Herrera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Discourse &amp; Communication</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="426" to="444" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">&quot;Only YOU Can Prevent This Nightmare, America&quot;: Nancy Pelosi As the Monstrous-Feminine in Donald Trump&apos;s YouTube Attacks</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">B</forename><surname>Phipps</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Montgomery</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Women&apos;s Studies in Communication</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="316" to="337" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Creating a monster: Online media constructions of Hillary Clinton during the democratic primary campaign</title>
		<author>
			<persName><forename type="first">J</forename><surname>Ritchie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Feminist Media Studies</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="102" to="119" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Women leaders and digital communication: Gender stereotyping of female politicians on twitter</title>
		<author>
			<persName><forename type="first">N</forename><surname>Saluja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Thilaka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Content, Community &amp; Communication</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="227" to="241" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Discourses of celebrities on instagram: digital femininity, self-representation and hate speech</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ghaffari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Social Media Critical Discourse Studies</title>
				<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="43" to="60" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Attention is all you need</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vaswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Parmar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Polosukhin</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017</title>
				<editor>
			<persName><forename type="first">I</forename><surname>Guyon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">U</forename><surname>Luxburg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bengio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Fergus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">V N</forename><surname>Vishwanathan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<meeting><address><addrLine>Long Beach, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">December 4-9, 2017. 2017</date>
			<biblScope unit="page" from="5998" to="6008" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Sentiment analysis in tweets: an assessment study from classical to modern word representation models</title>
		<author>
			<persName><forename type="first">S</forename><surname>Barreto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Moura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Carvalho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Paes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Plastino</surname></persName>
		</author>
		<idno type="DOI">10.1007/S10618-022-00853-0</idno>
		<ptr target="https://doi.org/10.1007/s10618-022-00853-0" />
	</analytic>
	<monogr>
		<title level="j">Data Min. Knowl. Discov</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="318" to="380" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Hurtlex: A multilingual lexicon of words to hurt</title>
		<author>
			<persName><forename type="first">E</forename><surname>Bassignana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Patti</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-2253/paper49.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">E</forename><surname>Cabrio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Mazzei</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Tamburini</surname></persName>
		</editor>
		<meeting>the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)<address><addrLine>Torino, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">December 10-12, 2018. 2018</date>
			<biblScope unit="volume">2253</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">Improving and interpreting neural networks for word-level prediction tasks in natural language processing</title>
		<author>
			<persName><forename type="first">F</forename><surname>Godin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
			<pubPlace>Belgium</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Ghent University</orgName>
		</respStmt>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
