<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Cracking Down on Digital Misogyny with MULTILATE a MULTImodaL hATE Detection System Notebook for the Exist 2024 Lab at CLEF 2024</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Advaitha</forename><surname>Vetagiri</surname></persName>
							<email>advaitha21_rs@cse.nits.ac.in</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science and Engineering</orgName>
								<orgName type="institution">National Institute of Technology Silchar</orgName>
								<address>
									<postCode>788010</postCode>
									<settlement>Assam</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Prateek</forename><surname>Mogha</surname></persName>
							<email>prateek_ug@ee.nits.ac.in</email>
							<affiliation key="aff1">
								<orgName type="department">Department of Electrical Engineering</orgName>
								<orgName type="institution">National Institute of Technology Silchar</orgName>
								<address>
									<postCode>788010</postCode>
									<settlement>Assam</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Partha</forename><surname>Pakray</surname></persName>
							<email>partha@cse.nits.ac.in</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science and Engineering</orgName>
								<orgName type="institution">National Institute of Technology Silchar</orgName>
								<address>
									<postCode>788010</postCode>
									<settlement>Assam</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Cracking Down on Digital Misogyny with MULTILATE a MULTImodaL hATE Detection System Notebook for the Exist 2024 Lab at CLEF 2024</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">D7DCD0264AEEC3B645DE09F265F50DA2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:56+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>EXIST 2024</term>
					<term>Convolutional Neural Networks-Bidirectional Long Short-Term Memory</term>
					<term>Residual Network 50</term>
					<term>Sexism Detection</term>
					<term>Sexist Content</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Sexism in social networks manifests in various forms, from blatant misogyny to subtle, implicit biases, presenting a significant societal challenge that necessitates effective detection and mitigation strategies. Addressing this issue involves participation in the EXIST 2024 tasks, a competition designed to advance the identification of sexist content in social media. This year's contest includes both traditional text-based data from tweets and an innovative meme dataset, incorporating both images and text. The approach leverages sophisticated models to analyze these multimodal inputs. For textual modalities, a Convolutional Neural Network-Bidirectional Long Short-Term Memory model is employed to discern sexist language and tweet behaviours. For image modalities, a combination of Residual Network 50 and text-based analysis is utilized to detect and interpret sexist elements within memes. Both models undergo hyperparameter tuning and k-fold cross-validation to ensure robustness and accuracy. Preliminary results indicate that integrating these methods enhances the precision and effectiveness of sexism detection, providing a comprehensive tool for identifying and addressing sexist content in diverse social media formats.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Sexism remains a pressing problem in the contemporary world; it encompasses stereotyped perceptions <ref type="bibr" target="#b0">[1]</ref>, ideological prejudice, and acts of bigotry directed at both men and women <ref type="bibr" target="#b1">[2]</ref>. As the internet continues to assert its presence in people's lives, especially within the context of social discourse on social networks, it becomes important to track and tackle sexism on these platforms. The shared task on sEXism Identification in Social neTworks (EXIST) 2024 <ref type="bibr" target="#b2">[3]</ref>, sponsored by the Conference and Labs of the Evaluation Forum (CLEF), explores the potential of developing and improving methodologies and tools to effectively detect and classify sexist language, bringing together researchers from various disciplines.</p><p>Sexism is a multifaceted and consequential phenomenon: it affects not only individuals but entire societies, preserving gender inequalities, limiting personal and social opportunities, and reinforcing discourses and representations of gendered divisions. This leads to the marginalization of women and the entrenchment of inequalities, delaying the achievement of gender equity. Overt and severe sex-related hatred has long been the main object of concern in this field of research <ref type="bibr" target="#b3">[4]</ref>, but recently there has been growing recognition of the need to investigate the full range of sexist manifestations <ref type="bibr" target="#b4">[5]</ref>. 
The EXIST campaigns aim to cover both blatant and subtle instances of sexism, embracing the full range of its expression and the ways it can be encountered daily.</p><p>The chief aim of EXIST <ref type="bibr" target="#b5">[6]</ref> is to encompass a wide array of sexist expressions, ranging from blatant misogyny to more nuanced and implicit behaviours. The initiative has steadily evolved since its inception, with the 2024 edition introducing fresh challenges and broadening its scope to include multimodal data. The fourth iteration, hosted at the University of Grenoble Alpes, France, from September 9-12, 2024, builds on the groundwork established in previous years while incorporating novel elements to further enhance detection capabilities.</p><p>A significant addition to this year's challenge is the incorporation of a meme dataset alongside the traditional tweet dataset. Memes, which blend images and text to convey humour or commentary, present distinct challenges due to their multimodal nature and the subtleties involved in their interpretation. The EXIST 2024 <ref type="bibr" target="#b6">[7]</ref> task aims to develop robust models adept at identifying sexist content in both textual and visual contexts.</p><p>To address these issues, this paper proposes a multifaceted approach that combines CNN and BiLSTM models <ref type="bibr" target="#b7">[8]</ref>. CNNs are proficient at identifying localized features in both text and images, making them well suited to categorizing the subtle forms of expression found in memes <ref type="bibr" target="#b8">[9]</ref>. BiLSTM networks, on the other hand, excel at capturing long-range dependencies and contextual information in text. This hybrid architecture combines the strengths of both components, providing a comprehensive solution to a task as nuanced as sexism detection. 
The hybrid CNN-BiLSTM structure <ref type="bibr" target="#b9">[10]</ref> captures both fine-grained features and contextual information, which is critical for identifying and categorizing instances of sexism. The present work <ref type="bibr" target="#b10">[11]</ref> contributes to improving current approaches by proposing a tool that automatically identifies and categorizes sexist comments in social media, in contrast to previous work that used a Generative Pre-trained Transformer 2 (GPT-2) <ref type="bibr" target="#b11">[12]</ref>.</p><p>GPT-2 <ref type="bibr" target="#b13">[14]</ref>, developed by OpenAI <ref type="bibr" target="#b12">[13]</ref>, is a language model that marked a significant advance in Natural Language Processing (NLP). A member of the GPT family of models, GPT-2 combines a transformer-based neural network design with copious amounts of training data to produce human-like text. GPT-2 is trained on a vast dataset of internet text, from which it learns the statistical regularities and features of language. This allows it to capture contextual details and generate coherent, contextually appropriate text from a given input. In particular, GPT-2 has previously been used as an effective means of automatically identifying and classifying text containing sexism: by fine-tuning GPT-2 on a dataset labelled as sexist and non-sexist <ref type="bibr" target="#b14">[15]</ref>, the model acquires knowledge of the typical characteristics of text samples with sexist tendencies.</p><p>In the EXIST 2023 task, the CNN-BiLSTM model was found to outperform the previously used GPT-2 model. 
CNN-BiLSTMs, which are specifically designed for text classification tasks, prove more effective at identifying sexist language and behaviours. The model's stronger grasp of the contextual and semantic features of a text also contributes to better identification of sexism. Moreover, combining ResNet50 with the textual model in the "MULTIHATE" system <ref type="bibr" target="#b15">[16]</ref> substantially enhanced the analysis of sexism embedded in memes. Both models have been hyperparameter-tuned and validated using k-fold cross-validation (CV), making them highly reliable. Altogether, these changes have made sexism detection more accurate and efficient than with the previous GPT-2 model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature Survey</head><p>Sexism, a pervasive issue in society, is defined as discrimination based on sex or gender, especially against women and girls. Sexism can also be the belief that one sex is superior to another, imposing limits on what men and women should do. In most societies, sexism is applied chiefly against women and girls as a consequence of patriarchy, or male domination. The problem extends beyond individual discrimination to systemic inequalities that affect various aspects of a woman's life, including employment, education, and social interactions. Researchers have identified multiple categories and impacts of sexism in society, leading to a broad body of literature exploring its various dimensions and proposing methods for its identification and mitigation, especially in the field of deep learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Negative Effects of Sexism</head><p>The consequences of sexism are far-reaching and detrimental. Sexism contributes to gender inequality in the workplace, limiting opportunities for women in terms of promotions, salaries, and job roles. This inequality is often perpetuated through both discrimination and more subtle biases that affect hiring practices and workplace culture. Educational disparities also emerge, with sexist attitudes influencing the subjects that individuals are encouraged to pursue, often steering women away from STEM fields. Furthermore, sexism can lead to unequal access to resources and support systems, worsening the challenges faced by women.</p><p>Moreover, studies show that persistent exposure to sexist attitudes and behaviours can lead to depression, post-traumatic stress disorder, lower self-esteem, and a heightened risk of other mental health issues <ref type="bibr" target="#b16">[17]</ref>. This pervasive issue affects not only individuals but also the broader society by perpetuating gender inequalities and limiting the potential contributions of all its members.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Categories of Sexism in EXIST 2024</head><p>Ideological and Inequality: Ideological sexism often manifests in cultural norms and legal systems reinforcing gender disparities. For example, some historical and cultural narratives portray females as inferior to males in terms of their abilities.</p><p>Stereotyping and Dominance: A gender stereotype is a generalized view about the attributes or characteristics that women and men ought to possess, or the roles they ought to perform; stereotypes can be both positive and negative, such as 'women are nurturing' and 'women are weak'. Stereotyping holds women back by creating unfair doubts about their skills and leadership, blocking their chances for promotion and equal recognition at work <ref type="bibr" target="#b17">[18]</ref>. Dominance, on the other hand, manifests in power dynamics in which men are considered dominant over women.</p><p>Objectification: Objectification means viewing or treating individuals as objects, reducing them to their physical appearance. This form of sexism is especially common in media representations and advertising <ref type="bibr" target="#b18">[19]</ref>, where women are frequently depicted in ways that focus on their physical attributes rather than their skills or personalities. Objectification can lead to dehumanization, where women are valued less for their personality or work and more for their appearance.</p><p>Sexual Violence: This severe form of sexism is when an individual is forced or manipulated into unwanted sexual activity without their consent. This includes sexual assault or rape, harassment, exploitation, public flashing and watching someone in a private act without their knowledge or permission. 
Sexual violence can have a permanent effect on a woman's life, which can lead to depression <ref type="bibr" target="#b19">[20]</ref>.</p><p>Misogyny and Non-Sexual Violence: Misogyny refers to hatred towards women; it is a form of sexism that can keep women at a lower social status than men <ref type="bibr" target="#b20">[21]</ref>. Misogyny has taken various forms, such as discrimination, objectification, belittlement, or violence, and often stems from deeply rooted societal attitudes and stereotypes about gender roles and power dynamics. Non-sexual violence includes behaviours such as verbal abuse, threats, and other forms of intimidation that are not explicitly sexual but are driven by gender bias <ref type="bibr" target="#b21">[22]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Identification of Sexism</head><p>With advancements in technology, particularly in the domains of machine learning (ML) and deep learning (DL), many methods have been developed to identify and analyze instances of sexism on digital platforms.</p><p>Machine learning techniques, such as support vector machines (SVM) and random forests, have been applied to classify and detect sexist content in text data. These methods rely on feature extraction and supervised learning to differentiate sexist remarks from non-sexist ones. By training models on labelled datasets, researchers can develop systems that automatically identify and categorize sexist language <ref type="bibr" target="#b22">[23]</ref> in both Spanish and English. These models use various textual features, such as word frequencies, n-grams, and syntactic structures, to distinguish between different types of sexist content. Similarly, <ref type="bibr" target="#b23">[24]</ref> used two datasets to identify online hate speech directed towards women.</p><p>Deep learning approaches have further enhanced the ability to identify sexism by automatically learning hierarchical representations of text data. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are commonly employed in this domain. CNNs effectively capture local patterns in text, while RNNs, particularly Long Short-Term Memory (LSTM) networks, excel at modelling sequential dependencies and context <ref type="bibr" target="#b24">[25]</ref>. These methods have been shown to outperform traditional ML techniques in various natural language processing tasks, including sentiment analysis and hate speech detection <ref type="bibr" target="#b25">[26]</ref>. To tackle the issue of sexism in memes, researchers have applied CNN-BiLSTM models to meme classification. 
CNNs are employed to extract features from the image components of memes, while BiLSTM networks process the textual content, capturing both spatial and contextual information effectively <ref type="bibr" target="#b26">[27]</ref>. This combination allows for a comprehensive analysis of memes, considering both visual and textual cues to accurately identify sexist content. Integrating these models enables the detection of subtle and complex forms of sexism that may not be apparent through text or image analysis alone.</p><p>ResNet50 is a deep residual network that has shown superior performance in image recognition <ref type="bibr" target="#b27">[28]</ref> and can be used for identifying sexist content in memes. It handles the complexities of image data, making it well-suited for detecting subtle visual cues that indicate sexism. Hence, it can be used to differentiate between hateful and not-hateful memes when combined with other models such as LSTM <ref type="bibr" target="#b28">[29]</ref>.</p></div>
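The classical pipeline surveyed above (word n-gram features fed to a linear classifier such as an SVM) can be sketched with scikit-learn; the toy phrases and labels below are invented purely for illustration and are not drawn from any EXIST corpus:

```python
# Minimal sketch of a classical sexism-classification baseline:
# TF-IDF-weighted word n-grams (unigrams + bigrams) fed to a linear SVM.
# The tiny toy corpus below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "women belong in the kitchen",        # sexist
    "she is too emotional to lead",       # sexist
    "great goal in yesterday's match",    # not sexist
    "the new library opens on monday",    # not sexist
]
labels = ["sexist", "sexist", "not_sexist", "not_sexist"]

# ngram_range=(1, 2) approximates the "word frequencies, n-grams"
# features mentioned above; TF-IDF down-weights very common words.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
clf.fit(texts, labels)

print(clf.predict(["women are too emotional"])[0])
```

With a realistically sized labelled corpus, the same pipeline yields the supervised feature-based classifiers the survey describes; here the training set is far too small to be meaningful.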
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Dataset</head><p>Since 2021, the primary aim of the EXIST campaigns has been the detection of sexism in tweets <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b30">31,</ref><ref type="bibr" target="#b31">32]</ref>. Over the years, three distinct corpora of annotated tweets have been amassed for various EXIST tasks. In line with this tradition, the focus of EXIST 2024 remains on identifying sexism in textual content, utilizing the EXIST 2023 <ref type="bibr" target="#b31">[32]</ref> dataset, and expanding to encompass memes. Memes, which are images typically adorned with text captions, often carry humour and circulate widely on social media, forums, and other digital platforms. These memes can serve as vehicles for misinformation, perpetuate stereotypes, or degrade individuals. For EXIST 2024, a comprehensive lexicon of terms and expressions indicative of sexist memes has been meticulously curated, drawing from expressions that have proven effective in identifying sexism in previous EXIST editions. This lexicon includes a diverse array of topics, incorporating terms used in both sexist and non-sexist contexts, all centred around women. The final compilation includes 250 terms, with 112 in English and 138 in Spanish. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Crawling</head><p>These terms were employed as search queries on Google Images to retrieve the top 100 images. Through rigorous manual curation, efforts were made to define memes accurately and eliminate noise, such as images without text, text-only images, advertisements, and duplicates. The final collection comprises over 3,000 memes per language. Given the heterogeneous proportion of memes per term, the most unbalanced seeds were discarded to ensure that each seed had at least five memes. Furthermore, the final dataset was curated to achieve the most equitable distribution of memes per seed. To avoid selection bias, memes were randomly chosen, adhering to the appropriate distribution per seed. Consequently, the training set contains more than 2,000 memes per language, while the test set includes over 500 memes per language.</p></div>
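The seed-filtering and balancing steps described above (discard seeds with fewer than five memes, then sample randomly towards an equitable distribution) can be sketched as follows; the seed names and meme counts are invented for illustration:

```python
import random

# Hypothetical meme collections per seed term; the names and counts are
# invented for illustration and do not reflect the real EXIST 2024 crawl.
memes_per_seed = {
    "seed_a": ["m1", "m2", "m3", "m4", "m5", "m6", "m7"],
    "seed_b": ["m8", "m9"],  # fewer than five memes: discarded
    "seed_c": ["m10", "m11", "m12", "m13", "m14", "m15"],
}

MIN_MEMES = 5  # each retained seed must have at least five memes

# 1) Discard the most unbalanced seeds (fewer than MIN_MEMES memes).
kept = {s: m for s, m in memes_per_seed.items() if len(m) >= MIN_MEMES}

# 2) Randomly sample the same number of memes from every remaining seed,
#    approximating an equitable distribution and avoiding selection bias.
per_seed = min(len(m) for m in kept.values())
rng = random.Random(0)  # fixed seed so the sketch is reproducible
balanced = {s: rng.sample(m, per_seed) for s, m in kept.items()}

for seed, memes in balanced.items():
    print(seed, len(memes))
```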
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Labeling Process</head><p>As with the previous edition, potential sources of "label bias" have been carefully considered. Label bias can stem from the socio-demographic differences of the individuals involved in the annotation process, as well as from scenarios where multiple correct labels exist or where labelling decisions are highly subjective, as shown in Figure <ref type="figure" target="#fig_0">1</ref>. To mitigate label bias, two different social and demographic parameters were considered: gender (MALE/FEMALE) and age (18-22 years/23-45 years/46+ years). Each meme was annotated by 6 crowdsourcing annotators selected via the Prolific app, following guidelines established by two gender issues experts. As an added feature in the datasets, both for 2023 and 2024, three additional demographic characteristics of each annotator have been included: level of education, ethnicity, and country of residence.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Learning with Disagreements</head><p>The notion that natural language expressions have a singular and clearly identifiable interpretation in any given context is an oversimplification, particularly in the realm of highly subjective tasks like sexism detection. The learning with disagreements paradigm addresses this by enabling systems to learn from datasets without gold-standard annotations, instead providing information about all annotator responses to capture the diversity of perspectives. In line with methods proposed for training directly from data with disagreements, all annotations per instance from the 6 different strata of annotators will be provided rather than using an aggregated label.</p></div>
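Providing all six annotators' responses instead of a single aggregated gold label can be sketched as a soft-label construction; the votes below are invented for illustration:

```python
from collections import Counter

# Learning-with-disagreements sketch: each instance carries the raw
# responses of all six annotators rather than one aggregated label.
# The votes below are invented for illustration.
annotations = ["YES", "YES", "NO", "YES", "NO", "YES"]

def soft_label(votes):
    """Turn raw annotator votes into a probability distribution,
    preserving the disagreement rather than collapsing it to one label."""
    counts = Counter(votes)
    total = len(votes)
    return {label: counts[label] / total for label in sorted(counts)}

# 2 "NO" and 4 "YES" votes become probabilities 1/3 and 2/3.
print(soft_label(annotations))
```

A model trained against such distributions (e.g. with a cross-entropy loss on the soft targets) learns from the diversity of perspectives that the paradigm aims to capture.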
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">System Overview</head><p>For Tasks 4, 5, and 6, which concentrate on image and text data, the ResNet50 model for images and the CNN-BiLSTM model for text are employed, as described above. ResNet50, a deep convolutional neural network with 50 layers, is utilized for its robust image feature extraction capabilities. The model undergoes fine-tuning on a curated dataset of memes to enhance its performance for each task. In Task 4, the ResNet50 model performs binary classification to determine whether a given meme is sexist. By training on labelled data, the model learns to recognize visual and textual elements indicative of sexism. The output for Task 4 is a binary classification label indicating whether the meme is sexist or not. Task 5 involves categorizing the source intention in memes and distinguishing between Direct and Judgmental intentions. The ResNet50 model learns to identify visual and textual cues associated with each subcategory by training on annotated meme datasets. The output for Task 5 provides the source intention classification label for each meme. Task 6 requires the ResNet50 model to categorize sexism in memes, a multiclass hierarchical multi-label classification problem. The model undergoes fine-tuning on a dataset annotated for sexism categorization in memes, learning to identify subcategories such as Ideological-Inequality, Stereotyping-Dominance, Objectification, Sexual-Violence, and Misogyny-Non-Sexual-Violence. The output for Task 6 provides probabilities or confidence scores for each subcategory, indicating the extent to which the meme belongs to each category.</p><p>The models effectively capture the intricate patterns associated with sexism by harnessing the combined strengths of CNN-BiLSTM for textual data and ResNet50 for image data. 
These models are meticulously fine-tuned through rigorous training and validation processes, achieving high performance in the classification and categorization tasks of EXIST 2024.</p></div>
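The task-wise output design described above can be sketched numerically, assuming the standard 2048-dimensional pooled ResNet50 feature vector; the head weights below are random stand-ins for parameters that would be learned during fine-tuning, and the layer choices are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in for the pooled ResNet50 feature vector (2048-d after
# global average pooling); random here purely for illustration.
features = rng.standard_normal(2048)

# Hypothetical task heads (random stand-ins for learned weights).
w_task4 = rng.standard_normal(2048) * 0.01        # binary: sexist or not
W_task5 = rng.standard_normal((2, 2048)) * 0.01   # Direct vs Judgmental
W_task6 = rng.standard_normal((5, 2048)) * 0.01   # five sexism categories

p_sexist = sigmoid(w_task4 @ features)   # Task 4: single probability
p_intent = softmax(W_task5 @ features)   # Task 5: distribution over intentions
# Task 6 is multi-label: independent sigmoids give a per-category score
# (an illustrative choice; the experiments may use a softmax layer instead).
p_cat = sigmoid(W_task6 @ features)

print(float(p_sexist), float(p_intent.sum()), p_cat.shape)
```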
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experimental Setup</head><p>The experimental framework for the EXIST 2024 tasks entails a meticulous approach to model training, validation, and assessment. This section delineates the data preprocessing protocols, model configurations, hyperparameter optimization, and evaluation metrics employed to achieve peak performance across the six tasks.</p><p>In preparing the textual data for Tasks 1, 2, and 3, several preprocessing steps are undertaken. Initially, texts are tokenized into words and represented with Global Vectors (GloVe) <ref type="bibr" target="#b32">[33]</ref> embeddings, ensuring consistent treatment of punctuation marks and special symbols. Following this, all text is converted to lowercase to maintain uniformity. Common stopwords are excised to minimize noise within the dataset. Sequences are then padded to a standardized length to facilitate batch processing, with excessively long texts truncated to manageable sizes.</p><p>For the image data in Tasks 4, 5, and 6, a different set of preprocessing procedures is followed. Images are resized to 224x224 pixels to match the input dimensions required by the ResNet50 model. Pixel values are normalized to fall within the range of [0, 1], standardizing the input data. To enhance model robustness, data augmentation techniques such as random rotation, flipping, and colour jitter are applied, augmenting the training dataset's variability.</p><p>The CNN-BiLSTM model, used for the textual tasks, is configured with a convolutional layer whose filters operate over the 300-dimensional embeddings, followed by a ReLU activation function and max pooling. The bidirectional LSTM layer comprises 128 units, adept at capturing contextual information in both forward and backward directions. Subsequently, two fully connected layers with 64 and 32 units, respectively, are incorporated, with a dropout rate of 0.5 to mitigate overfitting. 
The output layer is tailored to the specific task: a single sigmoid neuron for Task 1 and softmax layers corresponding to the number of classes for Tasks 2 and 3.</p><p>For the image tasks, the ResNet50 model is combined with the CNN-BiLSTM for text, forming a multimodal approach. The ResNet50 model is initialized with pre-trained weights from ImageNet. A custom classification head replaces the original, incorporating a global average pooling layer, followed by dense layers with 128 and 64 units, respectively, and a dropout rate of 0.5. The output layer is similarly tailored, with a single sigmoid neuron for Task 4 and softmax layers corresponding to the number of classes for Tasks 5 and 6.</p><p>Hyperparameter tuning is conducted via grid search and k-fold cross-validation to pinpoint the optimal parameters for each model. Key hyperparameters, such as learning rate, batch size, and the number of epochs, are systematically varied. Learning rates are explored within the range of 1e-5 to 1e-3, the batch size is fixed at 32, and the number of epochs is determined by early stopping criteria, with patience set to five epochs. The models' performance is evaluated using a variety of metrics. Accuracy, defined as the proportion of correct predictions, is used for both binary and multiclass classification tasks. Precision, recall, and F1-score are calculated to provide a nuanced understanding of the balance between false positives and false negatives. The training and validation processes utilize the training and development datasets, respectively, with final evaluations conducted on the test datasets. This experimental setup ensures a thorough and comprehensive approach to model development, striving to attain superior performance across all tasks in the EXIST 2024 competition.</p></div>
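The textual preprocessing steps described above (tokenization, lowercasing, stopword removal, padding and truncation to a standard length) can be sketched as follows; the stopword list and maximum sequence length are illustrative placeholders, not the exact values used in the experiments:

```python
import re

# Sketch of the textual preprocessing pipeline: tokenize, lowercase,
# drop stopwords, then pad/truncate to a fixed length. The stopword
# list and MAX_LEN are illustrative, not the paper's exact values.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "and", "of", "in"}
MAX_LEN = 8
PAD = "<pad>"

def preprocess(text):
    # Tokenize into words, treating punctuation and symbols consistently.
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    # Excise common stopwords to minimize noise.
    tokens = [t for t in tokens if t not in STOPWORDS]
    # Truncate excessively long sequences, then pad to a standard length
    # so that batches have uniform shape.
    tokens = tokens[:MAX_LEN]
    return tokens + [PAD] * (MAX_LEN - len(tokens))

print(preprocess("The referee IS clearly biased against the away team!"))
```

In the full pipeline, each remaining token would then be looked up in the GloVe embedding table before being fed to the CNN-BiLSTM.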
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Results &amp; Discussion</head><p>This section presents the outcomes of the methodologies applied to Tasks 1-6 in EXIST 2024 on the training dataset, as well as the final results provided by the organizers of the Shared Task on the test dataset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">Training Results</head><p>The training outcomes provide a comprehensive evaluation of the employed methodologies, underscoring their efficacy in addressing the specified tasks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.1.">Task 1, 2 &amp; 3</head><p>For Task 1, which involves binary classification for sexism identification, the average classification report across five folds shows a precision of 0.72, a recall of 0.72, and an F1-score of 0.71. These metrics suggest a well-balanced model performance, where the precision and recall are in harmony, indicating consistent identification of sexist and non-sexist instances. This balance between precision and recall results in a robust F1-score, demonstrating the model's reliability in distinguishing between the two classes. The detailed performance metrics for Task 1 are summarized in Table <ref type="table">3</ref>, and the model's accuracy and loss curves are illustrated in Figure <ref type="figure">2</ref>, while the confusion matrix is shown in Figure <ref type="figure">3</ref>.</p><p>In Task 2, which addresses the hierarchical classification of source intention, the model achieved an average precision of 0.60, a recall of 0.65, and an F1-score of 0.61. The results reflect a reasonable performance, with recall slightly outperforming precision, indicating that the model is somewhat better at retrieving all relevant instances of each class than at avoiding false positives. The moderate precision suggests that further fine-tuning could enhance the model's specificity. The detailed performance metrics for Task 2 are summarized in Table <ref type="table" target="#tab_1">4</ref>, and the model's accuracy and loss curves are illustrated in Figure <ref type="figure">4</ref>, while the confusion matrix is shown in Figure <ref type="figure" target="#fig_3">5</ref>.</p><p>Task 3, which involves multiclass hierarchical multi-label classification for sexism categorization, exhibited average precision and recall scores of 0.58, with an F1-score also at 0.58 across five folds. 
These results point to the complexity of the task, where the model faces difficulties in accurately predicting multiple labels simultaneously. The consistent scores across precision, recall, and F1-score indicate that while the model is competent, there is room for improvement, particularly in refining its ability to handle multiple overlapping categories. The detailed performance metrics for Task 3 are summarized in Table <ref type="table">5</ref>, and the model's accuracy and loss curves are illustrated in Figure <ref type="figure" target="#fig_4">6</ref>, while the confusion matrix is shown in Figure <ref type="figure" target="#fig_5">7</ref>.</p><p>Task 4 focused on image-based classification using the ResNet50 model, and the average classification report across five folds yielded a precision and recall of 0.63 and an F1-score of 0.62. These results highlight the model's consistent performance in classifying images accurately. The close alignment of precision and recall suggests that the model maintains a good balance between correctly identifying positive instances and minimizing false positives. The detailed performance metrics for Task 4 are summarized in Table <ref type="table" target="#tab_3">6</ref>, and the model's accuracy and loss curves are illustrated in Figure <ref type="figure" target="#fig_6">8</ref>, while the confusion matrix is shown in Figure <ref type="figure" target="#fig_7">9</ref>.</p><p>Task 5 presented a more challenging scenario, reflected in the lower average precision of 0.48, recall of 0.53, and an F1-score of 0.50 across five folds. These figures indicate that the model struggled with this task, likely due to the finer granularity required for distinguishing between closely related categories. The disparity between precision and recall suggests that the model identified many relevant instances but also produced a higher rate of false positives. 
The detailed performance metrics for Task 5 are summarized in Table <ref type="table">7</ref>, the model's accuracy and loss curves are illustrated in Figure <ref type="figure" target="#fig_8">10</ref>, and the confusion matrix is shown in Figure <ref type="figure" target="#fig_9">11</ref>.</p><p>In Task 6, which also dealt with hierarchical multi-label classification using images, the model achieved an average precision and recall of 0.51, with an F1-score of 0.52. These results mirror the challenges seen in Task 3, underscoring the inherent difficulty of multi-label classification tasks. The equal precision and recall values reflect a balanced performance, yet also highlight the need for further improvements in handling complex, multifaceted data inputs. The detailed performance metrics for Task 6 are summarized in Table <ref type="table" target="#tab_5">8</ref>, the model's accuracy and loss curves are illustrated in Figure <ref type="figure" target="#fig_10">12</ref>, and the confusion matrix is shown in Figure <ref type="figure" target="#fig_10">13</ref>. The results across these tasks demonstrate varying degrees of model effectiveness, with strengths in binary and simpler hierarchical classifications, but also reveal significant challenges in more complex multi-label tasks. These insights are crucial for directing future enhancements to improve overall performance.</p></div>
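The per-fold averages reported in this section can be reproduced by averaging each fold's metrics; the sketch below illustrates the computation (the fold scores shown are hypothetical placeholders, not the experiment's actual per-fold numbers).

```python
# Average precision/recall/F1 over k folds, as in the 5-fold reports above.
# The fold_reports values are illustrative placeholders, not the paper's data.

def average_reports(fold_reports):
    """Mean of each metric over folds, rounded to two decimals."""
    k = len(fold_reports)
    return {m: round(sum(r[m] for r in fold_reports) / k, 2)
            for m in fold_reports[0]}

fold_reports = [
    {"precision": 0.70, "recall": 0.73, "f1": 0.71},
    {"precision": 0.74, "recall": 0.71, "f1": 0.72},
    {"precision": 0.72, "recall": 0.72, "f1": 0.71},
    {"precision": 0.71, "recall": 0.70, "f1": 0.70},
    {"precision": 0.73, "recall": 0.74, "f1": 0.73},
]

print(average_reports(fold_reports))  # {'precision': 0.72, 'recall': 0.72, 'f1': 0.71}
```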
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">Final Result</head><p>This section presents the evaluation methodology and metrics utilized for each task in the EXIST 2024 competition. The primary evaluation metric used across all tasks is the Information Contrast Measure (ICM) <ref type="bibr" target="#b31">[32]</ref>. Additionally, details about the evaluation package, including the Python script and the contents of the evaluation folder, are provided. Different evaluation metrics are employed for the three task pairs based on the nature of the classification problems and the hierarchical structure of the categories.</p><p>Tasks 1 &amp; 4: Sexism Identification Tasks 1 &amp; 4 are evaluated as mono-label classification. To determine the ground truth labels, a "hard" setting is adopted, where the majority vote from human annotators is used. In this setting, the class annotated by more than three annotators is selected as the ground truth label. The ICM serves as the official metric for Tasks 1 &amp; 4.</p><p>Tasks 2 &amp; 5: Source Intention Tasks 2 &amp; 5 focus on multiclass hierarchical classification, specifically categorizing the source intention as either sexist or not sexist, with further subcategorization into direct, reported, and judgmental. The evaluation metric for these tasks considers the severity of confusion between different categories. In the "hard" setting, the class annotated by more than two annotators is chosen as the ground truth label. The ICM is the official metric for Tasks 2 &amp; 5.</p><p>Tasks 3 &amp; 6: Sexism Categorization Tasks 3 &amp; 6 involve multiclass hierarchical classification with multi-label assignments, where a tweet may belong to multiple subcategories simultaneously. As in Tasks 2 &amp; 5, the evaluation metric considers the hierarchical structure and the possibility of multiple labels. The ground truth labels are determined using a "hard" setting, selecting the labels assigned by multiple annotators. 
The official metric for Tasks 3 &amp; 6 is the ICM, which is extended to ICM-soft to accommodate soft system outputs and ground truth assignments.</p></div>
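The "hard" ground-truth derivation described above amounts to a majority vote over annotator labels. The sketch below illustrates it; the annotator lists and the exact handling of instances with no qualifying majority are assumptions for illustration, while the official evaluation package defines the lab's actual behavior.

```python
from collections import Counter

# "Hard" ground truth: keep the label chosen by more than `threshold`
# annotators; otherwise return None (treating the instance as having no
# hard label -- an assumption here, not the lab's specified handling).

def hard_label(annotations, threshold):
    label, votes = Counter(annotations).most_common(1)[0]
    return label if votes > threshold else None

# Task 1 style: the label must be chosen by more than three annotators.
print(hard_label(["YES", "YES", "YES", "YES", "NO", "NO"], threshold=3))  # YES
print(hard_label(["YES", "YES", "YES", "NO", "NO", "NO"], threshold=3))   # None
```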
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Evaluation Variants for Each Task</head><p>The evaluation is conducted in two different modes for each task: "hard-hard" and "soft-soft". In the "hard-hard" evaluation, systems that provide a conventional hard output are evaluated using hard ground truth labels. The official metric used to measure the system's performance is the ICM. Additionally, F1 scores are calculated and reported for comparison purposes, considering task-specific considerations.</p><p>The "soft-soft" evaluation is conducted for systems that provide probabilities for each category. In this context, the system's probabilities are compared with those assigned by the human annotators. The ICM-soft metric accounts for the probabilistic nature of both system outputs and ground truth labels.</p><p>The use of ICM and ICM-soft metrics in the evaluation process ensures the consideration of the hierarchical structure of categories and the possibility of multiple labels, providing a more analytically grounded evaluation framework than the alternatives in the current state of the art.</p></div>
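For the "soft-soft" mode described above, the soft ground truth for an instance is naturally expressed as the proportion of annotator votes per class. The sketch below builds such a distribution and compares it with hypothetical system probabilities; note that the distance used here (total variation) is only an illustrative proxy, since the official metric is ICM-soft <ref type="bibr" target="#b31">[32]</ref>.

```python
from collections import Counter

# "Soft" ground truth: each class's probability is its share of annotator
# votes. Total variation distance below is an illustrative proxy only; the
# official soft-soft metric is ICM-soft.

def soft_label(annotations, classes):
    counts = Counter(annotations)
    n = len(annotations)
    return {c: counts.get(c, 0) / n for c in classes}

def total_variation(p, q):
    return 0.5 * sum(abs(p[c] - q[c]) for c in p)

gold = soft_label(["YES", "YES", "YES", "YES", "NO", "NO"], ["YES", "NO"])
system = {"YES": 0.8, "NO": 0.2}      # hypothetical system probabilities
print(gold)                            # approx. {'YES': 0.667, 'NO': 0.333}
print(round(total_variation(gold, system), 3))
```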
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusion</head><p>Detecting and countering sexism in social networks remains a vital and prevalent research topic, as the results of the EXIST 2024 shared task confirm. The task employed advanced models, among them the CNN-BiLSTM model for text data and the ResNet50-CNN-BiLSTM model for meme data, to detect and characterize sexist content in text and images. Because they can exploit the semantics and context of texts and recognize the visual cues that characterize memes, these models have proven very useful for categorizing sexist posts. While the models perform competitively on a number of tasks, the variation in results across tasks and labels suggests that there is more work to be done in refining the methods.</p><p>The outcomes of the EXIST 2024 shared task show that the models offer reliable performance in distinguishing sexist from non-sexist content; however, accuracy and reproducibility must still be improved. Persistent difficulties, such as separating the 'DIRECT' and 'REPORTED' categories, or distinguishing among the several kinds of sexism (ideological inequality, stereotyping, objectification, sexual violence, and misogyny or non-sexual violence), illustrate why the problem remains challenging.</p><p>There is also a paramount need to increase the models' reliability in identifying different types of sexism. Addressing this will require expanding and diversifying the datasets on which the systems are trained. 
Effective collaboration among researchers, domain experts, and practitioners will therefore be essential to develop better automated systems for reducing sexism in social networks.</p><p>The EXIST 2024 shared task has offered a wealth of information and a clear starting point for future work, pushing machine learning forward in detecting and eradicating sexism on the Internet. These efforts are expected to contribute to improving society as it increasingly embraces digital platforms.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Comprehensive Visualization of Dataset Classes: An Overview of Various Categories Including Non-Sexist and Sexist Examples.</figDesc><graphic coords="5,161.01,208.10,135.38,140.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>the EXIST 2024 shared task, the CNN-BiLSTM and ResNet50 models are employed for the detection and classification of sexism across various tasks. The competition encompasses six distinct tasks: Task 1 focuses on identifying sexism (binary classification), Task 2 involves classifying the source intention (a multiclass hierarchical classification), Task 3 deals with categorizing sexism (multiclass hierarchical multi-label classification), Task 4 targets the identification of sexism in memes (binary classification), Task 5 categorizes the source intention in memes (multiclass classification), and Task 6 involves the categorization of sexism in memes (multiclass hierarchical multi-label classification).For Tasks 1, 2, and 3, which concentrate on textual data, the CNN-BiLSTM model is utilized. This architecture merges Convolutional Neural Networks (CNN) for feature extraction and Bidirectional Long Short-Term Memory (BiLSTM) networks for capturing sequential dependencies and contextual nuances. The CNN-BiLSTM model undergoes fine-tuning on annotated datasets to optimize its efficacy for each specific task. In Task 1, the CNN-BiLSTM model classifies text instances as either sexist (YES) or not sexist (NO). By training on a dataset labelled for sexism identification, the model discerns patterns and linguistic signals indicative of sexism. The output for Task 1 is a binary classification label that denotes whether the text is sexist or not. Task 2 employs the CNN-BiLSTM model for source intention classification. This hierarchical classification task first differentiates between sexist and non-sexist texts and subsequently categorizes the sexist texts into Direct, Reported, and Judgmental subcategories. The model learns to identify linguistic cues associated with each subcategory by training on a dataset annotated for source intention. 
The output for Task 2 provides the source intention classification label for each text instance. Task 3 involves sexism categorization, a multiclass hierarchical multi-label classification problem. The CNN-BiLSTM model undergoes fine-tuning on a dataset with sexism categorization annotations. The first classification level distinguishes between sexist and non-sexist text, while the second level includes subcategories such as Ideological-Inequality, Stereotyping-Dominance, Objectification, Sexual-Violence, and Misogyny-Non-Sexual-Violence. As Task 3 permits multiple subcategories for a single text instance, the model generates multi-label predictions, outputting probabilities or confidence scores for each subcategory.</figDesc></figure>
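The multi-label output described for Task 3 can be sketched as independent per-subcategory confidence scores thresholded to produce the final label set. The scores, threshold, and fallback to the non-sexist class below are illustrative assumptions, not the model's actual outputs or decision rule.

```python
# Task 3 style multi-label prediction: each subcategory gets an independent
# confidence score, and every label clearing the threshold is emitted.
# Scores, threshold, and the "NO" fallback are made-up illustrations.

LABELS = [
    "IDEOLOGICAL-INEQUALITY",
    "STEREOTYPING-DOMINANCE",
    "OBJECTIFICATION",
    "SEXUAL-VIOLENCE",
    "MISOGYNY-NON-SEXUAL-VIOLENCE",
]

def multi_label_predict(scores, threshold=0.5):
    """Return all subcategories whose confidence reaches the threshold."""
    chosen = [lab for lab in LABELS if scores.get(lab, 0.0) >= threshold]
    return chosen if chosen else ["NO"]  # no subcategory fires: non-sexist

scores = {"STEREOTYPING-DOMINANCE": 0.81, "OBJECTIFICATION": 0.57,
          "SEXUAL-VIOLENCE": 0.12}
print(multi_label_predict(scores))  # ['STEREOTYPING-DOMINANCE', 'OBJECTIFICATION']
```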
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 : Figure 3 :</head><label>2 3</label><figDesc>Figure 2: Task 01 Accuracy and Loss Curves; Figure 3: Task 01 Confusion Matrix</figDesc><graphic coords="8,72.00,575.45,225.63,74.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Task 02 Confusion Matrix</figDesc><graphic coords="9,297.64,199.08,225.64,189.48" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Task 03 Accuracy and Loss Curves</figDesc><graphic coords="10,72.00,128.03,225.63,74.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Task 03 Confusion Matrix</figDesc><graphic coords="10,297.64,65.61,225.63,198.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Task 04 Accuracy and Loss Curves</figDesc><graphic coords="11,72.00,123.69,225.63,74.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Task 04 Confusion Matrix</figDesc><graphic coords="11,297.64,65.61,225.64,190.18" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: Task 05 Accuracy and Loss Curves</figDesc><graphic coords="11,72.00,347.31,225.63,74.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Task 05 Confusion Matrix</figDesc><graphic coords="11,297.64,289.23,225.64,190.18" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 12 : Figure 13 :</head><label>12 13</label><figDesc>Figure 12: Task 06 Accuracy and Loss Curves; Figure 13: Task 06 Confusion Matrix</figDesc><graphic coords="12,72.00,128.03,225.63,74.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Dataset Split Statistics for Tweets<ref type="bibr" target="#b31">[32]</ref> </figDesc><table><row><cell></cell><cell></cell><cell>Table 2</cell><cell></cell><cell></cell></row><row><cell></cell><cell></cell><cell cols="4">Dataset Split Statistics for Memes [6]</cell></row><row><cell></cell><cell cols="2">Dev Train Test</cell><cell></cell><cell cols="2">Train Test</cell></row><row><cell cols="2">Spanish 549</cell><cell>3660 1098</cell><cell cols="2">Spanish 2046</cell><cell>540</cell></row><row><cell>English</cell><cell>489</cell><cell>3260 978</cell><cell cols="2">English 2010</cell><cell>513</cell></row><row><cell>Total</cell><cell cols="2">1038 6920 2076</cell><cell>Total</cell><cell cols="2">4056 1053</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 4</head><label>4</label><figDesc>Task 02 Testing Classification Report</figDesc><table><row><cell>Label</cell><cell cols="3">Precision Recall F1-score</cell></row><row><cell>NO</cell><cell>0.16</cell><cell>0.04</cell><cell>0.07</cell></row><row><cell>DIRECT</cell><cell>0.41</cell><cell>0.16</cell><cell>0.23</cell></row><row><cell>REPORTED</cell><cell>0.46</cell><cell>0.57</cell><cell>0.51</cell></row><row><cell>JUDGEMENTAL</cell><cell>0.77</cell><cell>0.86</cell><cell>0.81</cell></row><row><cell cols="2">Accuracy</cell><cell></cell><cell>0.67</cell></row></table><note>Figure 4: Task 02 Accuracy and Loss Curves</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 5</head><label>5</label><figDesc>Task 03 Testing Classification Report</figDesc><table><row><cell>Label</cell><cell cols="3">Precision Recall F1-score</cell></row><row><cell>NO</cell><cell>0.67</cell><cell>0.04</cell><cell>0.08</cell></row><row><cell>IDEOLOGICAL-INEQUALITY</cell><cell>0.31</cell><cell>0.19</cell><cell>0.24</cell></row><row><cell>STEREOTYPING-DOMINANCE</cell><cell>0.71</cell><cell>0.94</cell><cell>0.81</cell></row><row><cell>OBJECTIFICATION</cell><cell>0.46</cell><cell>0.38</cell><cell>0.41</cell></row><row><cell>SEXUAL-VIOLENCE</cell><cell>0.30</cell><cell>0.11</cell><cell>0.17</cell></row><row><cell>MISOGYNY-NON-SEXUAL-VIOLENCE</cell><cell>0.56</cell><cell>0.06</cell><cell>0.11</cell></row><row><cell>Accuracy</cell><cell></cell><cell></cell><cell>0.66</cell></row><row><cell>6.1.2. Task 4, 5 &amp; 6</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 6</head><label>6</label><figDesc>Task 04 Training Classification Report</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Label Precision Recall F1-score</head><label></label><figDesc></figDesc><table><row><cell>NO</cell><cell>0.67</cell><cell>0.69</cell><cell>0.68</cell></row><row><cell>YES</cell><cell>0.65</cell><cell>0.62</cell><cell>0.64</cell></row><row><cell cols="2">Accuracy</cell><cell></cell><cell>0.66</cell></row><row><cell>Table 7</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Task 05 Training Classification Report</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Label</cell><cell cols="3">Precision Recall F1-score</cell></row><row><cell>NO</cell><cell>0.60</cell><cell>0.68</cell><cell>0.64</cell></row><row><cell>DIRECT</cell><cell>0.47</cell><cell>0.53</cell><cell>0.50</cell></row><row><cell>JUDGEMENTAL</cell><cell>0.32</cell><cell>0.06</cell><cell>0.10</cell></row><row><cell cols="2">Accuracy</cell><cell></cell><cell>0.54</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 8 Task 06 Training Classification Report Label Precision Recall F1-score</head><label>8</label><figDesc></figDesc><table><row><cell>NO</cell><cell>0.50</cell><cell>0.03</cell><cell>0.06</cell></row><row><cell>IDEOLOGICAL-INEQUALITY</cell><cell>0.28</cell><cell>0.15</cell><cell>0.20</cell></row><row><cell>STEREOTYPING-DOMINANCE</cell><cell>0.60</cell><cell>0.80</cell><cell>0.69</cell></row><row><cell>OBJECTIFICATION</cell><cell>0.40</cell><cell>0.32</cell><cell>0.36</cell></row><row><cell>SEXUAL-VIOLENCE</cell><cell>0.25</cell><cell>0.10</cell><cell>0.14</cell></row><row><cell>MISOGYNY-NON-SEXUAL-VIOLENCE</cell><cell>0.50</cell><cell>0.05</cell><cell>0.09</cell></row><row><cell>Accuracy</cell><cell></cell><cell></cell><cell>0.52</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 9</head><label>9</label><figDesc>Task 01 Evaluation Result</figDesc><table><row><cell>Variants</cell><cell cols="6">Run Rank ICM-Soft ICM-Hard ICM-Soft Norm ICM-Hard Norm</cell></row><row><cell>Soft-Soft (All)</cell><cell>A C</cell><cell>0 25</cell><cell>3.1182 -0.2086</cell><cell>--</cell><cell>1 0.4666</cell><cell>--</cell></row><row><cell>Hard-Hard (All)</cell><cell>B C</cell><cell>0 54</cell><cell>--</cell><cell>0.9948 0.1977</cell><cell>--</cell><cell>1 0.5994</cell></row><row><cell>Soft-Soft (ES)</cell><cell>A C</cell><cell>0 31</cell><cell>3.1177 -0.3845</cell><cell>--</cell><cell>1 0.4383</cell><cell>--</cell></row><row><cell>Hard-Hard (ES)</cell><cell>B C</cell><cell>0 55</cell><cell>--</cell><cell>0.9999 0.0672</cell><cell>--</cell><cell>1 0.5336</cell></row><row><cell>Soft-Soft (EN)</cell><cell>A C</cell><cell>0 23</cell><cell>3.1141 -0.0220</cell><cell>--</cell><cell>1 0.4965</cell><cell>--</cell></row><row><cell>Hard-Hard (EN)</cell><cell>B C</cell><cell>0 50</cell><cell>-</cell><cell>0.9798 0.3320</cell><cell>--</cell><cell>1 0.6694</cell></row><row><cell cols="7">EXIST2024_test_gold_soft = A; EXIST2024_test_gold_hard = B; CNLP-NITS-PP = C</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head>Table 10</head><label>10</label><figDesc>Task 02 Evaluation Result</figDesc><table><row><cell>Variants</cell><cell cols="6">Run Rank ICM-Soft ICM-Hard ICM-Soft Norm ICM-Hard Norm</cell></row><row><cell>Soft-Soft (All)</cell><cell>A C</cell><cell>0 15</cell><cell>6.2057 -2.4732</cell><cell>--</cell><cell>1 0.3007</cell><cell>--</cell></row><row><cell>Hard-Hard (All)</cell><cell>B C</cell><cell>0 33</cell><cell>--</cell><cell>1.5378 -0.2694</cell><cell>--</cell><cell>1 0.4124</cell></row><row><cell>Soft-Soft (ES)</cell><cell>A C</cell><cell>0 17</cell><cell>6.2431 -2.7097</cell><cell>--</cell><cell>1 0.2830</cell><cell>--</cell></row><row><cell>Hard-Hard (ES)</cell><cell>B C</cell><cell>0 33</cell><cell>--</cell><cell>1.6007 -0.3778</cell><cell>--</cell><cell>1.6007 0.3820</cell></row><row><cell>Soft-Soft (EN)</cell><cell>A C</cell><cell>0 9</cell><cell>6.1178 -2.2452</cell><cell>--</cell><cell>1 0.3165</cell><cell>--</cell></row><row><cell>Hard-Hard (EN)</cell><cell>B C</cell><cell>0 31</cell><cell>-</cell><cell>1.4449 -0.1572</cell><cell>--</cell><cell>1 0.4456</cell></row><row><cell cols="7">EXIST2024_test_gold_soft = A; EXIST2024_test_gold_hard = B; CNLP-NITS-PP = C</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_8"><head>Table 11 Task 03 Evaluation Result Variants Run Rank ICM-Soft ICM-Hard ICM-Soft Norm ICM-Hard Norm</head><label>11</label><figDesc></figDesc><table><row><cell>Soft-Soft (All)</cell><cell>A C</cell><cell>0 17</cell><cell>9.4686 -5.7385</cell><cell>--</cell><cell>1 0.1970</cell><cell>--</cell></row><row><cell>Hard-Hard (All)</cell><cell>B C</cell><cell>0 25</cell><cell>--</cell><cell>2.1533 -0.9571</cell><cell>--</cell><cell>1 0.2778</cell></row><row><cell>Soft-Soft (ES)</cell><cell>A C</cell><cell>0 17</cell><cell>9.6071 -6.2485</cell><cell>--</cell><cell>1 0.1748</cell><cell>--</cell></row><row><cell>Hard-Hard (ES)</cell><cell>B C</cell><cell>0 27</cell><cell>--</cell><cell>2.2393 -1.0686</cell><cell>--</cell><cell>1 0.2614</cell></row><row><cell>Soft-Soft (EN)</cell><cell>A C</cell><cell>0 14</cell><cell>9.1255 -4.9948</cell><cell>--</cell><cell>1 0.2263</cell><cell>--</cell></row><row><cell>Hard-Hard (EN)</cell><cell>B C</cell><cell>0 22</cell><cell>-</cell><cell>2.0402 -0.8331</cell><cell>--</cell><cell>1 0.2958</cell></row><row><cell cols="7">EXIST2024_test_gold_soft = A; EXIST2024_test_gold_hard = B; CNLP-NITS-PP = C</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_9"><head>Table 12</head><label>12</label><figDesc>Task 04 Evaluation Result</figDesc><table><row><cell>Variants</cell><cell cols="6">Run Rank ICM-Soft ICM-Hard ICM-Soft Norm ICM-Hard Norm</cell></row><row><cell>Soft-Soft (All)</cell><cell>A C</cell><cell>0 27</cell><cell>3.1107 -1.2354</cell><cell>--</cell><cell>1 0.3014</cell><cell>--</cell></row><row><cell>Hard-Hard (All)</cell><cell>B C</cell><cell>0 27</cell><cell>--</cell><cell>0.9832 -0.1234</cell><cell>--</cell><cell>1 0.4372</cell></row><row><cell>Soft-Soft (ES)</cell><cell>A C</cell><cell>0 28</cell><cell>3.1360 -1.2557</cell><cell>--</cell><cell>1 0.2998</cell><cell>--</cell></row><row><cell>Hard-Hard (ES)</cell><cell>B C</cell><cell>0 35</cell><cell>--</cell><cell>0.9815 -0.2781</cell><cell>--</cell><cell>1 0.3584</cell></row><row><cell>Soft-Soft (EN)</cell><cell>A C</cell><cell>0 28</cell><cell>3.0794 -1.2140</cell><cell>--</cell><cell>1 0.3029</cell><cell>--</cell></row><row><cell>Hard-Hard (EN)</cell><cell>B C</cell><cell>0 23</cell><cell>-</cell><cell>0.9848 0.0289</cell><cell>--</cell><cell>1 0.5147</cell></row><row><cell cols="7">EXIST2024_test_gold_soft = A; EXIST2024_test_gold_hard = B; CNLP-NITS-PP = C</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_10"><head>Table 13</head><label>13</label><figDesc>Task 05 Evaluation Result</figDesc><table><row><cell>Variants</cell><cell cols="6">Run Rank ICM-Soft ICM-Hard ICM-Soft Norm ICM-Hard Norm</cell></row><row><cell>Soft-Soft (All)</cell><cell>A C</cell><cell>0 5</cell><cell>4.7018 -1.5907</cell><cell>--</cell><cell>1 0.3308</cell><cell>--</cell></row><row><cell>Hard-Hard (All)</cell><cell>B C</cell><cell>0 9</cell><cell>--</cell><cell>1.4383 -0.3370</cell><cell>--</cell><cell>1 0.3829</cell></row><row><cell>Soft-Soft (ES)</cell><cell>A C</cell><cell>0 5</cell><cell>4.8140 -1.8008</cell><cell>--</cell><cell>1 0.3130</cell><cell>--</cell></row><row><cell>Hard-Hard (ES)</cell><cell>B C</cell><cell>0 8</cell><cell>--</cell><cell>1.4356 -0.3809</cell><cell>--</cell><cell>1 0.3674</cell></row><row><cell>Soft-Soft (EN)</cell><cell>A C</cell><cell>0 5</cell><cell>4.5834 -1.4400</cell><cell>--</cell><cell>1 0.3429</cell><cell>--</cell></row><row><cell>Hard-Hard (EN)</cell><cell>B C</cell><cell>0 5</cell><cell>-</cell><cell>1.4409 -0.2944</cell><cell>--</cell><cell>1 0.3978</cell></row><row><cell cols="7">EXIST2024_test_gold_soft = A; EXIST2024_test_gold_hard = B; CNLP-NITS-PP = C</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_11"><head>Table 14 Task 06 Evaluation Result Variants Run Rank ICM-Soft ICM-Hard ICM-Soft Norm ICM-Hard Norm</head><label>14</label><figDesc></figDesc><table><row><cell>Soft-Soft (All)</cell><cell>A C</cell><cell>0 8</cell><cell>9.4343 -6.6782</cell><cell>--</cell><cell>1 0.1461</cell><cell>--</cell></row><row><cell>Hard-Hard (All)</cell><cell>B C</cell><cell>0 14</cell><cell>--</cell><cell>2.4100 -1.7920</cell><cell>--</cell><cell>1 0.1282</cell></row><row><cell>Soft-Soft (ES)</cell><cell>A C</cell><cell>0 11</cell><cell>9.6290 -6.9019</cell><cell>--</cell><cell>1 0.1416</cell><cell>--</cell></row><row><cell>Hard-Hard (ES)</cell><cell>B C</cell><cell>0 16</cell><cell>--</cell><cell>2.4432 -1.8559</cell><cell>--</cell><cell>1 0.1202</cell></row><row><cell>Soft-Soft (EN)</cell><cell>A C</cell><cell>0 8</cell><cell>9.2546 -6.4165</cell><cell>--</cell><cell>1 0.1533</cell><cell>--</cell></row><row><cell>Hard-Hard (EN)</cell><cell>B C</cell><cell>0 13</cell><cell>-</cell><cell>2.3532 -1.6954</cell><cell>--</cell><cell>1 0.1398</cell></row><row><cell cols="7">EXIST2024_test_gold_soft = A; EXIST2024_test_gold_hard = B; CNLP-NITS-PP = C</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We would like to express our gratitude to the National Institute of Technology Silchar for allowing us to conduct our research and experimentation. We are thankful for the resources and research atmosphere provided by the CNLP &amp; AI Lab, NIT Silchar.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Debating stereotypes: Online reactions to the vicepresidential debate of 2020</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">H</forename><surname>Felmlee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Julien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">C</forename><surname>Francisco</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PloS one</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page">e0280828</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Restorative justice for survivors of sexual violence experienced in adulthood: A scoping review</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Burns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sinko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Trauma, Violence, &amp; Abuse</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="340" to="354" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Exist 2024: sexism identification in social networks and memes</title>
		<author>
			<persName><forename type="first">L</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Carrillo-De Albornoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Amigó</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Morante</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Spina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Maeso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ruiz</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-56069-9_68</idno>
		<ptr target="https://doi.org/10.1007/978-3-031-56069-9_68" />
	</analytic>
	<monogr>
		<title level="m">Advances in Information Retrieval: 46th European Conference on Information Retrieval, ECIR 2024</title>
				<meeting><address><addrLine>Glasgow, UK; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2024">March 24-28, 2024. 2024</date>
			<biblScope unit="page" from="498" to="504" />
		</imprint>
	</monogr>
	<note>Proceedings, Part V</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Blockchain technology and gender equality: A systematic literature review</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Di</forename><surname>Vaio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hassan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Palladino</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Information Management</title>
		<imprint>
			<biblScope unit="volume">68</biblScope>
			<biblScope unit="page">102517</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A systematic review of hate speech automatic detection using natural language processing</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Jahan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Oussalah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="page">126232</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Overview of exist 2024 -learning with disagreement for sexism identification and characterization in social networks and memes</title>
		<author>
			<persName><forename type="first">L</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>De Albornoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ruiz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Maeso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Amigó</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Morante</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Spina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Fifteenth International Conference of the CLEF Association</title>
				<meeting><address><addrLine>CLEF</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024. 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Overview of exist 2024 -learning with disagreement for sexism identification and characterization in social networks and memes (extended overview)</title>
		<author>
			<persName><forename type="first">L</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>De Albornoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ruiz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Maeso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Amigó</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Morante</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Spina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Vetagiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kalita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Halder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Taparia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pakray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Manna</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2404.02013</idno>
		<title level="m">Breaking the silence: detecting and mitigating gendered abuse in Hindi, Tamil, and Indian English online spaces</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Addressing hate speech: Atlantis for efficient hate span detection</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Barman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Poddar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vetagiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pakray</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">A deep dive into automated sexism detection using fine-tuned deep learning and large language models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vetagiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pakray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Das</surname></persName>
		</author>
		<idno>SSRN 4791798</idno>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">Available at</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Examining hate speech detection across multiple Indo-Aryan languages in tasks 1 &amp; 4</title>
		<author>
			<persName><forename type="first">G</forename><surname>Kalita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Halder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Taparia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vetagiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pakray</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Working Notes of FIRE</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Leveraging GPT-2 for automated classification of online sexist content</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vetagiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">K</forename><surname>Adhikary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pakray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Das</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Working Notes of CLEF</title>
		<imprint>
			<biblScope unit="page" from="1107" to="1122" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Achiam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Adler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Akkaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">L</forename><surname>Aleman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Almeida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Altenschmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Altman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Anadkat</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.08774</idno>
		<title level="m">GPT-4 technical report</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Language models are unsupervised multitask learners</title>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Luan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">OpenAI blog</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">9</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">CNLP-NITS at SemEval-2023 task 10: Online sexism prediction, PREDHATE!</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vetagiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Adhikary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pakray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Das</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.semeval-1.113</idno>
		<ptr target="https://aclanthology.org/2023.semeval-1.113" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Association for Computational Linguistics</title>
				<meeting>the 17th International Workshop on Semantic Evaluation (SemEval-2023), Association for Computational Linguistics<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="815" to="822" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">MULTILATE: A synthetic dataset for multimodal hate speech detection</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vetagiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Halder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Das Majumder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pakray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Das</surname></persName>
		</author>
		<idno>SSRN 4733628</idno>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">Available at</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">The impact of gender discrimination on a woman&apos;s mental health</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename><surname>Vigod</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Rochon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">EClinicalMedicine</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Description and prescription: How gender stereotypes prevent women&apos;s ascent up the organizational ladder</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Heilman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Social Issues</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="657" to="674" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">The sexual objectification of women in advertising: A contemporary cultural perspective</title>
		<author>
			<persName><forename type="first">A</forename><surname>Zimmerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dahlberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Advertising Research</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="71" to="79" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Health consequences of sexual violence against women</title>
		<author>
			<persName><forename type="first">R</forename><surname>Jina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Thomas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Best Practice &amp; Research Clinical Obstetrics &amp; Gynaecology</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="15" to="26" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Holland</surname></persName>
		</author>
		<title level="m">A brief history of misogyny: The world&apos;s oldest prejudice</title>
				<imprint>
			<publisher>Hachette UK</publisher>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">&apos;nothing really happened&apos;: the invalidation of women&apos;s experiences of sexual violence</title>
		<author>
			<persName><forename type="first">L</forename><surname>Kelly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Radford</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Critical Social Policy</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="39" to="53" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Automatic identification and classification of misogynistic language on Twitter</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Anzovino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Fersini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Applications of Natural Language to Data Bases</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Online hate speech against women: Automatic identification of misogyny and sexism on Twitter</title>
		<author>
			<persName><forename type="first">S</forename><surname>Frenda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ghanem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Gómez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:156056029" />
	</analytic>
	<monogr>
		<title level="j">J. Intell. Fuzzy Syst</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="4743" to="4752" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Deep learning for hate speech detection in tweets</title>
		<author>
			<persName><forename type="first">P</forename><surname>Badjatiya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Varma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th international conference on World Wide Web companion</title>
				<meeting>the 26th international conference on World Wide Web companion</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="759" to="760" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Hate speech detection: A solved problem? The challenging case of long tail on Twitter</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Luo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="925" to="945" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A hybrid deep BiLSTM-CNN for hate speech detection in multi-social media</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Passi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mahanti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Asian and Low-Resource Language Information Processing</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Deep residual learning for image recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="770" to="778" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Hateful meme prediction model using multimodal deep learning</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Ahmed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Bhadani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Chakraborty</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2021 International Conference on Computing, Communication and Green Engineering (CCGE), IEEE</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Overview of EXIST 2021: sexism identification in social networks</title>
		<author>
			<persName><forename type="first">F</forename><surname>Rodríguez-Sánchez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Carrillo-De Albornoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Comet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Donoso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procesamiento del Lenguaje Natural</title>
		<imprint>
			<biblScope unit="volume">67</biblScope>
			<biblScope unit="page" from="195" to="207" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Overview of EXIST 2022: sexism identification in social networks</title>
		<author>
			<persName><forename type="first">F</forename><surname>Rodríguez-Sánchez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Carrillo-De Albornoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mendieta-Aragón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Marco-Remón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Makeienko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Spina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procesamiento del Lenguaje Natural</title>
		<imprint>
			<biblScope unit="volume">69</biblScope>
			<biblScope unit="page" from="229" to="240" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Overview of EXIST 2023: sexism identification in social networks</title>
		<author>
			<persName><forename type="first">L</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Carrillo-De Albornoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Morante</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Amigó</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Spina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Information Retrieval: 45th European Conference on Information Retrieval, ECIR 2023</title>
				<meeting><address><addrLine>Dublin, Ireland</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">April 2-6, 2023</date>
			<biblScope unit="page" from="593" to="599" />
		</imprint>
	</monogr>
	<note>Proceedings, Part III</note>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">GloVe: Global vectors for word representation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pennington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Socher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Manning</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)</title>
				<meeting>the 2014 conference on empirical methods in natural language processing (EMNLP)</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1532" to="1543" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
