<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">UMUTeam at eRisk@CLEF 2024: Fine-Tuning Transformer Models with Sentiment Features for Early Detection and Severity Measurement of Eating Disorders Notebook for the eRisk Lab at CLEF 2024</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Ronghao</forename><surname>Pan</surname></persName>
							<email>ronghao.pan@um.es</email>
							<affiliation key="aff0">
								<orgName type="department">Facultad de Informática</orgName>
								<orgName type="institution">Universidad de Murcia</orgName>
								<address>
									<addrLine>Campus de Espinardo</addrLine>
									<postCode>30100</postCode>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">José</forename><forename type="middle">Antonio</forename><surname>García-Díaz</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Facultad de Informática</orgName>
								<orgName type="institution">Universidad de Murcia</orgName>
								<address>
									<addrLine>Campus de Espinardo</addrLine>
									<postCode>30100</postCode>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tomás</forename><surname>Bernal-Beltrán</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Facultad de Informática</orgName>
								<orgName type="institution">Universidad de Murcia</orgName>
								<address>
									<addrLine>Campus de Espinardo</addrLine>
									<postCode>30100</postCode>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Rafael</forename><surname>Valencia-García</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Facultad de Informática</orgName>
								<orgName type="institution">Universidad de Murcia</orgName>
								<address>
									<addrLine>Campus de Espinardo</addrLine>
									<postCode>30100</postCode>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">UMUTeam at eRisk@CLEF 2024: Fine-Tuning Transformer Models with Sentiment Features for Early Detection and Severity Measurement of Eating Disorders Notebook for the eRisk Lab at CLEF 2024</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E06732BB7A7B73B91FE9C0934C19CF47</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:51+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Mental disorders</term>
					<term>Deep learning</term>
					<term>Natural Language Processing</term>
					<term>Fine-tuning</term>
					<term>Transformers</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper describes the participation of the UMUTeam in the eRisk shared task organized at CLEF 2024. We addressed Tasks 2 and 3, which concern the early detection of signs of anorexia and the measurement of the severity of eating disorder signs. For this purpose, several approaches were used, including the fine-tuning of a sentence transformer model for measuring the severity of eating disorder signs and the fine-tuning of pre-trained Transformer-based language models with sentiment features for detecting signs of anorexia. For Task 2, we reached 5th position in both the decision-based and ranking-based evaluation rankings. For Task 3, we obtained 5th place out of 5 participants; however, our best run achieved a more balanced overall accuracy and performance across most metrics.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Mental health is the state of a person's psychological and emotional well-being. It includes the ability to manage emotions, cope with stress, maintain satisfying relationships, work productively, and contribute to the community. It can be influenced by many factors, including genetics, life experiences, social environment, stress, and brain chemistry <ref type="bibr" target="#b0">[1]</ref>. In recent years, there has been an increase in mental illness, an alarming phenomenon that has captured the attention of public health officials, experts, researchers, and governments around the world. According to a recent report by the World Health Organization (WHO), one in eight people in the world suffers from a mental illness 1 . Therefore, there is an urgent need to address the factors contributing to the increase in these diseases and to implement effective strategies to improve the mental and physical health of the world's population.</p><p>Several studies have shown that excessive use of social networking sites can have negative effects on mental health, especially in adolescents and young adults, making it a topic of growing interest and concern in research and public health <ref type="bibr" target="#b1">[2]</ref>. This relationship highlights the importance of early detection of mental health symptoms in order to intervene effectively and prevent these problems from worsening.</p><p>For this reason, interest in the detection and identification of mental disorders in social network streams has grown in recent years, driven by the use of advanced Natural Language Processing (NLP) technologies, the increasing prevalence of mental health problems, and their relationship with digital platforms <ref type="bibr" target="#b2">[3]</ref>. 
In addition, a number of mental health-related tasks have emerged in important evaluation campaigns, such as MentalriskES <ref type="bibr" target="#b3">[4]</ref> at the Iberian Languages Evaluation Forum (IberLEF) and eRisk <ref type="bibr" target="#b4">[5]</ref> at the Conference and Labs of the Evaluation Forum (CLEF).</p><p>The eRisk Lab focuses on the development of assessment methodologies and metrics for the early detection of risks on the Internet, especially those related to health and safety issues. The initiative began at CLEF in Dublin in 2017 and has hosted eight editions through 2024. Throughout these editions, the Lab has presented numerous collections and models that address different application domains. Previous editions have explored topics such as depression, eating disorders, gambling, and self-harm detection. Lab tasks include early warning and severity assessment challenges, which involve automated analysis of temporal text streams to predict specific risks and compute detailed symptom estimates from users' writings.</p><p>eRisk@CLEF 2024 <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref> focuses on the early detection of signs of anorexia, the search for symptoms of depression, and measuring the severity of signs of eating disorders. For each shared task, a test collection was defined and evaluation metrics were proposed.</p><p>This paper presents the participation of the UMUTeam in the tasks related to the early detection of signs of anorexia and measuring the severity of signs of eating disorders. For this purpose, several approaches have been employed, including fine-tuning a sentence transformer model to measure the severity of the signs of eating disorders and fine-tuning pre-trained Transformer-based language models with sentiment features for the detection of signs of anorexia. The rest of the paper is organized as follows. Section 2 presents the task and the provided dataset. 
In Section 3, the methodology of our proposed system for each task is described. Section 4 then presents and discusses the results obtained. Finally, Section 5 concludes the paper and outlines directions for future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Task description</head><p>This edition of eRisk focuses on detecting symptoms of depression, signs of anorexia, and the severity of symptoms associated with eating disorders through various datasets and challenges involving automated analysis of temporal text streams to predict specific problems and compute detailed symptom estimates based on user writings. This shared task is divided into three tasks:</p><p>• Task 1: Search for symptoms of depression. This task, a continuation of eRisk 2023's Task 1, involves ranking sentences from user writings based on their relevance to the symptoms of depression outlined in the BDI questionnaire.</p><p>• Task 2: Early detection of signs of anorexia. This task, a continuation of eRisk 2018's T2 and 2019's T1, focuses on the early detection of signs of anorexia. Participants are tasked with sequentially processing pieces of evidence to detect signs of anorexia as early as possible, primarily using Text Mining solutions on social network texts. The test collection follows the format of the collection described in <ref type="bibr" target="#b7">[8]</ref> and comprises writings of social media users, categorized into individuals with anorexia and control users.</p><p>• Task 3: Measuring the severity of the signs of eating disorders. This task involves estimating the level of features associated with an eating disorder diagnosis from a history of user posts. Participants are given a history of each user's posts and are asked to complete a standard eating disorder questionnaire based on the clues found in the posts. The questionnaire is derived from the Eating Disorder Examination Questionnaire (EDE-Q), a 28-item self-report questionnaire adapted from the Eating Disorder Examination (EDE) semi-structured interview, of which only questions 1-12 and 19-28 are used.</p><p>In this edition, we participated in Tasks 2 and 3. 
Table <ref type="table" target="#tab_0">1</ref> shows the distribution of the training dataset. The table reports several measures: the number of subjects, the number of submissions (posts and comments), the average number of submissions per subject, the average number of days from first to last submission, and the average number of words per submission. For Task 3, which is a continuation of eRisk 2022 and 2023 Task 3, we used only the 2023 dataset, which contains a total of 404,404 text-question relations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>This section details the processes, techniques, and tools used for Task 2 and Task 3.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Task 2</head><p>Figure <ref type="figure" target="#fig_0">1</ref> shows the general architecture of our approach for Task 2. First, we performed a preprocessing step by selecting the user messages that are most relevant for anorexia identification. Second, we divided the dataset into two subsets with an 80-20 ratio: a training subset used to train the model, and a held-out validation subset used to evaluate the model's performance during training. Third, the last hidden state of a pre-trained language model is used to obtain the text representation, and a sentiment analysis model is used to obtain sentiment features from the texts. Finally, the last hidden state and the logits from the sentiment analysis model are concatenated to serve as input to a neural network, which acts as the classification head. This network includes a normalization layer (LayerNorm), a dropout layer, linear layers with Tanh as the activation function, and a final linear layer, yielding the anorexia identification model. For this task, we used only the 2019 dataset. From Table <ref type="table" target="#tab_0">1</ref>, we can see that, at the post and comment level, there are a total of 253,752 posts, of which 24,874 belong to anorexia subjects and 228,878 to control subjects, indicating a significant class imbalance. We therefore performed preprocessing to prevent the model from simply learning to predict the majority class and to reduce the noise in the dataset.</p><p>Sentiment analysis involves the use of NLP techniques to identify and categorize the opinions expressed in a text, specifically to determine whether the sentiment is positive, negative, or neutral. For example, <ref type="bibr" target="#b8">[9]</ref> shows the relationship between emotions and mental illness, as well as the importance of automatic recognition in the health field. 
In the context of anorexia, this analysis can help identify language patterns that may indicate the presence of the disorder. In this case, we used only the negative texts from users with anorexia and the positive and neutral texts from control users.</p><p>To address this task, we followed a supervised learning approach. To train our model, we used the two datasets obtained after the selection process. It is worth mentioning that the organizers only provided training data, so we created a custom split for validation. The validation split was created using stratified sampling in order to keep the balance between labels. Table <ref type="table" target="#tab_1">2</ref> shows the distribution of the processed dataset in the training and validation sets. The training set contains a total of 4,656 texts representative of users suffering from anorexia and 11,309 from users not suffering from anorexia. The validation set contains 1,164 anorexia-related texts and 2,828 texts not related to anorexia. We also deleted all mentions, references to URLs, and hashtags from the texts, and identified and removed sequences such as "amp;format=png", "amp;s=7b66887b445eb00d7d842b15e15e15e15f4759f3deb03d", among others. For this task, we evaluated the BERT <ref type="bibr" target="#b9">[10]</ref>, RoBERTa <ref type="bibr" target="#b10">[11]</ref>, and RoBERTa-large <ref type="bibr" target="#b10">[11]</ref> models for text representation and the Cardiff NLP TweetEval model for sentiment analysis.</p><p>BERT <ref type="bibr" target="#b9">[10]</ref> is a language model developed by Google in 2018 based on the Transformer architecture, a neural network designed to process data streams such as text or audio. BERT was pre-trained on large amounts of text, allowing it to capture general linguistic knowledge. 
This pre-trained model can then be fine-tuned for specific natural language processing tasks such as sentiment analysis, machine translation, or question answering.</p><p>RoBERTa <ref type="bibr" target="#b10">[11]</ref> is an extension of Facebook AI's BERT language model. It focuses on large-scale training, removing the next-sentence prediction objective and using more robust training dynamics. These improvements make RoBERTa more effective and accurate than BERT on a variety of natural language processing tasks.</p><p>RoBERTa-large <ref type="bibr" target="#b10">[11]</ref> is a larger and more powerful version of the RoBERTa language model. Like RoBERTa, it is based on the BERT architecture, but it has more parameters and processing power. RoBERTa-large is trained on an even larger dataset for a longer period of time, allowing it to capture more complex and general linguistic patterns.</p><p>The Cardiff NLP TweetEval model <ref type="bibr" target="#b11">[12]</ref> is a RoBERTa-based model specifically trained for sentiment analysis of tweets. It was trained on approximately 58 million tweets and fine-tuned for sentiment analysis using the TweetEval benchmark dataset. Finally, for early detection, we evaluated a strategy that makes a decision when the number of signs of anorexia in a user's messages exceeds a certain threshold.</p></div>
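The fusion of the text representation with the sentiment logits described above can be sketched as follows. This is a minimal NumPy mock-up rather than the actual training code: the 768-dimensional [CLS] vector, the 3 sentiment logits, and the randomly initialized weights are illustrative stand-ins for the fine-tuned parameters, and dropout (active only during training) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # Normalize the concatenated feature vector (LayerNorm without learned scale/shift)
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def classify(cls_vec, sentiment_logits, W1, b1, W2, b2):
    # Concatenate the [CLS] representation with the 3 sentiment logits
    features = np.concatenate([cls_vec, sentiment_logits])
    h = layer_norm(features)
    h = np.tanh(W1 @ h + b1)   # dense layer with Tanh activation
    return W2 @ h + b2          # final linear layer -> 2 class logits

hidden = 768                                  # RoBERTa-base hidden size
cls_vec = rng.standard_normal(hidden)         # stand-in for the last hidden state [CLS]
sent = rng.standard_normal(3)                 # negative / neutral / positive logits
W1 = rng.standard_normal((hidden, hidden + 3)) * 0.01
b1 = np.zeros(hidden)
W2 = rng.standard_normal((2, hidden)) * 0.01
b2 = np.zeros(2)

logits = classify(cls_vec, sent, W1, b1, W2, b2)
print(logits.shape)  # (2,)
```

In the real system these weights would be learned jointly with the backbone during fine-tuning; the sketch only shows how the two feature sources are joined before the classification head.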
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Task 3</head><p>This task involves estimating traits associated with an eating disorder diagnosis from a set of user posts. The organizers provided each user's posting history along with a standardized eating disorder questionnaire. The primary goal of this task is to predict the likely responses to the questionnaire based on the user's posting history.</p><p>The questionnaire in question is the Eating Disorder Examination Questionnaire (EDE-Q), a 28-item self-report questionnaire derived from the semi-structured interview known as the Eating Disorder Examination (EDE). In this case, our goal is to predict responses to questions 1-12 and 19-28. The dataset consists of 28 instances of users' posting histories along with their corresponding responses to the EDE-Q questionnaire.</p><p>For this task, we adopted a fine-tuning approach using a sentence transformer model that uses textual similarity to measure the similarity between potential answers (user thread text) and each question in the EDE-Q. To achieve this, we processed the user text, mapped it to the 22 questions, and assigned a score based on the user's responses to the questionnaire. To derive a scale-based score, we defined specific intervals for each possible answer within the questionnaire:
0. NO DAYS / not at all (0 to 0.1)
1. 1-5 DAYS / slightly (0.1 to 0.2)
2. 6-12 DAYS / slightly (0.2 to 0.3)
3. 13-15 DAYS / moderately (0.3 to 0.4)
4. 16-22 DAYS / moderately (0.4 to 0.5)
5. 23-27 DAYS / markedly (0.5 to 0.7)
6. EVERY DAY / markedly (0.7 to 1.0)
Thus, within the training set, each text is associated with specific questions and assigned a score, which is a randomly generated value that falls within the interval corresponding to the user's answer. We also chose a custom 80-20 split for validation. 
The training set contains 323,523 text-question relations along with their respective scores, while the validation set contains 80,881 such relations.</p><p>For this task, the dataset was first processed by removing contractions, mentions, hashtags, URLs, and AMP expressions, and by extracting emoji features using the emoji Python library. Second, we fine-tuned the multi-qa-mpnet-base-dot-v1 (https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1) and sentence-transformers/all-MiniLM-L6-v2 (https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) models with cosine similarity as the loss function, 10 epochs, and 1,000 warm-up steps. multi-qa-mpnet-base-dot-v1 is based on the MPNet (Masked and Permuted Pre-training) architecture, which builds on the BERT (Bidirectional Encoder Representations from Transformers) model. sentence-transformers/all-MiniLM-L6-v2 is an all-round model tuned for many use cases and trained on a large and diverse dataset of over 1 billion training pairs.</p></div>
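The interval scheme above can be sketched in plain Python. This is an illustrative reconstruction, not the team's code: `answer_to_score` generates a training target by drawing a random value from the answer's interval, and `score_to_answer` inverts the mapping at inference time.

```python
import random

# Interval bounds from Section 3.2: answer k maps to [low, high)
INTERVALS = [
    (0, 0.0, 0.1),   # NO DAYS / not at all
    (1, 0.1, 0.2),   # 1-5 DAYS / slightly
    (2, 0.2, 0.3),   # 6-12 DAYS / slightly
    (3, 0.3, 0.4),   # 13-15 DAYS / moderately
    (4, 0.4, 0.5),   # 16-22 DAYS / moderately
    (5, 0.5, 0.7),   # 23-27 DAYS / markedly
    (6, 0.7, 1.0),   # EVERY DAY / markedly
]

def answer_to_score(answer, rng=random):
    """Training target: a random value inside the answer's interval."""
    _, low, high = INTERVALS[answer]
    return rng.uniform(low, high)

def score_to_answer(score):
    """Inference: map a predicted similarity score back to an EDE-Q answer."""
    for answer, low, high in INTERVALS:
        if score < high:
            return answer
    return 6  # scores at or above 1.0 saturate at the top answer

print(score_to_answer(0.35))  # 3
```

Training pairs would then consist of (question text, post text, `answer_to_score(answer)`), which is the form expected by a similarity-based sentence transformer loss.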
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head><p>This section describes the systems submitted by our team in each run and shows the results obtained in each task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Task 2</head><p>Table <ref type="table">3</ref> shows the results of the different fine-tuning approaches on pre-trained language models with sentiment features on the validation split. The RoBERTa-base model obtained the best performance, with an M-F1 of 0.97, followed by BERT with an M-F1 of 0.95. However, RoBERTa-large, despite being the largest of the three models, obtained the worst result, with an M-F1 of 0.94. We therefore used the fine-tuned RoBERTa-base model with sentiment features for our submissions.</p><p>Based on the overviews of previous eRisk editions, we observed that DeBERTa-based approaches have also yielded some of the best results. For this reason, we also evaluated a fine-tuned DeBERTa model as the base model for our system.</p><p>For this task, we submitted a total of 5 runs with different configurations and thresholds for the early detection approach.</p><p>• Run 0: This run uses the classification model obtained by fine-tuning RoBERTa-base with sentiment features, trained on the preprocessed set of posts. The threshold used in the early detection strategy is 10, i.e., the decision is made once more than 10 of a user's posts are classified as anorexia-related.</p><p>• Run 1: This run uses the same classification model as Run 0, but uses 15 as the threshold of the early detection strategy.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 3</head><p>Results of different fine-tuning approaches on pre-trained language models with sentiment features in the validation split of Task 2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Model M-P M-R M-F1</head><p>BERT 0.958116 0.949115 0.953465
RoBERTa-base 0.972686 0.969555 0.971103
RoBERTa-large 0.944243 0.951291 0.947666</p><p>• Run 2: In this run, we used the fine-tuned DeBERTa model as the classification model and a threshold of 10 for the early detection strategy.</p><p>• Run 3: This run uses the same classification model as Run 2, but uses a threshold of 15 for the early detection strategy.</p><p>• Run 4: This run has the same structure as Run 2, but with the early detection threshold raised to 20.</p><p>Table <ref type="table">4</ref> shows the results of the decision-based evaluation of Task 2, specifically the precision, recall, and F1 score over the five runs. Precision is low across all runs, ranging from 0.14 to 0.16, indicating a limited ability to correctly identify relevant instances. Recall is very high across all runs, between 0.98 and 0.99, demonstrating the model's effectiveness in capturing almost all relevant instances. The F1 score, which balances precision and recall, shows a slight improvement from 0.25 in Run 0 to 0.27 in Run 4. The ERDE 5 and ERDE 50 metrics, which measure early risk detection errors, remain relatively stable, indicating consistent early detection performance across all runs. Latency, which reflects the time taken to make a correct prediction, increases from 18.0 in Run 0 to 35.5 in Run 4. The speed metric decreases slightly, reaching a low of 0.87 in Run 4.</p><p>Overall, Run 4 achieves the highest precision and F1 score at the cost of higher latency, while Runs 0 and 2 offer lower latency with slightly lower precision and F1 scores. With this result, we ranked fifth in the decision-based evaluation.</p></div>
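The threshold-based early alert strategy used in these runs can be sketched as follows. This is an illustrative reconstruction (function and variable names are our own): posts arrive as a stream of per-post classifier decisions, and an alert fires once the running count of anorexia-flagged posts exceeds the threshold.

```python
def early_decision(post_labels, threshold=10):
    """Emit an alert as soon as the count of anorexia-flagged posts
    exceeds the threshold. Returns the 1-based position in the stream
    at which the decision fired, or None if it never fires."""
    positives = 0
    for i, label in enumerate(post_labels, start=1):
        positives += label          # 1 = classified anorexia-related, 0 = control
        if positives > threshold:
            return i
    return None

stream = [0, 1] * 20                # alternating control / anorexia-flagged posts
print(early_decision(stream, threshold=10))  # 22
```

Raising the threshold (as in Runs 1, 3, and 4) trades latency for precision: alerts fire later, but on stronger accumulated evidence, which matches the latency increase from 18.0 to 35.5 reported above.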
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 4</head><p>Results of UMUTeam for Task 2 in decision-based evaluation, including the precision (P), recall (R), and F1-score (F1). Other metrics assessing the performance of the methods are also included. Table <ref type="table">5</ref> shows the ranking-based evaluation results (only the 1-writing result is reported). The five runs are identical: P@10 is systematically 0.20, NDCG@10 is 0.12, and NDCG@100 is 0.14. Both NDCG@10 and NDCG@100, which evaluate ranking quality by considering the relevance of documents at different positions, show consistent values, indicating that the model maintains a similar level of effectiveness when ranking relevant results within the top 10 and top 100 positions. Overall, these results demonstrate consistent performance in the ranking-based evaluation, in which we placed fifth.</p></div>
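For reference, the ranking metrics reported here can be computed as follows. This is a standard textbook implementation with binary relevance judgments, not the official eRisk evaluation script; the toy `ranking` list is purely illustrative.

```python
import math

def dcg_at_k(rels, k):
    # Discounted cumulative gain: relevance discounted by log2 of rank position
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    # Normalize by the DCG of the ideal (relevance-sorted) ranking
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

def precision_at_k(rels, k):
    # Fraction of relevant items among the top-k results
    return sum(1 for r in rels[:k] if r > 0) / k

ranking = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]    # toy binary relevance judgments
print(precision_at_k(ranking, 10))           # 0.2
```

With two relevant items in the top 10, P@10 is 0.2, mirroring the value reported for all five runs.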
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Run</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Task 3</head><p>For this task, we submitted two runs based on fine-tuning a pre-trained sentence transformer model that uses textual similarity to measure the similarity between potential answers (user thread text) and each question in the EDE-Q.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 5</head><p>Results of UMUTeam for Task 2 in ranking-based evaluation (only the 1-writing result reported).</p><p>Run P@10 NDCG@10 NDCG@100
0 0.20 0.12 0.14
1 0.20 0.12 0.14
2 0.20 0.12 0.14
3 0.20 0.12 0.14
4 0.20 0.12 0.14</p><p>• Run 0: This run uses the fine-tuned multi-qa-mpnet-base-dot-v1 model to identify the similarity between each EDE-Q question and the user's posts. Each user post is fed into the system, which calculates its degree of similarity to the EDE-Q question. Based on the score obtained, a possible answer to the question is assigned according to the intervals defined in Section 3.2. Once all posts have passed through the system, the most repeated answer for each question is assigned as the final answer.</p><p>• Run 1: This run uses the same approach as Run 0, but uses the fine-tuned sentence-transformers/all-MiniLM-L6-v2 model.</p><p>Table <ref type="table" target="#tab_3">6</ref> shows the results obtained in the evaluation of Task 3, assessed according to different metrics: MAE (Mean Absolute Error), MZOE (Mean Zero-One Error), MAE_macro, GED (Global Eating Disorder Score), RS (Restraint Subscale), ECS (Eating Concern Subscale), SCS (Shape Concern Subscale), and WCS (Weight Concern Subscale).</p><p>First, we consider the Mean Absolute Error (MAE), which measures the average magnitude of the prediction errors without considering their direction. Run 1 achieved an MAE of 2.227, while Run 0 achieved an MAE of 2.366, indicating that Run 1 made more accurate predictions overall.</p><p>The MZOE metric reports the average error in terms of binary hits and misses. Run 0 had an MZOE of 0.798 compared to 0.859 for Run 1. 
This means that Run 0 made fewer errors when classifying cases exactly.</p><p>As for MAE_macro, which evaluates the mean absolute error balanced across classes, Run 1 performed better, with a value of 2.286 compared to 2.833 for Run 0. This indicates that Run 1 achieved a more balanced performance across the different data categories, which is crucial when all classes are equally important.</p><p>The GED measures the overall accuracy of the model in predicting eating disorders. Run 0 had a GED of 3.261, while Run 1 had a GED of 3.286. Although the difference is small, Run 0 showed slightly better performance on this overall measure.</p><p>For the RS, which measures accuracy in predicting dietary restraint behavior, both runs showed very similar results, with Run 1 scoring an RS of 3.269 and Run 0 scoring 3.285. This parity indicates that both runs are comparable in accuracy on this specific subscale.</p><p>On the ECS, Run 0 showed better performance, with an ECS of 2.659 compared to 2.911 for Run 1. This suggests that Run 0 was more effective at capturing specific eating concerns.</p><p>On the SCS, Run 1 performed better, with an SCS of 2.560 compared to 2.771 for Run 0, suggesting that Run 1 was more accurate in predicting body shape concerns.</p><p>Finally, on the WCS, Run 1 also outperformed Run 0, with a WCS of 2.026 compared to 2.218, demonstrating a better ability of Run 1 to predict weight concerns.</p><p>In summary, although Run 1 showed better overall accuracy and more balanced performance on most metrics, Run 0 excelled in specific aspects such as MZOE, GED, and ECS. </p></div>
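The final-answer selection described for Run 0 above (taking the most repeated per-post answer for each question) can be sketched as follows; this is an illustrative reconstruction and all names are hypothetical.

```python
from collections import Counter

def final_answer(per_post_answers):
    """Final EDE-Q answer for one question: the most repeated
    answer across all of the user's posts (majority vote)."""
    return Counter(per_post_answers).most_common(1)[0][0]

def fill_questionnaire(predictions):
    """predictions: {question_id: [per-post answer, ...]} ->
    {question_id: final answer}"""
    return {q: final_answer(answers) for q, answers in predictions.items()}

# Toy example: per-post answers for two hypothetical questions
print(fill_questionnaire({"Q1": [3, 3, 2, 6, 3], "Q2": [0, 0, 1]}))
```

A design note: majority voting over per-post answers is robust to a few outlier posts, but it can underweight rare, highly indicative posts, which may partly explain the trade-offs between MZOE and the MAE-based metrics discussed above.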
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>This paper summarizes UMUTeam's participation in the eRisk shared task at the 2024 edition of CLEF. The eRisk Lab focuses on the development of assessment methods and metrics for the early detection of risks on the Internet, especially those related to health and safety issues. In this edition, the focus is on detecting symptoms of depression, early detection of signs of anorexia, and measuring the severity of signs of eating disorders in three related subtasks.</p><p>In this shared task, we focused on Task 2 and Task 3, which are related to the early detection of signs of anorexia and measuring the severity of eating disorder signs. For this purpose, several approaches were used, including the fine-tuning of a sentence transformer model for measuring the severity of eating disorder signs and the fine-tuning of pre-trained Transformer-based language models with sentiment features for detecting signs of anorexia.</p><p>In Task 2, we submitted 5 runs with different settings, using different fine-tuned models as the classification model and different thresholds for the early detection strategy. We ranked fifth in the decision-based evaluation; Run 4 achieved the highest precision and F1 score at the cost of higher latency, while Runs 0 and 2 offer lower latency with slightly lower precision and F1 scores. In the ranking-based evaluation, we also placed fifth. In this case, all five runs are identical: P@10 is consistently 0.20, NDCG@10 is 0.12, and NDCG@100 is 0.14.</p><p>From the results obtained, we can see that the sheer number of comments may not be enough; the context and severity of the comments are also important. 
We also found that removing certain negative comments from users labeled as "control" risks preventing the model from learning to properly distinguish between negative comments that are normal and those that are indicative of a mental disorder, which could degrade the performance of the system.</p><p>In Task 3, we submitted two runs based on fine-tuning a pre-trained sentence transformer model that uses textual similarity to measure the similarity between possible answers (user thread text) and each question in the EDE-Q. In this case, Run 1, which is based on a fine-tuned sentence-transformers/all-MiniLM-L6-v2 model, obtained the best result in overall accuracy and a more balanced performance on most metrics.</p><p>As future work, we suggest adding the user's previous context as an input to improve performance, and not removing all negative comments from users marked as "control", so that the model learns to correctly distinguish between negative comments that are normal and those that are indicative of a mental disorder. Furthermore, it is important to examine the relationship between indicators of mental illness and hate speech <ref type="bibr" target="#b12">[13]</ref>, the use of humor <ref type="bibr" target="#b13">[14]</ref>, and the demographic and psychographic characteristics of the message authors <ref type="bibr" target="#b14">[15]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: System architecture of Task 2.</figDesc><graphic coords="3,72.00,480.05,451.26,88.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Distribution datasets from Task 2.</figDesc><table><row><cell></cell><cell>2018</cell><cell></cell><cell>2019</cell><cell></cell></row><row><cell></cell><cell cols="4">Anorexia Control Anorexia Control</cell></row><row><cell>Num. subjects</cell><cell>20</cell><cell>132</cell><cell>61</cell><cell>441</cell></row><row><cell>Num. submissions (posts &amp; comments)</cell><cell>7 452</cell><cell>77 514</cell><cell>24 874</cell><cell>228 878</cell></row><row><cell>Avg num. of submissions per subject</cell><cell>372,6</cell><cell>587,2</cell><cell>407,8</cell><cell>556,9</cell></row><row><cell>Avg num. of days from first to last submission</cell><cell>803,3</cell><cell>641,5</cell><cell>≈ 800</cell><cell>≈ 650</cell></row><row><cell>Avg num. words per submission</cell><cell>41,2</cell><cell>20,9</cell><cell>37,3</cell><cell>20,9</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>The distribution of the training and validation split for Task 2.</figDesc><table><row><cell></cell><cell cols="3">Anorexia No anorexia Total</cell></row><row><cell>Training</cell><cell>4 656</cell><cell cols="2">11 309 15 965</cell></row><row><cell>Validation</cell><cell>1 164</cell><cell>2 828</cell><cell>3 992</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 6</head><label>6</label><figDesc>Performance results of UMUTeam for Task 3.</figDesc><table><row><cell>Run</cell><cell>MAE</cell><cell>MZOE</cell><cell>MAE macro</cell><cell>GED</cell><cell>RS</cell><cell>ECS</cell><cell>SCS</cell><cell>WCS</cell></row><row><cell>0</cell><cell>2.366</cell><cell>0.798</cell><cell>2.833</cell><cell>3.261</cell><cell>3.285</cell><cell>2.659</cell><cell>2.771</cell><cell>2.218</cell></row><row><cell>1</cell><cell>2.227</cell><cell>0.859</cell><cell>2.286</cell><cell>2.326</cell><cell>2.911</cell><cell>2.142</cell><cell>2.560</cell><cell>2.026</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work is part of the research projects LaTe4PoliticES (PID2022-138099OB-I00) funded by MICIU/AEI/10.13039/501100011033 and the European Regional Development Fund (ERDF), a way of making Europe, and LT-SWM (TED2021-131167B-I00) funded by MICIU/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, and "Services based on language technologies for political microtargeting" (22252/PDC/23) funded by the Autonomous Community of the Region of Murcia through the Regional Support Program for the Transfer and Valorization of Knowledge and Scientific Entrepreneurship of the Seneca Foundation, Science and Technology Agency of the Region of Murcia.</p><p>Mr. Ronghao Pan is supported by the Programa Investigo grant, funded by the Region of Murcia, the Spanish Ministry of Labour and Social Economy and the European Union NextGenerationEU under the "Plan de Recuperación, Transformación y Resiliencia (PRTR)".</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Dattani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rodés-Guirao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ritchie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Roser</surname></persName>
		</author>
		<title level="m">Mental health, Our world in data</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A systematic review and meta-analysis on the prevalence of mental disorders among children and adolescents in Europe</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sacco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Camilleri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Eberhardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Umla-Runge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Newbury-Birch</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00787-022-02131-2</idno>
	</analytic>
	<monogr>
		<title level="j">European Child &amp; Adolescent Psychiatry</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Natural language processing in mental health applications using non-clinical texts</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Calvo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">N</forename><surname>Milne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Hussain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Christensen</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:17828909" />
	</analytic>
	<monogr>
		<title level="j">Natural Language Engineering</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="649" to="685" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Overview of MentalRiskES at IberLEF 2023: Early Detection of Mental Disorders Risk in Spanish</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Mármol-Romero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moreno-Muñoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Plaza-del-Arco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Molina-González</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Martín-Valdivia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Ureña-López</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Montejo-Ráez</surname></persName>
		</author>
		<ptr target="http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6564" />
	</analytic>
	<monogr>
		<title level="j">Procesamiento del Lenguaje Natural</title>
		<imprint>
			<biblScope unit="volume">71</biblScope>
			<biblScope unit="page" from="329" to="350" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Overview of eRisk 2023: Early Risk Prediction on the Internet</title>
		<author>
			<persName><forename type="first">J</forename><surname>Parapar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Martín-Rodilla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Losada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Crestani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Arampatzis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Kanoulas</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Tsikrika</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Vrochidis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Giachanou</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Aliannejadi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Vlachos</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<meeting><address><addrLine>Nature Switzerland, Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="294" to="315" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Overview of eRisk 2024: Early Risk Prediction on the Internet</title>
		<author>
			<persName><forename type="first">J</forename><surname>Parapar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Martín-Rodilla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Losada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Crestani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction. 15th International Conference of the CLEF Association</title>
				<meeting><address><addrLine>CLEF</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Overview of eRisk 2024: Early Risk Prediction on the Internet (Extended Overview)</title>
		<author>
			<persName><forename type="first">J</forename><surname>Parapar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Martín-Rodilla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Losada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Crestani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the Conference and Labs of the Evaluation Forum CLEF</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A test collection for research on depression and language use</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Losada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Crestani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference of the Cross-Language Evaluation Forum for European Languages</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="28" to="39" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Fine grain emotion analysis in Spanish using linguistic features and transformers</title>
		<author>
			<persName><forename type="first">A</forename><surname>Salmerón-Ríos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>García-Díaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Valencia-García</surname></persName>
		</author>
		<idno type="DOI">10.7717/peerj-cs.1992</idno>
	</analytic>
	<monogr>
		<title level="j">PeerJ Computer Science</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">e1992</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno>CoRR abs/1810.04805</idno>
		<ptr target="http://arxiv.org/abs/1810.04805" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">RoBERTa: A Robustly Optimized BERT Pretraining Approach</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<idno>CoRR abs/1907.11692</idno>
		<ptr target="http://arxiv.org/abs/1907.11692" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">F</forename><surname>Barbieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Camacho-Collados</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Neves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Espinosa-Anke</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2010.12421</idno>
		<title level="m">TweetEval: Unified benchmark and comparative evaluation for tweet classification</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Evaluating feature combination strategies for hate-speech detection in Spanish using linguistic features and transformers</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>García-Díaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Jiménez-Zafra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>García-Cumbreras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Valencia-García</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Complex &amp; Intelligent Systems</title>
		<imprint>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Compilation and evaluation of the Spanish SatiCorpus 2021 for satire identification using linguistic features and transformers</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>García-Díaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Valencia-García</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Complex &amp; Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="1723" to="1736" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Psychographic traits identification based on political ideology: An author analysis study on Spanish politicians&apos; tweets posted in 2020</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>García-Díaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Colomo-Palacios</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Valencia-García</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Future Generation Computer Systems</title>
		<imprint>
			<biblScope unit="volume">130</biblScope>
			<biblScope unit="page" from="59" to="74" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
