<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">SINAI Participation in SimpleText Task 2 at CLEF 2024: Zero-shot Prompting on GPT-4-Turbo for Lexical Complexity Prediction Notebook for the SimpleText Lab at CLEF 2024</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Jenny</forename><surname>Ortiz-Zambrano</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Guayaquil</orgName>
								<address>
									<addrLine>Delta Av. s/n</addrLine>
									<postCode>090510</postCode>
									<settlement>Guayaquil</settlement>
									<country key="EC">Ecuador</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">César</forename><surname>Espin-Riofrio</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Guayaquil</orgName>
								<address>
									<addrLine>Delta Av. s/n</addrLine>
									<postCode>090510</postCode>
									<settlement>Guayaquil</settlement>
									<country key="EC">Ecuador</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Arturo</forename><surname>Montejo-Ráez</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Jaén</orgName>
								<address>
									<addrLine>Las Lagunillas s/n</addrLine>
									<postCode>23071</postCode>
									<settlement>Jaén</settlement>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">SINAI Participation in SimpleText Task 2 at CLEF 2024: Zero-shot Prompting on GPT-4-Turbo for Lexical Complexity Prediction Notebook for the SimpleText Lab at CLEF 2024</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">49288D8DDD5385ECC53FD79F3849C617</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:01+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Lexical Complexity Prediction</term>
					<term>Auto-regressive models</term>
					<term>GPT-4 Turbo</term>
					<term>Prompting</term>
					<term>Zero-shot learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this article, we present our participation in Tasks 2.1 and 2.2 of the SimpleText track of CLEF 2024. Our work focused on zero-shot learning with the GPT-4 Turbo autoregressive model. To this end, we developed and evaluated several prompts to optimize the model's ability to predict lexical complexity. The results of our experiments indicate that GPT-4 Turbo can perform this task with remarkably robust performance, demonstrating its potential to assess lexical complexity effectively without the need for additional training.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Readability is defined as the quality that makes a text more accessible and easier to read <ref type="bibr" target="#b0">[1]</ref>. For many people, however, the way a text is written can be a barrier to understanding its content <ref type="bibr" target="#b1">[2]</ref>, owing to infrequent or unknown words and to lexically and semantically complex phrases that drastically complicate comprehension <ref type="bibr" target="#b2">[3]</ref>; this is especially evident for children, non-native speakers <ref type="bibr" target="#b3">[4]</ref>, and people with diverse cognitive abilities or reading disabilities <ref type="bibr" target="#b4">[5]</ref>. Whether a reader succeeds in understanding a text depends on their prior knowledge of the meaning of its words <ref type="bibr" target="#b5">[6]</ref>.</p><p>Information technologies have provided access to a wide and abundant amount of information in fields such as education, news, social networks, health, government, and science, and public access to the scientific literature in particular has grown thanks to digitalization. However, this wealth of information is not accessible to everyone, since many people face significant obstacles to understanding, such as complex grammatical structures, technical language, and long sentences. These obstacles directly affect individuals with intellectual disabilities and people with low literacy levels, and even university students, despite their academic training and specialized knowledge, can experience difficulties in reading and understanding complex texts <ref type="bibr" target="#b6">[7]</ref>. 
In the case of scientific information, a significant barrier persists that makes direct access to knowledge from the original sources difficult: one of the main obstacles is the complexity of scientific texts, which pose difficulties for readers without experience in the field because of their lack of prior specialized knowledge <ref type="bibr" target="#b7">[8]</ref>.</p><p>The SimpleText lab <ref type="bibr" target="#b8">[9]</ref> is part of the CLEF 2024 <ref type="bibr" target="#b9">[10]</ref> initiative, which promotes the systematic evaluation of information access systems through experimentation with shared tasks. SimpleText addresses the challenges of text simplification, especially in the context of making scientific information accessible. To this end, it provides relevant data and reference points, since the general public tends to avoid reliable sources such as the scientific literature because of its complex language and the prior knowledge it assumes, relying instead on superficial and derivative sources on the web and social media, which are often published for commercial or political rather than informational purposes.</p><p>Can simplifying the text help overcome these barriers to access? That is the central question this task addresses. Its objective is to generate a simplified summary of several scientific documents in response to a popular-science query. This summary should give the user an accessible overview of the specific topic, with the hope of making scientific information more understandable to a broader audience.</p><p>The main objective of this research is to demonstrate the capacity of a Transformer-based language model such as GPT-4 Turbo for lexical simplification. To achieve this, several zero-shot prompt variants have been created and evaluated. 
This approach allows us to determine which concepts in scientific abstracts require additional explanation and contextualization to help the reader understand the text more effectively.</p><p>The article follows this structure: first, a brief overview is given of the current state of the art in lexical complexity prediction and complex word identification (two names for the same natural language processing task). Section 3 presents Task 2 of the SimpleText track. Section 4 details our solution and the results obtained with the different prompt variants. Finally, Section 5 presents the conclusions and outlines some ideas for future research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>In recent years, Complex Word Identification (CWI) has attracted great interest in the scientific community and among computational linguistics researchers. Numerous initiatives have significantly boosted development and research in the field of text simplification and improving information accessibility. These include conferences, workshops, and shared tasks that address open challenges and promote advances in text simplification, thus facilitating greater access to information for various audiences.</p><p>This advancement is reflected in the development of computational semantic analysis systems, as illustrated by several shared CWI tasks at notable events such as SemEval 2016 <ref type="bibr" target="#b10">[11]</ref>, NAACL-HLT 2018 <ref type="bibr" target="#b11">[12]</ref>, ALexS at IberLEF 2020 <ref type="bibr" target="#b12">[13]</ref>, the 15th edition of SemEval with the first lexical complexity prediction task <ref type="bibr" target="#b13">[14]</ref>, TSAR-2022, the workshop on text simplification, accessibility and readability <ref type="bibr" target="#b14">[15]</ref>, and the SimpleText tasks at CLEF in 2021 <ref type="bibr" target="#b15">[16]</ref>, 2022 <ref type="bibr" target="#b16">[17]</ref>, and 2023 <ref type="bibr" target="#b7">[8]</ref>, among other relevant initiatives in this area.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">GPT-4 Turbo for solving NLP tasks</head><p>State-of-the-art deep learning models, such as BERT <ref type="bibr" target="#b17">[18]</ref>, RoBERTa <ref type="bibr" target="#b18">[19]</ref>, GPT-3 <ref type="bibr" target="#b19">[20]</ref>, and GPT-4 Turbo <ref type="bibr" target="#b20">[21]</ref>, significantly outperform traditional approaches. GPT-4 Turbo, the latest generation of transformer-based large language models developed by OpenAI<ref type="foot" target="#foot_0">1</ref>, is a prominent example of these advances. These models have demonstrated superior capabilities in various natural language processing tasks, raising the bar for performance and accuracy in the field.</p><p>GPT-3 contains 175 billion parameters. Thanks to its extensive language knowledge, enormous processing power, and capacity to learn from large amounts of online text data, GPT-3 can perform a wide variety of general natural language tasks, such as text generation and classification, with unprecedented ease <ref type="bibr" target="#b21">[22]</ref>. The model's enormous scale allows it to generate content with high quality, precision, and diversity. This development has sparked considerable interest and concern in the field of NLP, the machine learning industry in general, the media, the AI ethics communities, and civil society <ref type="bibr" target="#b22">[23]</ref>. The latest GPT-4 Turbo model from OpenAI has been recognized for its ability to process both text and images <ref type="bibr" target="#b23">[24]</ref>.</p><p>Although GPT-3 is a generative model, several approaches can be used to classify text, ranging from zero-shot classification to one-shot and few-shot classification <ref type="bibr" target="#b21">[22]</ref>. In zero-shot learning, no data samples are needed <ref type="bibr" target="#b24">[25]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">SimpleText@CLEF-2024 Tasks</head><head n="3.1.">Task 2: "What is unclear?" Difficult concept identification and explanation</head><p>The goal of this task is to identify key concepts that need to be contextualized with a definition, example, and/or use case, and to provide useful and understandable explanations for them. There are three subtasks:</p><p>1. To identify the terms in a passage of a document and predict their difficulty (easy/medium/difficult). 2. To generate a definition and an explanation only for the difficult terms. 3. To rank the provided definitions of the difficult terms in the "correct" order: manual (2), generated positive 1 (1), generated positive 2 (1), generated negative 1 (0), generated negative 2 (0).</p><p>In Task 2.1, for each passage of a document, participants should provide a list of terms with corresponding difficulty scores (easy/medium/difficult). Passages (sentences) are considered independent, so term repetition is allowed (the same term can be detected in different sentences, even in the same document).</p><p>In Task 2.2, for each difficult term (a term assigned the highest difficulty level), participants should provide a definition and an explanation, which will be evaluated both qualitatively (manual review by terminologists) and quantitatively (with text-overlap measures such as BLEU, ROUGE, etc.).</p><p>In Task 2.3, participants should rank the set of definitions provided for the difficult terms so that the "best" definitions appear higher in the list. In particular, for each term there will be one manual definition (considered the best) and two automatically generated good definitions, which should be placed at the top of the list of retrieved definitions. Quantitative metrics (for example, P@1, P@3, and rank correlation measures) will be used to evaluate participants' results. Task 2.1 and Task 2.3 can be performed independently. 
Participants that want to run experiments on Task 2.2 need to accomplish Task 2.1 first.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Proposed system</head><p>As mentioned above, our approach applies zero-shot prompts (i.e., prompts without prior examples) to the GPT-4 Turbo model and extracts predictions from the generated sequence. Details of the model configuration are presented in Table <ref type="table" target="#tab_0">1</ref>. We use the OpenAI API in Python to interact with the model, which allows us to easily integrate GPT-4 Turbo into our workflow. In addition, the OpenAI Playground offers an intuitive web interface that facilitates experimentation and the rapid generation of the code needed for our tests and developments.</p></div>
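The configuration in Table 1 maps directly onto a call to the OpenAI API from Python. The sketch below is our minimal reconstruction, not the authors' released code: the helper name, the message structure, and the mapping of Table 1's "Logprobs 5" onto the v1 client's `logprobs`/`top_logprobs` parameters are assumptions.

```python
# Minimal sketch of the GPT-4 Turbo call described in Section 4.
# Only the parameter values come from Table 1 of the paper; the
# prompt handling and client usage are illustrative assumptions.

# Request parameters copied from Table 1.
GPT4_TURBO_PARAMS = {
    "model": "gpt-4-turbo",
    "temperature": 0,
    "max_tokens": 500,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "logprobs": True,   # Table 1 lists "Logprobs 5": return the top-5
    "top_logprobs": 5,  # log-probabilities per generated token
    "stop": "###",      # each prompt example ends with "###"
}

def predict_complexity(prompt: str, api_key: str) -> str:
    """Send one zero-shot prompt to GPT-4 Turbo and return the raw reply.

    Requires the `openai` package (v1 client); imported lazily so this
    module can be loaded and inspected without it installed.
    """
    from openai import OpenAI
    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        **GPT4_TURBO_PARAMS,
    )
    return response.choices[0].message.content
```

With temperature 0 the model is effectively deterministic, which suits an annotation task where the same passage should always receive the same term labels.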
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Experiments with zero-shot learning</head><p>We run the model with zero-shot learning, in which no examples are provided for learning. Below is a sample input passage followed by the output generated with prompt PRM_ZS_TASK2_V1 (shown in Table <ref type="table" target="#tab_0">1</ref>):</p><p>Novel technological advances in mobile devices and applications can be exploited in wildfire confrontation, enabling end-users to easily conduct several everyday tasks, such as access to data and information, sharing of intelligence and coordination of personnel and vehicles.</p><p>Response: json
[
  {
    "word": "exploited",
    "difficulty": "m",
    "definition": "make full use of and derive benefit from (a resource)",
    "explanation": "In the context, 'exploited' refers to utilizing the new technological capabilities of mobile devices and applications to their full advantage, specifically in managing and combating wildfires.",
    "use_case": "The novel technologies can be exploited to improve the efficiency and effectiveness of wildfire management."
  },
  {
    "word": "confrontation",
    "difficulty": "m",
    "definition": "a hostile or argumentative meeting or situation between opposing parties",
    "explanation": "Here, 'confrontation' is used metaphorically to describe the act of dealing with wildfires, suggesting a direct and active engagement with the fires.",
    "use_case": "Mobile technologies are being used in the confrontation of wildfires, aiding in direct engagement and management."
  },
  {
    "word": "coordination",
    "difficulty": "m",
    "definition": "the organization of the different elements of a complex body or activity so as to enable them to work together effectively",
    "explanation": "In this context, 'coordination' refers to the organized management of personnel and vehicles to tackle wildfires efficiently, facilitated by mobile technology.",
    "use_case": "Effective coordination of personnel and vehicles is crucial for the successful management of wildfire incidents."
  }
]
###</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Prompt PRM_ZS_TASK2_V2:</head><p>Depending on the text, select up to 5 terms considered complex within the sentence, then assign the complexity of the terms to easy, medium and difficult using the first letter of each word, for example if the complexity is easy add the letter "e", If the complexity of the term is medium, add the letter "m", and if the complexity of the term is difficult, you should add the letter "d". Then, from each complex term generate its definition, an explanation and a use case according to the context of the sentence. Finally, provide the response in the following JSON format:
{
  "word": &lt;word&gt;,
  "difficulty" &lt;difficult_word&gt;,
  "definition": "&lt;definition&gt;",
  "explanation" : &lt;explanations&gt;
}
##
Context: Drug discovery
Sentence: A pharmacophore analysis approach was used to investigate and compare different classes of compounds relevant to the drug discovery process (specifically, drug molecules, compounds in high throughput screening libraries, combinatorial chemistry building blocks and nondrug molecules).</p><p>Response: json
[
  {
    "word": "pharmacophore",
    "difficulty": "d",
    "definition": "A set of structural features in a molecule that is recognized by a biological macromolecule and is responsible for its biological activity.",
    "explanation": "In drug discovery, a pharmacophore analysis is used to identify these structural features which can interact with a biological target, such as a protein, to induce a therapeutic effect. This helps in designing or identifying new drugs that can mimic or block the biological activity.",
    "use_case": "In the sentence, a pharmacophore analysis approach helps in comparing different classes of compounds to find potential drug candidates."
  },
  {
    "word": "combinatorial chemistry",
    "difficulty": "d",
    "definition": "A method in chemistry where different combinations of building blocks are systematically mixed to generate a large number of different compounds.",
    "explanation": "Combinatorial chemistry is used in drug discovery to rapidly synthesize and screen large libraries of compounds for potential drug activity. It allows researchers to explore a wide variety of chemical structures.",
    "use_case": "In the sentence, combinatorial chemistry building blocks are mentioned as part of the classes of compounds relevant to drug discovery."
  },
  {
    "word": "high throughput screening",
    "difficulty": "d",
    "definition": "A method used in drug discovery to quickly conduct millions of chemical, genetic, or pharmacological tests.",
    "explanation": "High throughput screening (HTS) is crucial in the early stages of drug discovery. It allows researchers to quickly identify active compounds, antibodies, or genes that modulate a particular biomolecular pathway.",
    "use_case": "The sentence refers to compounds in high throughput screening libraries; these are tested in large-scale assays to find promising drug candidates."
  },
  {
    "word": "nondrug",
    "difficulty": "m",
    "definition": "Substances or compounds that are not considered drugs and do not have therapeutic effects.",
    "explanation": "In the context of drug discovery, nondrug molecules are those that are used as controls or are part of the compound libraries but are not expected to lead to a therapeutic drug.",
    "use_case": "The sentence compares nondrug molecules with drug molecules and other compounds to understand their relevance or irrelevance in the drug discovery process."
  },
  {
    "word": "compounds",
    "difficulty": "e",
    "definition": "Chemical substances composed of two or more elements chemically bonded together.",
    "explanation": "In the context of the sentence, 'compounds' refers to various chemical entities that are being investigated in the drug discovery process to find those with potential as drug molecules.",
    "use_case": "The sentence discusses investigating different classes of compounds in relation to their relevance to drug discovery."
  }
]
###</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Prompt PRM_ZS_TASK2_V3</head><p>Identify up to a total of 5 complex words in the sentence and determine the difficulty of each complex word by classifying it into the category of "easy", "medium" or "difficult". For the "easy" difficulty place the letter "e", for the "medium" difficulty place the letter "m", and for the "hard" difficulty place the letter "d". Furthermore, in the case of words identified as complex taking into account the context of the sentence, three things must be generated: a definition, an explanation, and a use case. Finally, provide the response in the following JSON format:
{
  "word": &lt;word&gt;,
  "difficulty" :&lt;hard_word&gt;,
  "definition": "&lt;definition&gt;",
  "explanation" : &lt;explanations&gt;
}
##
Context: Humanoid robots
Sentence: In many situations, users walk on typical paths between specific destinations at which the service of a mobile robot is needed.</p><p>Response: json
[
  {
    "word": "humanoid",
    "difficulty": "m",
    "definition": "having an appearance or character resembling that of a human.",
    "explanation": "In the context of robots, 'humanoid' refers to robots designed to look and sometimes behave like humans, often used to interact with environments built for human use.",
    "use_case": "The humanoid robot was programmed to assist customers in the store."
  },
  {
    "word": "destinations",
    "difficulty": "m",
    "definition": "the place to which someone or something is going or being sent.",
    "explanation": "In the given sentence, 'destinations' refers to the specific places where the robot is required to perform services.",
    "use_case": "The tourists set off early in the morning to reach their destinations before sunset."
  },
  {
    "word": "mobile",
    "difficulty": "e",
    "definition": "able to move or be moved freely or easily.",
    "explanation": "Here, 'mobile' describes a robot that is capable of moving around rather than being fixed in one location.",
    "use_case": "The company developed a mobile app to increase accessibility for users on the go."
  }
]
###</p></div>
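Since every prompt variant requests the answer as JSON terminated by "###", converting a model reply into Task 2.1 rows is a small parsing step. The helper below is a hypothetical sketch (the function name and the row layout are ours, not from the paper); it strips the leading "json" tag and the "###" stop marker seen in the sample responses before decoding.

```python
import json

def parse_response(reply: str, snt_id: str) -> list[tuple[str, str, str]]:
    """Convert one model reply into (snt_id, term, difficulty) rows.

    The sample replies start with a literal "json" tag and end with the
    "###" stop marker, so both are stripped before json.loads().
    """
    text = reply.strip()
    if text.startswith("json"):
        text = text[len("json"):]
    text = text.rstrip().removesuffix("###").rstrip()
    entries = json.loads(text)
    return [(snt_id, e["word"], e["difficulty"]) for e in entries]

# Abbreviated sample reply in the format shown above.
sample = '''json [
  {"word": "exploited", "difficulty": "m",
   "definition": "make full use of and derive benefit from (a resource)",
   "explanation": "...", "use_case": "..."},
  {"word": "confrontation", "difficulty": "m",
   "definition": "a hostile or argumentative meeting",
   "explanation": "...", "use_case": "..."}
] ###'''

rows = parse_response(sample, "G01.1_1000902583_1")
# rows == [("G01.1_1000902583_1", "exploited", "m"),
#          ("G01.1_1000902583_1", "confrontation", "m")]
```

Because sentences are treated as independent in Task 2.1, the same term may legitimately appear in rows for several different `snt_id` values.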
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Results</head><p>Table <ref type="table" target="#tab_1">2</ref> presents various results for sub-task 2.1. To support this task automatically, we have proposed an approach that explores the potential of GPT-4 Turbo, an advanced large language model (LLM) developed by OpenAI, as in the work done by <ref type="bibr" target="#b20">[21]</ref>. On this occasion, we seek to evaluate the improvements and additional capabilities of GPT-4 Turbo in comparison with our proposal for the same task at the SimpleText 2023 workshop, where the results obtained were outstanding, demonstrating the effectiveness of the GPT-3 model, text-davinci-003 version <ref type="bibr" target="#b25">[26]</ref>.</p><p>Table <ref type="table" target="#tab_2">3</ref> presents several results of sub-task 2.2, obtained by applying GPT-4 Turbo with zero-shot learning according to the PRM_ZS_TASK2_V2 prompt. As we can see, the table illustrates an example corresponding to Snt_id G01.1_1000902583_1. In the solution proposed for sub-task 2.2, the model generates a definition, an explanation, and a use case, broadly illustrating the complexity of the terms for different groups of users and making suggestions for definitions, explanations, and use cases based on the context of the sentence (abstract).</p><p>
In Table <ref type="table" target="#tab_3">4</ref>, we present the official results published by the organizers <ref type="bibr" target="#b26">[27]</ref>, including the scores obtained in our participation in sub-tasks 2.1 and 2.2. The meaning of each column is the following:</p><p>• recall_overall: the proportion of terms (regardless of difficulty) that were found.</p><p>• recall_average: the average of the per-sentence recall of terms.</p><p>• recall_difficult_terms: the proportion of difficult terms that were found.</p><p>• precision_difficult: the precision of terms that were labeled as difficult.</p><p>• bleu_nx: the BLEU score computed with n-grams n = 1, 2, 3, 4.</p><p>According to the results presented in Table <ref type="table" target="#tab_3">4</ref>, the UboNLP_Task2.1_phi3-oneshot team showed the best overall performance in terms of recall_overall (0.5424) and recall_average (0.5620), which indicates a high ability to retrieve relevant terms. However, its precision on difficult terms is relatively low (0.3743). Sharingans_Task2.2_GPT also performs well, with a recall_overall of 0.4722 and a recall_average of 0.5302; in addition, it has a high recall_difficult_terms (0.5448), demonstrating a good ability to handle difficult terms.</p><p>Regarding performance on the difficult terms, the AIIRLab_Task2.2_LLaMAFT and SINAI_task_2_PRM_ZS_TASK2_V3 teams have a precision_difficult_terms of 1.0000 and 0.8333 respectively, which indicates that these models are very precise on difficult terms, although their recall_overall and recall_average are not as high as those of other teams.</p><p>Regarding BLEU scores, the unipd_t21t22_chatgpt team and its variants (mod1 and mod2) achieve the best bleu_n1_average, bleu_n2_average, bleu_n3_average, and bleu_n4_average, demonstrating good performance in terms of n-gram precision in text generation. 
In the case of AIIRLab_Task2.2_Mistral, this team shows a good balance, with a recall_overall of 0.4118 and a precision_difficult_terms of 0.4907, suggesting that its model is quite robust both in general and on difficult terms. The SINAI variants show differences in their results, with SINAI_task_2_PRM_ZS_TASK2_V2 performing best among them, with a recall_overall of 0.1556 and a precision_difficult_terms of 0.7746. Some teams have values of 0 in almost all metrics, which indicates very low performance or problems in the implementation of their models.</p></div>
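The recall and precision columns defined above can be reconstructed in a few lines. The sketch below is our own reading of those definitions, not the organizers' evaluation script; it assumes gold and predicted annotations are dictionaries keyed by sentence id, with difficulty labels "e", "m", "d".

```python
def evaluate(gold: dict, pred: dict) -> dict:
    """Reconstruct the Table 4 recall/precision metrics (our reading).

    gold: snt_id -> {term: difficulty} reference annotations
    pred: snt_id -> {term: difficulty} system output
    """
    found = total = 0
    per_sentence = []                # per-sentence recall values
    diff_found = diff_total = 0      # recall over difficult ("d") terms
    diff_correct = diff_pred = 0     # precision of "d" predictions
    for snt_id, terms in gold.items():
        hits = sum(1 for t in terms if t in pred.get(snt_id, {}))
        found += hits
        total += len(terms)
        per_sentence.append(hits / len(terms))
        for t, d in terms.items():
            if d == "d":
                diff_total += 1
                diff_found += t in pred.get(snt_id, {})
    for snt_id, terms in pred.items():
        for t, d in terms.items():
            if d == "d":
                diff_pred += 1
                diff_correct += gold.get(snt_id, {}).get(t) == "d"
    return {
        "recall_overall": found / total,
        "recall_average": sum(per_sentence) / len(per_sentence),
        "recall_difficult_terms": diff_found / diff_total,
        "precision_difficult": diff_correct / diff_pred,
    }

# Toy example (made-up annotations, not data from the task).
gold = {"s1": {"pharmacophore": "d", "compounds": "e"},
        "s2": {"reagents": "m", "crispr": "d"}}
pred = {"s1": {"pharmacophore": "d", "compounds": "e"},
        "s2": {"crispr": "d", "coordination": "d"}}
scores = evaluate(gold, pred)
# recall_overall = 3/4; recall_average = (1.0 + 0.5)/2 = 0.75
# recall_difficult_terms = 2/2 = 1.0; precision_difficult = 2/3
```

Note how the toy example reproduces the pattern discussed above: a system can reach perfect recall on difficult terms while its precision on "d" labels stays below 1 because of spurious difficult predictions.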
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and Future Work</head><p>The model has proven able to generate robust responses based on the text or instruction (prompt) provided. We observe that GPT-4 Turbo analyzes the text, uses its extensive knowledge to identify words that can be considered complex, and categorizes them according to their complexity. Our preliminary analysis evaluates the model's ability to understand and generate arguments in specific contexts. The results show that GPT-4 Turbo is highly competent in natural language processing tasks such as lexical complexity prediction, identifying and categorizing complex terms accurately and consistently. We have applied the GPT-4 Turbo model to several solutions for sub-tasks 2.1 and 2.2, where it has demonstrated robust performance in natural language processing tasks, specifically lexical simplification and complex word identification. Not only has it successfully tackled these tasks, it has also generated detailed illustrations by creating definitions, explanations, and use cases based on the context of the sentence (text). By identifying complex words and phrases and offering simpler versions, it facilitates access to scientific sources in an understandable way, helping readers cope with the complexity of terms across different user groups, which makes it very useful for a general audience, and especially valuable for readers with reading difficulties or cognitive disabilities.</p><p>The results showed a variety of performances among the participating teams. Some teams, such as UboNLP_Task2.1_phi3-oneshot and Sharingans_Task2.2_GPT, stand out for their overall performance and ability to handle difficult terms, while other teams show specific strengths such as accuracy on difficult terms or good BLEU scores. 
However, there are also teams with significantly low performances, which could indicate the need for improvements in their approaches or implementations.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>GPT-4 Turbo model details</figDesc><table><row><cell>Parameter</cell><cell>Value</cell></row><row><cell>Model</cell><cell>gpt-4-turbo</cell></row><row><cell>Temperature</cell><cell>0</cell></row><row><cell>Max_tokens</cell><cell>500</cell></row><row><cell>Top_p</cell><cell>1</cell></row><row><cell cols="2">Frequency Penalty 0</cell></row><row><cell>Presence Penalty</cell><cell>0</cell></row><row><cell>Logprobs</cell><cell>5</cell></row><row><cell>Stop</cell><cell>"###"</cell></row><row><cell>Prompt PRM_ZS_TASK2_V1:</cell><cell></cell></row><row><cell cols="2">Performing the following actions according to a given text:</cell></row><row><cell cols="2">Select a number from 1 to 5 terms within the sentence named text</cell></row><row><cell cols="2">Assign the difficulty of the words: easy, medium and difficult, giving</cell></row><row><cell cols="2">the initial letter of each difficulty as an answer ([e]fácil/[m]edium/[d]ifficul)</cell></row><row><cell cols="2">Give the definition of each difficult term, add an explanation and use</cell></row><row><cell>case according to context of the sentences.</cell><cell></cell></row><row><cell>Provide the response in following JSON format:</cell><cell></cell></row><row><cell>{</cell><cell></cell></row><row><cell>"word": &lt;word&gt;,</cell><cell></cell></row><row><cell>"difficulty": &lt;word_difficult&gt;,</cell><cell></cell></row><row><cell>"definition": "&lt;definition&gt;",</cell><cell></cell></row><row><cell>"explanation": &lt;explanation&gt;</cell><cell></cell></row><row><cell>}</cell><cell></cell></row><row><cell>Context: {{context}}</cell><cell></cell></row><row><cell>Text : {{text}}</cell><cell></cell></row><row><cell>Response: 
.</cell><cell></cell></row><row><cell>##</cell><cell></cell></row><row><cell>Context: Digital assistant</cell><cell></cell></row><row><cell>Text : Abstract</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Predictions generated by applying GPT-4 Turbo with zero-shot learning in Sub-task 2.1-Prompt PRM_ZS_TASK2_V1.</figDesc><table><row><cell>Manual</cell><cell>Snt_id</cell><cell>Term</cell><cell>Difficulty</cell></row><row><cell>0</cell><cell>G01.1_1000902583_1</cell><cell>exploited</cell><cell>d</cell></row><row><cell>0</cell><cell>G01.1_1000902583_1</cell><cell>confrontation</cell><cell>m</cell></row><row><cell>0</cell><cell>G01.1_1000902583_1</cell><cell>coordination</cell><cell>m</cell></row><row><cell>0</cell><cell>G05.1_2914002216_4</cell><cell>CRISPR/Cas9</cell><cell>d</cell></row><row><cell>0</cell><cell>G05.1_2914002216_4</cell><cell>reagents</cell><cell>m</cell></row><row><cell>0</cell><cell cols="2">G05.1_2914002216_4 high-content screen</cell><cell>d</cell></row><row><cell>0</cell><cell>G07.2_2773680786_5</cell><cell>conspiracy</cell><cell>m</cell></row><row><cell>0</cell><cell>G07.2_2773680786_5</cell><cell>emergent</cell><cell>m</cell></row><row><cell>0</cell><cell>G07.2_2773680786_5</cell><cell>pervasive</cell><cell>d</cell></row><row><cell>0</cell><cell>G11.1_2946157960_5</cell><cell>consideration</cell><cell>m</cell></row><row><cell>0</cell><cell>G11.1_2946157960_5</cell><cell>applications</cell><cell>e</cell></row><row><cell>0</cell><cell>G11.1_2946157960_5</cell><cell>deployment</cell><cell>m</cell></row><row><cell>0</cell><cell>G11.1_2946157960_5</cell><cell>networked</cell><cell>d</cell></row><row><cell>0</cell><cell>G11.1_2946157960_5</cell><cell>emission</cell><cell>m</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Predictions generated by applying GPT-4 Turbo with zero-shot learning in Sub-task 2.2 -Prompt PRM_ZS_TASK2_V1 Novel technological advances in mobile devices and applications can be exploited in wildfire confrontation, enabling end-users to easily conduct several everyday tasks, such as access to data and information, sharing of intelligence and coordination of personnel and vehicles.</figDesc><table><row><cell>Snt_id: G01.1_1000902583_1</cell><cell></cell><cell></cell></row><row><cell>Abstract: term</cell><cell cols="2">difficulty Definition/Explication</cell></row><row><cell>exploited</cell><cell>d</cell><cell>Definition: make full use of and derive benefit from (a</cell></row><row><cell></cell><cell></cell><cell>resource).</cell></row><row><cell></cell><cell></cell><cell>Explication: In the context of the sentence, exploited</cell></row><row><cell></cell><cell></cell><cell>refers to utilizing the new technological capabilities of</cell></row><row><cell></cell><cell></cell><cell>mobile devices and applications to their fullest potential</cell></row><row><cell></cell><cell></cell><cell>in managing and combating wildfires. 
Use case: The</cell></row><row><cell></cell><cell></cell><cell>novel technologies can be exploited to improve the effi-</cell></row><row><cell></cell><cell></cell><cell>ciency and effectiveness of wildfire management.</cell></row><row><cell>confrontation</cell><cell>m</cell><cell>Definition: a hostile or argumentative meeting or situa-</cell></row><row><cell></cell><cell></cell><cell>tion between opposing parties.</cell></row><row><cell></cell><cell></cell><cell>Explication: Here, confrontation is used metaphorically</cell></row><row><cell></cell><cell></cell><cell>to describe the act of dealing with or managing wildfires,</cell></row><row><cell></cell><cell></cell><cell>which are seen as the opposing party to human safety</cell></row><row><cell></cell><cell></cell><cell>and property. Use case: Technological advances aid in</cell></row><row><cell></cell><cell></cell><cell>the confrontation of wildfires by providing better tools</cell></row><row><cell></cell><cell></cell><cell>for managing the crisis.</cell></row><row><cell>coordination</cell><cell>m</cell><cell>Definition: the organization of the different elements</cell></row><row><cell></cell><cell></cell><cell>of a complex body or activity so as to enable them to</cell></row><row><cell></cell><cell></cell><cell>work together effectively.</cell></row><row><cell></cell><cell></cell><cell>Explication: In this context, coordination refers to the</cell></row><row><cell></cell><cell></cell><cell>organized management of personnel and vehicles, ensur-</cell></row><row><cell></cell><cell></cell><cell>ing they operate in a synchronized and efficient manner</cell></row><row><cell></cell><cell></cell><cell>during wildfire emergencies. Use case: Effective coordi-</cell></row><row><cell></cell><cell></cell><cell>nation of personnel and vehicles is crucial for rapid and</cell></row><row><cell></cell><cell></cell><cell>efficient wildfire suppression.</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4</head><label>4</label><figDesc>SimpleText 2024 official results for Task 2</figDesc><table><row><cell>runid</cell><cell>recall_</cell><cell>recall_</cell><cell>recall_</cell><cell>precision</cell><cell>bleu_</cell><cell>bleu_</cell><cell>bleu_</cell><cell>bleu_</cell></row><row><cell></cell><cell>overall</cell><cell>average</cell><cell>difficult</cell><cell>difficult</cell><cell>n1_</cell><cell>n2_</cell><cell>n3_</cell><cell>n4_</cell></row><row><cell></cell><cell></cell><cell></cell><cell>_terms</cell><cell>_terms</cell><cell>average</cell><cell>average</cell><cell>average</cell><cell>average</cell></row><row><cell>AIIRLab_Task2.2_LLaMA</cell><cell>0.2792</cell><cell>0.3011</cell><cell>0.2642</cell><cell>0.6667</cell><cell>0.2883</cell><cell>0.1519</cell><cell>0.0497</cell><cell>0.0191</cell></row><row><cell>AIIRLab_Task2.2_LLaMAFT</cell><cell>0.0069</cell><cell>0.0056</cell><cell>0.0047</cell><cell>1.0000</cell><cell>0.2405</cell><cell>0.1171</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>AIIRLab_Task2.2_Mistral</cell><cell>0.4118</cell><cell>0.4415</cell><cell>0.1863</cell><cell>0.4907</cell><cell>0.2610</cell><cell>0.1338</cell><cell>0.0395</cell><cell>0.0128</cell></row><row><cell>Dajana&amp;Kathy_SimpleText</cell><cell>0.0118</cell><cell>0.0114</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>_Task2.2_LLAMA2_13B_CHAT</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>FRANE_AND_ANDREA</cell><cell>0.0076</cell><cell>0.0066</cell><cell>0.0094</cell><cell>0.3636</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>_SimpleText_Task2.2_LLAMA2</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></r
ow><row><cell>_13B_CHAT</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>ruby</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>Sharingans_Task2.2_GPT</cell><cell>0.4722</cell><cell>0.5302</cell><cell>0.5448</cell><cell>0.5954</cell><cell>0.2257</cell><cell>0.1039</cell><cell>0.0300</cell><cell>0.0160</cell></row><row><cell>SINAI_task_2_PRM_ZS_</cell><cell>0.0868</cell><cell>0.0872</cell><cell>0.1014</cell><cell>0.5244</cell><cell>0.2545</cell><cell>0.1579</cell><cell>0.0821</cell><cell>0.0578</cell></row><row><cell>TASK2_V1</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>SINAI_task_2_PRM_ZS_</cell><cell>0.1556</cell><cell>0.1636</cell><cell>0.1297</cell><cell>0.7746</cell><cell>0.2774</cell><cell>0.1574</cell><cell>0.0630</cell><cell>0.0443</cell></row><row><cell>TASK2_V2</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>SINAI_task_2_PRM_ZS_</cell><cell>0.0951</cell><cell>0.1045</cell><cell>0.0472</cell><cell>0.8333</cell><cell>0.2144</cell><cell>0.1113</cell><cell>0.0377</cell><cell>0.0229</cell></row><row><cell>TASK2_V3</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>team1_Petra_and_Regina_</cell><cell>0.0042</cell><cell>0.0042</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>Task2_ST</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Tomislav&amp;Rowan_Task2.2_</cell><cell>0.0069</cell><cell>0.0040</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.000
0</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>LLAMA2_13B_CHAT</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Tomislav&amp;Rowan_Task2.2_</cell><cell>0.0083</cell><cell>0.0084</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>LLAMA2_13B_CHAT_1</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>UAms_Task2-1_RareIDF</cell><cell>0.0854</cell><cell>0.0942</cell><cell>0.0259</cell><cell>0.0894</cell><cell>0.0001</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>UboNLP_Task2.1_phi3-oneshot</cell><cell>0.5424</cell><cell>0.5620</cell><cell>0.3160</cell><cell>0.3743</cell><cell>0.0011</cell><cell>0.0000</cell><cell>0.0000</cell><cell>0.0000</cell></row><row><cell>unipd_t21t22_chatgpt</cell><cell>0.1340</cell><cell>0.1400</cell><cell>0.0825</cell><cell>0.6250</cell><cell>0.3045</cell><cell>0.1851</cell><cell>0.0905</cell><cell>0.0507</cell></row><row><cell>unipd_t21t22_chatgpt_mod1</cell><cell>0.2194</cell><cell>0.2371</cell><cell>0.1981</cell><cell>0.5957</cell><cell>0.3060</cell><cell>0.1783</cell><cell>0.0802</cell><cell>0.0430</cell></row><row><cell>unipd_t21t22_chatgpt_mod2</cell><cell>0.3146</cell><cell>0.3155</cell><cell>0.3420</cell><cell>0.6905</cell><cell>0.0302</cell><cell>0.0069</cell><cell>0.0031</cell><cell>0.0000</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://openai.com/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Acknowledgments</head><p>This work has been partially supported by projects CONSENSO (PID2021-122263OB-C21), MODERATES (TED2021-130145B-I00), SocialTOX (PDC2022-133146-C21) funded by the Spanish Government.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">The principles of readability</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">H</forename><surname>Dubay</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
	<note>Online Submission</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Making it simplext: Implementation and evaluation of a text simplification system for Spanish</title>
		<author>
			<persName><forename type="first">H</forename><surname>Saggion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Štajner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mille</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Drndarevic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Accessible Computing (TACCESS)</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="1" to="36" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Lexical complexity prediction: An overview</title>
		<author>
			<persName><forename type="first">K</forename><surname>North</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zampieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shardlow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="1" to="42" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Simplifying drug package leaflets written in Spanish by using word embedding</title>
		<author>
			<persName><forename type="first">I</forename><surname>Segura-Bedmar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Martinez</surname></persName>
		</author>
		<idno type="DOI">10.1186/s13326-017-0156-7</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Biomedical Semantics</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Making it simplext: Implementation and evaluation of a text simplification system for Spanish</title>
		<author>
			<persName><forename type="first">H</forename><surname>Saggion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Štajner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mille</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Drndarevic</surname></persName>
		</author>
		<idno type="DOI">10.1145/2738046</idno>
		<ptr target="https://doi.org/10.1145/2738046" />
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Access. Comput</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Lecturas adaptadas a la enseñanza del español como l2: variables lingüísticas para la determinación del nivel de legibilidad, La evaluación en el aprendizaje y la enseñanza del español como</title>
		<author>
			<persName><forename type="first">A</forename><surname>Anula</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">LE L</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="162" to="170" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Hulat-ALexS CWI Task-CWI for Language and Learning Disabilities Applied to University Educational Texts</title>
		<author>
			<persName><forename type="first">R</forename><surname>Alarcón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Moreno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Martínez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2020)</title>
				<meeting>the Iberian Languages Evaluation Forum (IberLEF 2020)<address><addrLine>Malaga, Spain</addrLine></address></meeting>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Azarbonyad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bertin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Augereau</surname></persName>
		</author>
		<title level="m">Overview of the CLEF 2023 SimpleText Task 2: Difficult concept identification and explanation</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Overview of CLEF 2024 SimpleText track on improving access to scientific texts</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Fifteenth International Conference of the CLEF Association (CLEF 2024)</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">L</forename><surname>Goeuriot</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum, CLEF 2024</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</editor>
		<meeting><address><addrLine>Grenoble, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<author>
			<persName><forename type="first">G</forename><surname>Paetzold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Specia</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/S16-1085</idno>
	</analytic>
	<monogr>
		<title level="m">SemEval 2016 Task 11: Complex Word Identification</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="560" to="569" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A Report on the Complex Word Identification Shared Task</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Yimam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Biemann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Malmasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Paetzold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Specia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Štajner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zampieri</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/W18-0507</idno>
		<ptr target="https://aclanthology.org/W18-0507" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, Association for Computational Linguistics</title>
				<meeting>the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, Association for Computational Linguistics<address><addrLine>New Orleans, Louisiana</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="66" to="78" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Overview of ALexS 2020: First workshop on lexical analysis at SEPLN</title>
		<author>
			<persName><forename type="first">J</forename><surname>Ortiz-Zambrano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Montejo-Ráez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Iberian Languages Evaluation Forum</title>
				<meeting>the Iberian Languages Evaluation Forum<address><addrLine>IberLEF</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">2664</biblScope>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">SemEval-2021 task 1: Lexical complexity prediction</title>
		<author>
			<persName><forename type="first">M</forename><surname>Shardlow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Evans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">H</forename><surname>Paetzold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zampieri</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.semeval-1.1</idno>
		<ptr target="https://aclanthology.org/2021.semeval-1.1" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Association for Computational Linguistics</title>
				<meeting>the 15th International Workshop on Semantic Evaluation (SemEval-2021), Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Štajner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Saggion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ferrés</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shardlow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">C</forename><surname>Sheang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>North</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zampieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Xu</surname></persName>
		</author>
		<title level="m">Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022</title>
				<meeting>the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022<address><addrLine>TSAR-</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>Proceedings of the Workshop on Text Simplification, Accessibility, and Readability</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Overview of SimpleText CLEF 2021 workshop and pilot tasks</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bellot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Braslavski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kamps</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mothe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nurbakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Ovchinnikova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sanjuan</surname></persName>
		</author>
		<idno>CEUR-WS.org</idno>
	</analytic>
	<monogr>
		<title level="m">12th Conference and Labs of the Evaluation Forum (CLEF 2021)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">2936</biblScope>
			<biblScope unit="page" from="2212" to="2227" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Overview of the CLEF 2022 SimpleText lab: Automatic simplification of scientific texts</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sanjuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kamps</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Huet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Ovchinnikova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nurbakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Araújo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hannachi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mathurin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bellot</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Barrón-Cedeño</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Da San Martino</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Degli Esposti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Macdonald</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Pasi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Hanbury</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Potthast</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="470" to="494" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1810.04805</idno>
		<title level="m">BERT: Pre-training of deep bidirectional transformers for language understanding</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1907.11692</idno>
		<title level="m">RoBERTa: A robustly optimized BERT pretraining approach</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Language models are few-shot learners</title>
		<author>
			<persName><forename type="first">T</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ryder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subbiah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neelakantan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shyam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="1877" to="1901" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">K</forename><surname>Shahandashti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sivakumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Mohajer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Belle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Lethbridge</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2401.17991</idno>
		<title level="m">Evaluating the effectiveness of gpt-4 turbo in creating defeaters for assurance cases</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Kublik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Saboo</surname></persName>
		</author>
		<title level="m">GPT-3: Building innovative NLP products using large language models</title>
		<imprint>
			<publisher>O&apos;Reilly Media</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">GPT-3 and InstructGPT: technological dystopianism, utopianism, and &quot;Contextual&quot; perspectives in AI ethics and industry</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI and Ethics</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="53" to="64" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">GPT-4 Turbo with Vision fails to outperform text-only GPT-4 Turbo in the Japan Diagnostic Radiology Board Examination</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Hirano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hanaoka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Nakao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Miki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kikuchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Nakamura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Nomura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Yoshikawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Abe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Japanese Journal of Radiology</title>
		<imprint>
			<biblScope unit="page" from="1" to="9" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Bhattacharjee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Moraffah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Garland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2405.04793</idno>
		<title level="m">Zero-shot LLM-guided counterfactual generation for text</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Ortiz-Zambrano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Espin-Riofrio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Montejo-Ráez</surname></persName>
		</author>
		<title level="m">SINAI participation in SimpleText Task 2 at CLEF 2023: GPT-3 in lexical complexity prediction for general audience</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Overview of the CLEF 2024 SimpleText task 2: Identify and explain difficult concepts</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Di Nunzio</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2024)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
