<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>MODERATES: MODERAtion of ConTEnts in Social networks using Language Technologies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>L. Alfonso Ureña-López</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eugenio Martínez Cámara</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Salud María Jiménez-Zafra</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Miguel Ángel García-Cumbreras</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José M. Perea-Ortega</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Arturo Montejo-Ráez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manuel García-Vega</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>M. Dolores Molina González</string-name>
          <email>mdmolina@ujaen.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fernando Martínez-Santiago</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manuel Carlos Díaz-Galiano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>M. Teresa Martín-Valdivia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Advanced Studies Center in ICT (CEATIC), Universidad de Jaén</institution>
          ,
          <addr-line>Campus Las Lagunillas, 23071, Jaén</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer and Telematic Systems, Universidad de Extremadura</institution>
          ,
          <addr-line>Avda. Elvas s/n. 06006, Badajoz</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The rise of digital platforms has led to an increase in social interactions, but it has also given rise to inappropriate behavior on the Web, including the spread of hate speech and offensive language. This poses a threat to freedom of expression, exposing users to denigrating content based on personal characteristics. Such communication can have harmful psychological effects, especially on vulnerable communities. Detecting and preventing hate speech has become a crucial area of research in Natural Language Processing (NLP) and Machine Learning (ML). This project aims to develop effective solutions using advanced ML and NLP techniques to assess the offensiveness and toxicity of Spanish text. A cloud service will automatically label and classify texts, providing an interface for detecting and substituting inappropriate language. The prototype aims to assist users in writing non-offensive content and to serve as a real-time monitoring and filtering tool for social media, contributing to a more inclusive and fair discourse.</p>
      </abstract>
      <kwd-group>
        <kwd>Natural Language Processing</kwd>
        <kwd>NLP</kwd>
        <kwd>human language technologies</kwd>
        <kwd>language modeling</kwd>
        <kwd>machine learning</kwd>
        <kwd>offensive language and hate speech</kwd>
        <kwd>sentiment and emotion analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The significant increase in social interactions through digital platforms has amplified the presence of inappropriate behavior on the Web, such as the propagation of hate speech or the use of offensive language among the users of these platforms. Freedom of expression in these media has exposed their users to publications that are sometimes used to denigrate, insult or hurt with foul or rude language based on gender, race, religion, ideology or other personal characteristics. Unfortunately, this type of communication can be very harmful and can cause negative psychological effects among users, especially among vulnerable communities such as children and teenagers, women, LGTBI people, immigrants, and religious and cultural communities.</p>
      <p>Governments and online platforms have been addressing this problem for some years now, and measures are continuously being adopted in the form of laws and policies that contribute to fostering healthy coexistence in these media. For example, since 2013, the Council of Europe has promoted the "No Hate Speech" movement<sup>1</sup> with the aim of mobilizing young people to combat hate speech and promote human rights on the Internet. In May 2016, the European Commission reached an agreement with Facebook, Microsoft, Twitter and YouTube to create a "Code of Conduct on countering illegal hate speech online"<sup>2</sup>. Based on this agreement, the "Protocol on combating illegal hate speech online" has been drafted<sup>3</sup>.</p>
      <p>Between 2018 and 2020, other platforms such as Instagram, Snapchat, Dailymotion and TikTok joined this Code of Conduct. The 2019 report<sup>4</sup> highlighted that threats, insults and discrimination are counted as the most repeated criminal acts, with the Internet (54.9%) and social networks (17.2%) being the most used means to commit these actions. This situation led the Spanish Parliament to approve in 2020 a law to prevent the spread of hate online. The 2020 report<sup>5</sup> maintains the trend, with the Internet (45%) and social networks (22.8%) as the media through which hate crimes are most assiduously disseminated. However, this problem does not only involve governments and online platforms, but also affects society in general, where the number of this type of crimes and aggressive behavior on the Internet has increased exponentially in recent years.</p>
      <p>Thus, it seems clear that the problem of detecting inappropriate behavior in general, and hate speech in particular, on the Web has worsened in recent years, making it necessary to study, analyze and implement solutions in all areas, including the field of language technologies [<xref ref-type="bibr" rid="ref1">1</xref>]. Analyzing this type of harmful content on the Web requires automatic systems capable of processing and analyzing human language [<xref ref-type="bibr" rid="ref2">2</xref>]. For this reason, the detection and prevention of hate speech and offensive language has become one of the main research topics of Natural Language Processing (NLP). NLP is an important area of Artificial Intelligence that tries to understand and generate language in the same way as humans do, using computational methods. In addition, the use of Machine Learning (ML) algorithms is allowing the development of powerful classification systems that, combined with advanced NLP techniques, help to provide answers to many current social problems.</p>
      <p>In addition, hope speech is a type of language that is able to relax a hostile environment and that helps, gives suggestions and inspires for good a number of people when they are in times of illness, stress, loneliness or depression. Detecting it automatically, so that positive comments can be more widely disseminated, can have a very significant effect when it comes to combating sexual or racial discrimination or when we seek to foster less bellicose environments [<xref ref-type="bibr" rid="ref3">3</xref>].</p>
      <p>To train these systems, it is essential to generate and compile manually labeled linguistic resources, as well as to adapt existing ones to fit the particular use case. Although in recent years the NLP community has invested considerable efforts in the generation of resources, most of them are only available for English. However, it is necessary to spend efforts on the development of resources and systems adapted to other languages since, for example, in the specific case of hate speech detection, there are important cultural differences (jargons, slang, idiomatic expressions...) depending on the language or the social group under examination [<xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>]. The general trend in artificial intelligence towards data-driven solutions demands a growing volume of data for both training and evaluation. In order to grant greater research focus to data quality and promote data excellence, it is also necessary to work on the construction of specific evaluation sets to improve the quality of training and test data. In addition, it is also necessary to advance in the development of algorithms to build or optimize such datasets, like the creation of "core" sets to deploy augmentation approaches, or more concern in the debugging of text labeling errors.</p>
      <p>Finally, the availability of tools to facilitate the task of moderation for human professionals dedicated to the detection of inappropriate language in social media (comments on news in the media, posts in social networks, responses to messages in online platforms...) is becoming increasingly necessary due to the exponential growth in textual interactions of Internet users. The popularization of the use of social media by the vast majority of the population makes it completely unfeasible to moderate comments and messages by strictly manual means performed by humans. The development of tools is required to assist moderators in decision making through early detection of this type of inappropriate behavior on the Web. Having tools that not only indicate the level of toxicity of a text but also identify sexist, xenophobic, homophobic, racist or simply bad-sounding expressions will help to build a much more inclusive, friendly, social and fair discourse.</p>
      <p>We propose the project "MODERATES: MODERAtion of ConTEnts in Social networks using Language Technologies" (TED2021-130145B-I00) to address these issues. This work has been partially supported by MCIN/AEI/10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR".</p>
      <p>The main objective of this project is to study and develop effective solutions based on advanced machine learning and NLP techniques to determine the degree of offensiveness/toxicity/hope speech/hate speech of a text in Spanish and identify the spans related to this type of content. It is proposed to develop a cloud service capable of automatically labeling/classifying texts according to their level of offensiveness or toxicity. In addition, an interface will be designed to detect terms or passages that contain inappropriate language (abusive, offensive, foul language, hate speech...), marking them in such a way as to facilitate their substitution so that the final text eliminates or reduces the degree of offensiveness. The service will work both as an assistant to guide the user who writes a text and avoid possible offensive expressions, and for the construction of listening, monitoring and filtering tools in real time on social media (comments, responses, posts...). A prototype will be built with different modules that will be integrated into a single platform that will be freely accessible through an Application Programming Interface (API) and through a demonstrator that will allow the configuration of filters and search parameters, in order to obtain different types of reports.</p>
      <p><sup>1</sup> https://www.coe.int/en/web/committee-on-combatting-hate-speech/home</p>
      <p><sup>2</sup> https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en</p>
      <p><sup>3</sup> https://www.inclusion.gob.es/oberaxe/ficheros/ejes/discursoodio/PROTOCOLO_DISCURSO_ODIO.pdf</p>
      <p><sup>4</sup> https://www.interior.gob.es/opencms/pdf/archivos-y-documentacion/documentacion-y-publicaciones/publicaciones-descargables/publicaciones-periodicas/informe-sobre-la-violencia-contra-la-mujer/Informe_evolucion_delitos_de_odio_en-Espana_2019_126200207.pdf</p>
      <p><sup>5</sup> https://www.interior.gob.es/opencms/pdf/archivos-y-documentacion/documentacion-y-publicaciones/publicaciones-descargables/publicaciones-periodicas/informe-sobre-la-evolucion-de-los-delitos-de-odio-en-Espana/Informe_evolucion_delitos_odio_Espana_2022_126200207.pdf</p>
    </sec>
    <sec id="sec-2">
      <title>2. Goals</title>
      <p>The main objective of this project is to study and develop effective solutions based on advanced machine learning and NLP techniques to determine the degree of offensiveness/toxicity of a text in Spanish. It is proposed to develop a cloud service capable of automatically labeling/classifying texts according to their level of offensiveness or toxicity. In addition, an interface will be designed to detect terms or passages that contain inappropriate language (abusive, offensive, foul language, hate speech...), marking them in such a way as to facilitate their substitution so that the final text eliminates or reduces the degree of offensiveness. The service will work both as an assistant to guide the user who writes a text and avoid possible offensive expressions, and for the construction of listening, monitoring and filtering tools in real time on social media (comments, responses, posts...). A prototype will be built with different modules that will be integrated into a single platform that will be freely accessible through an Application Programming Interface (API) and through a demonstrator that will allow the configuration of filters and search parameters, in order to obtain different types of reports.</p>
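      <p>As a sketch of how such a labeling-and-substitution interface could behave, the following Python fragment flags terms and proposes neutral replacements. The mini-lexicon, its entries and the suggested rewrites are invented placeholders standing in for the project's trained models.</p>

```python
import re
from dataclasses import dataclass

# Hypothetical mini-lexicon standing in for the trained models:
# term -> suggested neutral substitution (invented examples).
LEXICON = {"idiota": "persona poco razonable", "odio a": "no me gusta"}

@dataclass
class Span:
    start: int
    end: int
    term: str
    suggestion: str

def moderate(text: str):
    """Label a text and return the flagged spans with substitutions."""
    spans = []
    for term, suggestion in LEXICON.items():
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            spans.append(Span(m.start(), m.end(), m.group(), suggestion))
    label = "offensive" if spans else "non-offensive"
    return label, sorted(spans, key=lambda s: s.start)

def rewrite(text: str) -> str:
    """Apply the suggested substitutions, right to left, so offsets hold."""
    _, spans = moderate(text)
    for s in reversed(spans):
        text = text[:s.start] + s.suggestion + text[s.end:]
    return text
```

      <p>In the envisioned service, the lexicon lookup would be replaced by model predictions, but the span-plus-substitution contract of the interface would stay the same.</p>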
      <p>To achieve this goal, innovative NLP techniques and advanced machine learning algorithms will be studied, designed and evaluated, including the generation of language models and the application of deep learning and transfer learning techniques, as well as the latest experiences in multitask learning and zero-shot learning that allow the detection of inappropriate behaviors, including offensive language and hate speech, through the integration of sentiment analysis, emotion recognition or toxicity detection, among others. These affective computing elements are proving to be very efficient in the detection of inappropriate language, as they go beyond simple pattern detection, surface learning or lexicon searches, integrating advanced knowledge extracted from large text collections into the trained models.</p>
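      <p>The multitask idea above (one shared representation serving several related classification tasks) can be sketched in a few lines of NumPy. Everything here is a didactic simplification: the four-sentence corpus, its labels, the dimensionality and the learning rate are invented assumptions, not the project's architecture.</p>

```python
import numpy as np

# Toy multitask sketch: one shared linear projection feeds two logistic
# heads (offensiveness, negative sentiment), mirroring the idea of
# integrating sentiment signals into offensive-language detection.
corpus = ["te odio mucho", "que dia tan bonito", "eres horrible", "me encanta esto"]
y_off = np.array([1, 0, 1, 0])  # offensive? (invented labels)
y_neg = np.array([1, 0, 1, 0])  # negative sentiment? (invented labels)

vocab = sorted({w for doc in corpus for w in doc.split()})
X = np.array([[doc.split().count(w) for w in vocab] for doc in corpus], float)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(vocab), 4))   # shared projection
heads = {"off": np.zeros(4), "neg": np.zeros(4)}  # task-specific heads
targets = {"off": y_off, "neg": y_neg}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    H = X @ W  # shared features for this step
    for task in heads:
        p = sigmoid(H @ heads[task])
        err = p - targets[task]
        # Binary cross-entropy gradients for this head and the shared layer:
        W -= lr * np.outer(X.T @ err, heads[task]) / len(corpus)
        heads[task] -= lr * H.T @ err / len(corpus)
```

      <p>In the real system the shared projection would be a pre-trained Spanish language model and the heads would cover offensiveness, toxicity, hate speech and hope speech, but the shared-encoder/per-task-head shape is the same.</p>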
      <p>This global goal can be divided into the following sub-objectives:
• Construction and compilation of new tools and resources based on human language technologies to infer, create and utilize knowledge applied to digital content, focusing on the creation of semi-assisted annotators and their application in the annotation process to generate labeled data sets.
• Identification of valid technologies for "listening" to the interactions of individuals with their digital and social environment, so these interactions can be further analysed.
• Study, development and implementation of language technologies together with advanced machine learning algorithms focused on the detection of inappropriate behavior on social networks and hope speech.
• Study and development of deep learning algorithms to model different targeted forms of aggressive communication or risky situations, building artificial intelligence solutions to protect citizens.
• Development and implementation of systems to determine the degree of offensiveness/toxicity/hope speech/hate speech of a text in Spanish.</p>
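      <p>For the first sub-objective, a record emitted by a semi-assisted annotator into a labeled data set might look like the following sketch; the field names, labels and example span are hypothetical, not the project's actual schema.</p>

```python
# Hypothetical record format for a span-annotated Spanish data set
# produced by a semi-assisted annotator; all fields are illustrative.
record = {
    "id": "tw-0001",
    "text": "no soporto a esa gente",
    "labels": {"offensive": True, "hate_speech": False, "hope_speech": False},
    "spans": [{"start": 3, "end": 10, "category": "insult"}],
}

def validate(rec):
    """Minimal sanity checks before a record enters a labeled data set."""
    n = len(rec["text"])
    for s in rec["spans"]:
        # Spans must be non-empty and lie inside the text.
        assert s["start"] >= 0 and s["end"] > s["start"] and n >= s["end"]
    assert "offensive" in rec["labels"]
    return True
```

      <p>Checks of this kind are one concrete place where labeling errors can be caught early, in line with the data-quality concerns raised in the introduction.</p>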
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>To achieve these objectives, a methodology and
recommendations on search tools in social networks will be
defined and established. Filters and search parameters
will be defined, and artificial intelligence algorithms
based mainly on language technologies (language
models, entity recognition, external knowledge integration,
author profiling...) and advanced machine learning
techniques oriented to neural networks (deep learning,
transfer learning, multitask learning, zero-shot learning) will
be proposed, although classic techniques such as SVM
or logistic regression will also be evaluated. Existing
and new or adapted linguistic resources that can
contribute to the discovery of aggressive language patterns
will also be taken into account. We will analyze which
social networks and social media are the most suitable
for monitoring and how to approach the problem in each
of them.</p>
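      <p>As an illustration of the filters and search parameters mentioned above, a monitoring configuration could take a shape like this sketch; every key, value and threshold is invented for the example.</p>

```python
# Hypothetical filter/search configuration for the monitoring platform;
# the keys and values are assumptions, not the project's final parameters.
filters = {
    "networks": ["twitter", "news_comments"],
    "language": "es",
    "query": ["odio", "insulto"],
    "min_toxicity": 0.7,
}

def matches(post, cfg):
    """Decide whether a retrieved post passes the configured filters."""
    return (
        post["network"] in cfg["networks"]
        and post["lang"] == cfg["language"]
        and post["toxicity"] >= cfg["min_toxicity"]
        and any(term in post["text"] for term in cfg["query"])
    )
```

      <p>In the demonstrator, a configuration of this kind would drive both the real-time filtering and the generation of the different types of reports.</p>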
      <p>The first step will be an in-depth study of the state of
the art as well as an analysis of the problem. We will
also decide the scenarios and social media to work on,
establishing a working methodology for each of them.</p>
      <p>After the analysis and establishment of the methodology, we will start with the design and implementation of the platform to help detect hate speech and offensive language. A prototype with the integration of the different modules will be freely accessible, allowing the configuration of filters and search parameters, in order to obtain different types of reports. During the development, various learning algorithms, language models and linguistic resources will be tested and evaluated in order to refine the final result and adjust the application for the task of analysis and detection of offensive language discourse. For each of the selected domains and scenarios, the following activities will be carried out:
1. Data collection: Good practices will be applied for the retrieval, processing and storage of big data. Since we will be working with textual information obtained from social networks, it is important to identify those technologies that are valid for "listening" and storing the interactions of individuals with their digital and social environment.
2. Design of techniques and tools based on NLP: Development of algorithms based on neural networks to model different specific forms of communication, building artificial intelligence solutions that will act as early detection systems. These algorithms will be trained from available labeled datasets. Furthermore, once these algorithms are implemented and start collecting interactions, they will be fine-tuned by re-training with the captured and classified information to adapt to the variability and flexibility of language over time: slang, ill-formed expressions, typos and non-standard grammar occur very often in social networks and other interpersonal communications.
3. Integration of technologies: These trained models will be incorporated into a web application that will act as a monitor of offensive language messages in social networks, so that, for each message retrieved, its content will be automatically analyzed to detect whether it contains hate speech, offensive language, toxicity... and the result will be visualized through different graphs. The user will have the possibility to generate a report and analyze the content of the data based on filters.
4. Evaluation of automatic systems: An essential step is the evaluation of the developed algorithms to estimate the accuracy and coverage that these algorithms could have in a real scenario, i.e. when monitoring the Web to detect this type of inappropriate behavior.</p>
      <p>The different activities described in the previous steps can be visualized in Figure 1.</p>
    </sec>
    <sec id="sec-sci-impact">
      <title>4. Scientific and Technical Impact</title>
      <p>The MODERATES project focuses on a series of scientific challenges in the field of Human Language Technologies (HLT), which must be faced with the experience and techniques developed by the research team involved in the project. Firstly, there exists an important lack of resources, even more so when dealing with the Spanish language. In the project we will focus on generating data and developing methods, techniques and tools to secure our interactions in the digital society. New architectures for artificial intelligence represent the main approach to address social media monitoring, but further questions remain, as new linguistic resources, specific and adapted language models and integration solutions are needed to reach the objectives proposed by this project.</p>
      <p>On the other hand, the impact of the COVID-19 pandemic has increased our dependency on digital channels of communication such as social media and social networks, increasing the potential negative effect on the population, especially among vulnerable communities such as children and teenagers, women, LGTBI people, immigrants, and religious and cultural communities.</p>
      <p>For all these reasons, it is expected that the project will have a scientific impact (both nationally and internationally) in different fields, such as the creation and use of effective linguistic, semantic and computational resources for different languages, especially Spanish; the development and integration of specific methods and tools for the retrieval, extraction, automatic analysis, summarization and representation in digital entities of information coming from different textual genres; and the improvement or creation of resources and components such as crawlers, sentiment and opinion analysis systems and integration tools, which will have a strong impact on the scientific community and society. It is also expected to have transferable results in the medium term, working with real practical cases.</p>
    </sec>
    <sec id="sec-soc-impact">
      <title>5. Social and Economic Impact</title>
      <p>This proposal focuses on a significant strategic societal challenge, given that the detection and prevention of inappropriate behavior on the Internet is of the greatest importance for the contribution to the goal of smart, sustainable and inclusive growth within Europe, in accordance with European, national and regional policy-making. Digital media have become a space where hoaxes, hate speech or abusive behavior proliferate, among other contents that directly and negatively harm the users of this space in particular and society in general. Thus, this project will offer the modeling of the behavior of digital content, being able to contribute to the detection, mitigation and prevention of harmful digital content, in pursuit of a sanitation of social media on the Internet, helping to ensure a respectful, safe and reliable communication environment.</p>
      <p>The project will have an important social impact from the point of view of digital content modeling. This modeling will provide a framework for specialists to develop and implement information systems to address negative social phenomena, protect society from dangers that may be posed to citizens, etc. Also, positive digital content can improve digital literacy and people’s e-safety skills and awareness to prevent the risks of harmful content on the Internet, and thus improve users’ experience on the Internet. Given that service providers have legal responsibility for content, having tools for monitoring illegal hate speech would help to comply with the protocols imposed by legislation.</p>
      <p>In summary, a direct social impact is to reduce the harmful effects that inappropriate content causes on social networks, especially among the most vulnerable sectors of society: young people and children, people with disabilities and people susceptible to harassment (women, LGTBI people, immigrants, religious and cultural communities).</p>
      <p>In terms of economic impact, several sectors could benefit, such as communication media, tourism, fake-news detection, digital security, hate speech detection, constructive discourse promotion or self-learning. On the other hand, big tech companies could be interested in this technology, also providing a great economic impact through the transfer of this knowledge, as mentioned in Section 2 of the project justification. Companies like Facebook or Twitter spend huge amounts of money developing automatic models for detecting inappropriate content. The estimated investment in content moderation will rise from 5 billion today to almost 12 billion in the next five years. Due to problems with human moderation, companies already make use of artificial intelligence for automatic moderation in 60% of the content. Having platforms and services such as the one proposed by the project would have a series of clear impacts on content moderation: 1) increase the performance of manual moderation, 2) reduce the costs of developing and integrating automatic detection solutions, and 3) promote the moderation of content in Spanish, a language that already accounts for 8% of all communications on social networks, with a constant increase each year (reaching 15.6% in the most widespread network, Facebook).</p>
      <p>Despite the clear interest of such large corporations, numerous startups and technological mid-size companies are offering automatic tools for textual analysis in a wide range of sectors such as press, social media, e-commerce... Content moderation is needed by many online services (any service where clients or users are able to publish comments). The technology and solutions released by the project could help local and international companies in providing more robust and higher-performance products, empowering Spanish-oriented products and improving the competitiveness of such mid-size companies.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This work has been partially supported by project MODERATES (TED2021-130145B-I00), funded by MCIN/AEI/10.13039/501100011033 and by the “European Union NextGenerationEU/PRTR”.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Parihar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thapa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <article-title>Hate speech detection using natural language processing: Applications and challenges</article-title>
          ,
          <source>in: 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>1302</fpage>
          -
          <lpage>1308</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Plaza-del-Arco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Molina-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Ureña-López</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Martín-Valdivia</surname>
          </string-name>
          ,
          <article-title>A multi-task learning approach to hate speech detection leveraging sentiment analysis</article-title>
          ,
          <source>IEEE Access 9</source>
          (
          <year>2021</year>
          )
          <fpage>112478</fpage>
          -
          <lpage>112489</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><given-names>S. M.</given-names> <surname>Jiménez-Zafra</surname></string-name>,
          <string-name><given-names>M. Á.</given-names> <surname>García-Cumbreras</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>García-Baena</surname></string-name>,
          <string-name><given-names>J. A.</given-names> <surname>García-Díaz</surname></string-name>,
          <string-name><given-names>B. R.</given-names> <surname>Chakravarthi</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Valencia-García</surname></string-name>,
          <string-name><given-names>L. A.</given-names> <surname>Ureña-López</surname></string-name>,
          <article-title>Overview of HOPE at IberLEF 2023: Multilingual hope speech detection</article-title>,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>71</volume>
          (<year>2023</year>)
          <fpage>371</fpage>-<lpage>381</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name><given-names>F. M.</given-names> <surname>Plaza-del-Arco</surname></string-name>,
          <string-name><given-names>A. B.</given-names> <surname>Parras Portillo</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>López Úbeda</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Gil</surname></string-name>,
          <string-name><given-names>M.-T.</given-names> <surname>Martín-Valdivia</surname></string-name>,
          <article-title>SHARE: A lexicon of harmful expressions by Spanish speakers</article-title>
          ,
          <source>in: Proceedings of the Thirteenth Language Resources and Evaluation Conference</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1307</fpage>
          -
          <lpage>1316</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><given-names>F. M.</given-names> <surname>Plaza-del-Arco</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Montejo-Ráez</surname></string-name>,
          <string-name><given-names>L. A.</given-names> <surname>Ureña-López</surname></string-name>,
          <string-name><given-names>M.-T.</given-names> <surname>Martín-Valdivia</surname></string-name>,
          <article-title>OffendES: A new corpus in Spanish for offensive language research</article-title>,
          <source>in: Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)</source>,
          <year>2021</year>
          , pp.
          <fpage>1096</fpage>
          -
          <lpage>1108</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>