<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>14-ExLab@UniTo for AMI at IberEval2018: Exploiting Lexical Knowledge for Detecting Misogyny in English and Spanish Tweets</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Endang Wahyu Pamungkas</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandra Teresa Cignarella</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valerio Basile</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Viviana Patti</string-name>
          <email>pattig@di.unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Dipartimento di Informatica, Università degli Studi di Torino</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>PRHLT Research Center, Universitat Politècnica de València</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>234</fpage>
      <lpage>241</lpage>
      <abstract>
<p>We describe our participation in the Automatic Misogyny Identification (AMI) shared task at IberEval 2018. The task focused on the detection of misogyny in English and Spanish tweets and was articulated in two sub-tasks addressing the identification of misogyny at different levels of granularity. We describe the final submitted systems for both languages and sub-tasks: Task A is a classical binary classification task to determine whether a tweet is misogynous or not, while Task B is a finer-grained classification task devoted to distinguishing different types of misogyny, where systems must predict (i) one out of five categories of misogynistic behaviour and (ii) whether the abusive content was purposely addressed to a specific target or not. We propose an SVM-based architecture and explore the use of several sets of features, including a wide range of lexical features relying on available and novel lexicons of abusive words, with a special focus on sexist slurs and abusive words targeting women in the two languages at issue. Our systems ranked first in Task A for both English and Spanish (accuracy of 0.913 for English; 0.815 for Spanish), outperforming the baselines and the other participating systems, and first in Task B on Spanish.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
In the era of mass online communication, more and more episodes of hateful
language and harassment against women occur in social media
(https://www.amnesty.org/en/latest/research/2018/03/online-violenceagainst-women-chapter-3).
Hate Speech (HS) can be defined as any type of communication that is abusive, insulting,
intimidating, or harassing, and/or that incites violence or discrimination, and that
disparages a person or a group on the basis of some characteristics such as
race, color, ethnicity, gender, sexual orientation, nationality, religion, or other
characteristics [
        <xref ref-type="bibr" rid="ref1">1</xref>
]. In particular, when HS is gender-oriented and specifically
targets women, we refer to it as misogyny [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
Recently, an increasing number of scholars has been focusing on the task of
automatic detection of abusive or hateful language online [
        <xref ref-type="bibr" rid="ref3">3</xref>
], where hate speech is
characterized by some key aspects which distinguish it from offline, face-to-face
communication and make it potentially more dangerous and hurtful. In
particular, hate speech in the form of racist and misogynistic remarks is a common
occurrence on social media [
        <xref ref-type="bibr" rid="ref4">4</xref>
]; therefore, recent works on the detection of HS
have focused on HS related to race, religion, and ethnic minorities [
        <xref ref-type="bibr" rid="ref5">5</xref>
] and on
gender-based hate, which is also the focus of the AMI shared task.
      </p>
      <p>
Detecting misogynistic content and its authors is still a difficult task for social
media platforms. For instance, the popular social network Facebook is still unable
to deal with this issue and relies on its community to report misogynistic content
(https://www.nytimes.com/2013/05/29/business/media/facebook-says-itfailed-to-stop-misogynous-pages.html).
The work of Hewitt et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
] is a first study that attempts to detect
misogyny on Twitter manually, in which the authors used several terms related
to slurs against women to gather the data from Twitter. However, the automatic
detection of misogynistic content is still an open problem, with few approaches
proposed only recently [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
In this paper, we describe the systems we submitted for detecting misogyny
in the context of the Automatic Misogyny Identification (AMI) shared task at
IberEval 2018 [
        <xref ref-type="bibr" rid="ref8">8</xref>
], defined as a two-fold task on detecting misogyny in English
and Spanish tweets at different levels of granularity. In particular, considering
the role of lexical choice in gender stereotypes, we decided to explore the role of
lexical knowledge in detecting misogyny, by experimenting with lexical features
based both on generic lexicons of slurs and abusive words and on specific lexicons
of sexist slurs and hate words targeting women.

We built two similar systems for misogyny detection, one for English and one for
Spanish. Several sets of features were considered, based on a linguistically
motivated approach, including stylistic, structural, and lexical features. In particular,
in order to explore the role of lexical knowledge in this task, we experimented with
the use of (i) generic lexicons of abusive words and slurs and (ii) specific lexicons of
sexist slurs and hate words reflecting specifically gender-based hate and well-known
cultural gender biases and stereotypes. For the first time in this task, we experimented
with the use of a new multilingual lexicon (HurtLex), which includes an
inventory of hate words compiled by the Italian linguist Tullio De Mauro [
        <xref ref-type="bibr" rid="ref9">9</xref>
], which
has been semi-automatically translated from Italian into both English and Spanish
relying on BabelNet [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
The list of lexical features includes the following. Bag of Words (BoW): a sparse vector
encoding the occurrence of unigrams, bigrams, and trigrams in a tweet. Swear
Word Count: the number of swear words contained in a tweet, based on the list of
swear words from the noswearing dictionary (https://www.noswearing.com/dictionary).
Swear Word Presence: a binary value representing the presence of swear words
from the same dictionary. Sexist Slurs Presence:
we use a small set of sexist words aimed at women from prior work [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ];
this feature takes the binary value 0 (there is no sexist slur in the tweet) or 1 (there
is at least one sexist slur in the tweet). Woman-related Words Presence: this
feature is used to represent the target of misogyny. Therefore, we manually built
a small set of words in English containing synonyms of, or other words related to,
the word "woman". Additionally, we extracted a set of features based on the
presence of words from the HurtLex lexicon [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This lexicon includes a wide
inventory of about 1,000 Italian hate words, originally compiled manually
by De Mauro [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and organized in 17 categories grouped into different macro levels:
(a) Negative stereotypes: ethnic slurs (PS); locations and demonyms (RCI);
professions and occupations (PA); physical disabilities and diversity (DDF);
cognitive disabilities and diversity (DDP); moral and behavioral defects
(DMC); words related to social and economic disadvantage (IS).
(b) Hate words and slurs beyond stereotypes: plants (OR); animals (AN); male
genitalia (ASM); female genitalia (ASF); words related to prostitution
(PR); words related to homosexuality (OM).
(c) Other words and insults: descriptive words with potential negative
connotations (QAS); derogatory words (CDS); felonies and words related to crime and
immoral behavior (RE); words related to the seven deadly sins of the Christian
tradition (SVP).
The lexicon has been translated into English and Spanish
semi-automatically by extracting all the senses of all the words from BabelNet [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ],
manually discarding the senses that were not relevant to the context of hate, and
finally retrieving all the English and Spanish lemmas for the remaining senses.
Thanks to a manual inspection, we identified five categories as specifically related
to gender-based hate: DDF and DDP, related to negative stereotypes; PR, ASM,
and ASF, beyond stereotypes.
      </p>
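<p>To make the lexicon-based features concrete, the sketch below derives the swear-word, sexist-slur, woman-related-word, and per-HurtLex-category features from toy stand-in word lists. All lists and function names here are hypothetical illustrations, not the system's actual resources or code.</p>
<preformat>
```python
import re

# Illustrative stand-ins for the real word lists: the noswearing dictionary,
# the sexist-slur lexicon from prior work, the hand-built woman-related list,
# and two gender-related HurtLex categories. All entries are hypothetical.
SWEAR_WORDS = {"bitch", "slut", "whore"}
SEXIST_SLURS = {"bitch", "slut"}
WOMAN_WORDS = {"woman", "women", "girl", "girls", "wife"}
HURTLEX = {"ASF": {"slut"}, "PR": {"hooker", "whore"}}

def tokenize(tweet):
    # Simple lowercase word tokenizer; the real preprocessing may differ.
    return re.findall(r"[a-z']+", tweet.lower())

def lexical_features(tweet):
    tokens = tokenize(tweet)
    swear_count = sum(1 for t in tokens if t in SWEAR_WORDS)
    feats = {
        "swear_word_count": swear_count,              # integer count
        "swear_word_presence": int(swear_count > 0),  # binary 0/1
        "sexist_slur_presence": int(any(t in SEXIST_SLURS for t in tokens)),
        "woman_words_presence": int(any(t in WOMAN_WORDS for t in tokens)),
    }
    # One binary feature per HurtLex category (here: ASF, PR).
    for cat, words in HURTLEX.items():
        feats["hurtlex_" + cat] = int(any(t in words for t in tokens))
    return feats
```
</preformat>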
<p>The structural features employed by our systems include the following. Bag of Hashtags
(BoH): similarly to BoW, a sparse vector over the hashtags occurring in a tweet. Bag of
Emojis (BoE): we also used the emojis in the tweets as features, representing each emoji
in the feature matrix by its CLDR short name
(https://unicode.org/emoji/charts/full-emoji-list.html), obtained by converting the emoji
Unicode character with the emoji library on PyPI (https://pypi.org/project/emoji/).
Hashtag Presence: a binary value, 0 if there is no hashtag in the tweet and 1 if there is at
least one hashtag. Link Presence: the presence of URLs in the tweet as
a binary value, 0 if there is no link and 1 if there is at least one link in the tweet.
All the features are encoded as fixed-size numerical or one-hot vector
representations, allowing us to experiment extensively with their combinations.</p>
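<p>A minimal sketch of the structural features follows, using the standard library's Unicode character names as a stand-in for the CLDR short names that the emoji package produces; the regular expressions and the rough emoji code-point check are illustrative assumptions, not the system's actual code.</p>
<preformat>
```python
import re
import unicodedata

def structural_features(tweet):
    # Hashtag and URL presence as binary features, plus bags of hashtags
    # and emoji names. unicodedata.name() approximates the CLDR short
    # names obtained via the PyPI "emoji" package in the actual system.
    hashtags = re.findall(r"#\w+", tweet.lower())
    links = re.findall(r"https?://\S+", tweet)
    emoji_names = []
    for ch in tweet:
        if ord(ch) >= 0x1F300:  # rough emoji range check, illustrative only
            try:
                emoji_names.append(unicodedata.name(ch).lower())
            except ValueError:
                pass  # unnamed code point: skip
    return {
        "bag_of_hashtags": hashtags,
        "bag_of_emojis": emoji_names,
        "hashtag_presence": int(len(hashtags) > 0),
        "link_presence": int(len(links) > 0),
    }
```
</preformat>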
    </sec>
    <sec id="sec-2">
      <title>Experiments and Results</title>
<p>In this section, we report on the results of the evaluation of our systems for
misogyny detection according to the benchmark established by the AMI task.</p>
<p>For the development of the Spanish system, we translated all the English word lists
described in Section 2 using Google Translate (https://translate.google.com/).</p>
      <sec id="sec-2-1">
        <title>AMI: Tasks Description and Dataset Composition</title>
<p>The organizers of AMI proposed an automatic detection task of misogynistic
content on Twitter, in English (EN) and Spanish (SP). Two different tasks were
proposed. Task A is a binary classification task, where every system should
determine whether a tweet is misogynous or not. Task B is composed
of two distinct classification tasks. First, participants were asked to classify
the misogynous tweets into five categories of misogynistic behavior:
"stereotype &amp; objectification", "dominance", "derailing", "sexual harassment &amp;
threats of violence", and "discredit". Secondly, they were asked to classify the
misogynous tweets based on their target, labeling whether it is active (i.e.,
referring to one woman in particular) or passive (i.e., referring to a group of women).</p>
<p>Task A is evaluated in terms of accuracy, while for Task B the evaluation
consists of the macro-average of the F1-scores on the positive classes. Each
participating team could submit a maximum of 5 runs, pertaining to two different
scenarios: constrained and unconstrained.</p>
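<p>The Task B metric can be sketched as an unweighted mean of per-class F1 over the positive classes. The from-scratch implementation below is an illustration of the metric's definition; the official evaluation script may differ in implementation details.</p>
<preformat>
```python
def f1(y_true, y_pred, label):
    # Per-class F1: harmonic mean of precision and recall for one label.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    if tp == 0:
        return 0.0
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def macro_f1(y_true, y_pred, positive_labels):
    # Macro-average: unweighted mean of per-label F1 over the positive classes,
    # so rare classes (e.g. "derailing") weigh as much as frequent ones.
    return sum(f1(y_true, y_pred, l) for l in positive_labels) / len(positive_labels)
```
</preformat>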
<p>Dataset. As summarized in Table 1, the organizers provided 3,251 tweets for the
English training set and 3,307 tweets for the Spanish training set. Each tweet,
in both languages, was annotated at three levels: 1) presence of misogynous
content, 2) category of misogynistic behavior, as described in Section 3.1, and 3)
target of misogyny (active or passive).</p>
<p>[Table 1: label distribution of the English and Spanish training sets. Task A:
1,568 misogynistic and 1,683 not misogynistic tweets in English; 1,649 and 1,658,
respectively, in Spanish. Task B breaks the misogynistic tweets down by behavior
(stereotype, dominance, derailing, sexual harassment, discredit) and by target
(active, passive).]</p>
<p>The organizers provided a balanced label
distribution for Task A (misogynous vs. not misogynous), while the distribution
of data for Task B was highly unbalanced, reflecting the natural distribution of
misogynistic behaviours and targets in the corpus.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Experimental Setup</title>
<p>We built two variants of our system and trained them on the available training
sets. We tuned the systems on the basis of the results of a 10-fold cross-validation,
using accuracy as the evaluation metric for Task A. The system for English is
based on an SVM with a Radial Basis Function (RBF) kernel, while the system built
for Spanish is based on an SVM with a linear kernel. Both systems were built
using the scikit-learn Python library. Additionally, we performed an ablation test
on our feature sets to study the impact of the different features on system
performance. Table 2 shows the features selected for each of our submissions and
the accuracy scores from cross-validation on the training sets for English and Spanish.
As for the features based on HurtLex, in S4 (EN) and S3 (SP) we
explored the impact of hate words belonging to categories specifically related to
gender-based hate (see Sec. 2). In addition, we tested the performance of the
best-performing sets of features of one language applied to the other language,
to gauge the multilingual potential of the best systems: English submission
5 is based on the best-performing (in cross-validation) combination of features
for Spanish, and Spanish submission 5 is based on the best-performing
combination of features for English. For Task B, we used exactly the same features
as for Task A in each submission. We only submitted constrained runs.</p>
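<p>The setup described above can be sketched with scikit-learn as follows. The toy tweets and labels are hypothetical stand-ins for the AMI training data, the pipeline uses a plain bag of 1- to 3-grams in place of the full feature set, and the fold count is reduced only because the toy set is tiny (the paper tunes with 10-fold cross-validation).</p>
<preformat>
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical toy data standing in for the AMI training tweets.
tweets = ["you are a slut", "have a nice day",
          "shut up bitch", "great talk today"] * 5
labels = [1, 0, 1, 0] * 5  # 1 = misogynous, 0 = not misogynous

# English system: SVC with an RBF kernel; the Spanish system uses
# kernel="linear" instead. BoW covers unigrams, bigrams and trigrams.
model = make_pipeline(CountVectorizer(ngram_range=(1, 3)), SVC(kernel="rbf"))

# Cross-validated accuracy, the Task A tuning metric.
scores = cross_val_score(model, tweets, labels, cv=5, scoring="accuracy")
print(scores.mean())
```
</preformat>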
      </sec>
      <sec id="sec-2-3">
        <title>Official Results and Analysis</title>
        <p>Tables 3 to 6 show our submissions' rankings based on the official competition
results (https://amiibereval2018.wordpress.com/important-dates/results/). The
submission names follow the numbering in Table 2 (run 1 is the result of S1, and so on).
Our systems ranked first in Task A for both English (accuracy 0.913 by run 1) and
Spanish (accuracy 0.815 by run 3). Meanwhile, for Task B (Tables 5 and 6), one of
our systems obtained the best result on Spanish (average macro F-measure 0.446 by
run 2) and ranked 6th on English (average macro F-measure 0.370 by run 5).</p>
<p>Our experiment in testing the multilingual setting proved to be a challenge.
Not surprisingly, both submissions 5 were the worst-performing among our
submissions. However, the English S5 shows a comparatively good performance
in absolute terms: in Table 3, we can see that all of our submissions in
English were above the competition baseline. However, as we can see in Table 4,
with the same system applied to the Spanish dataset we obtained a very low
accuracy score in Spanish (ranked 24th, accuracy 0.537). This asymmetry
indicates that the combination of BoW, BoH and BoE is a better representation of
tweets in a multilingual setting than more ad-hoc, task-specific features.</p>
<p>[Table 3: Task A, English. Rank 1: 14-exlab.c.run1, accuracy 0.913;
rank 2: 14-exlab.c.run2, 0.902; rank 3: 14-exlab.c.run4, 0.898;
rank 4: 14-exlab.c.run3, 0.879; rank 10: 14-exlab.c.run5, 0.824;
rank 15: ami-baseline, 0.784.]</p>
<p>[Table 4: Task A, Spanish. Rank 1: 14-exlab.c.run3, accuracy 0.815;
rank 4: 14-exlab.c.run1, 0.812; rank 5: 14-exlab.c.run2, 0.812;
rank 6: 14-exlab.c.run4, 0.809; rank 18: ami-baseline, 0.767;
rank 24: 14-exlab.c.run5, 0.536.]</p>
<p>On Task B, most participants achieved relatively low results, showing the
difficulty of this task, especially in classifying misogynistic behavior categories.
We found the datasets' unbalanced distribution of labels to be the main issue.
Based on the detailed results provided by the organizers, we note that most of
the submitted systems are not able to detect the less represented classes,
including derailing (29), dominance (49), and stereotype &amp; objectification (137).
Classifying the target of misogyny (active or passive) has not been an easy task
either, as can be seen from the F1-scores in the official results.</p>
<p>[Table 5: Task B, English. Rank 6: 14-exlab.c.run5, F1-score 0.369;
rank 8: 14-exlab.c.run3, 0.351; rank 10: 14-exlab.c.run4, 0.343;
rank 12: 14-exlab.c.run2, 0.342; rank 15: 14-exlab.c.run1, 0.338;
rank 16: ami-baseline, 0.337.]</p>
<p>[Table 6: Task B, Spanish. Rank 1: 14-exlab.c.run2, F1-score 0.446;
rank 2: 14-exlab.c.run3, 0.445; rank 3: 14-exlab.c.run4, 0.444;
rank 5: 14-exlab.c.run1, 0.441; rank 14: ami-baseline, 0.410;
rank 20: 14-exlab.c.run5, 0.279.]</p>
<p>The feature set including Swear Word Count, Swear Word Presence, Hashtag
Presence, Link Presence, Sexist Slurs Presence, and Woman-related Words Presence
outperformed all other submissions in English. In Spanish, the use of terms from the
HurtLex lexicon that were selected as related to gender-based hate improves system
performance in submission 3. However, not all the lexicon categories have been
shown to be useful for this task, as indicated by the results of submission 4.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Discussion and Conclusion</title>
<p>In this paper we described the 14-ExLab@UniTO submission to the Automatic
Misogyny Identification (AMI) shared task at IberEval 2018. Our approach based
on lexical knowledge was successful, and our systems turned out to be the
best-performing among the participants in Task A for both English and
Spanish. We also introduced a novel hate-specific lexical resource, which helped
to improve the performance on the misogyny identification task.</p>
<p>As for Task B, it was hard for all systems to classify misogynous
tweets into the 5 categories proposed. From a manual inspection of the data, it
emerged that there is no clear demarcation line between one category and the
other, and that the high presence of swearing in categories such as dominance
and/or discredit depends simply on the focus (e.g., the agent (man) vs. the wounded
party, i.e., the target (woman)). At the same time, stereotype &amp; objectification is not
so conceptually distant from the sexual harassment category, due to a strong
use of language referring to sexual body parts or vulgar sexual practices. Some
examples from the English and Spanish datasets:
stereotype &amp; objectification (EN): No girl is even capable of developing morals
until they get the slut fucked out of them. Welcome to my generation
dominance (EN): Bad girls get spankings
derailing (EN): Women want u to automatically believe women who scream rape they
don't understand our position....
sexual harassment &amp; threats of violence (EN): @ SynergyFinny hey bitch
wassup bitch suck my dick bitch
discredit (EN): @ Herbwilson1967 Fuck that money whore @HillaryClinton Too stupid
to know consensual touching or grabbing is not assault. Only @ChelseaClinton is dumber
stereotype &amp; objectification (ES): Que cruza antes la calle, una mujer inteligente
o una tortuga vieja? Una tortuga vieja porque las mujeres inteligentes no existen...
dominance (ES): "Voy a enseñarle a esta perra como se trata a un hombre"
LMAO IN LOVE WITH EL TITI
sexual harassment &amp; threats of violence (ES): @ genesismys1985 Me gustaría
abrirte las piernas y clavarte toda mi polla en tu culo.</p>
<p>discredit (ES): Porque ladra tanto mi perra? La puta madre callate un poco</p>
<p>We are planning to participate in the upcoming AMI shared task at EVALITA
2018, in order to validate our approach for the Italian language as well.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>V. Basile and V. Patti were partially funded by Progetto di Ateneo/CSP 2016
(Immigrants, Hate and Prejudice in Social Media, S1618 L2 BOSC 01).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Erjavec</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovacic</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          : "
          <article-title>You don't understand, this is a new war!" Analysis of hate speech in news web sites' comments</article-title>
          .
          <source>Mass Communication and Society</source>
          <volume>15</volume>
          (
          <year>2012</year>
          )
          <fpage>899</fpage>
          –
          <lpage>920</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Manne</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <source>Down Girl: The Logic of Misogyny</source>
          . Oxford University Press (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wiegand</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A survey on hate speech detection using natural language processing</article-title>
          .
          <source>In: Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media</source>
          . (
          <year>2017</year>
          )
          <fpage>1</fpage>
          –
          <lpage>10</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Waseem</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hovy</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Hateful symbols or hateful people? predictive features for hate speech detection on Twitter</article-title>
          .
          <source>In: Proceedings of the NAACL student research workshop</source>
          . (
          <year>2016</year>
          )
          <fpage>88</fpage>
          –
          <lpage>93</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Sanguinetti</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poletto</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bosco</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patti</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stranisci</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>An Italian Twitter Corpus of Hate Speech against Immigrants</article-title>
          .
          <source>In: Proc. of the 11th International Conference on Language Resources and Evaluation (LREC</source>
          <year>2018</year>
          ), ELRA (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Hewitt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tiropanis</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bokhove</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The problem of identifying misogynist language on Twitter (and other online social spaces)</article-title>
          .
          <source>In: Proceedings of the 8th ACM Conference on Web Science</source>
          , ACM (
          <year>2016</year>
          )
          <fpage>333</fpage>
          –
          <lpage>335</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Anzovino</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fersini</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Automatic Identification and Classification of Misogynistic Language on Twitter</article-title>
          .
          <source>In: Proc. of the 23rd Int. Conf. on Applications of Natural Language &amp; Information Systems</source>
          , Springer (
          <year>2018</year>
          )
          <fpage>57</fpage>
          –
          <lpage>64</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Fersini</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anzovino</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Overview of the Task on Automatic Misogyny Identification at IberEval</article-title>
          .
          <source>In: Proc. of 3rd Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval</source>
          <year>2018</year>
          )
          , co-located with SEPLN 2018, CEUR-WS.org (
          <year>2018</year>
          )
          <fpage>57</fpage>
          –
          <lpage>64</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>De Mauro</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Le parole per ferire</article-title>
          .
          <source>Internazionale</source>
          (
          <year>2016</year>
          ) 27 settembre
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Bassignana</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>HurtLex: Developing a multilingual computational lexicon of words to hurt</article-title>
          . Bachelor's thesis (
          <year>2018</year>
          ). Supervisor:
          <string-name>
            <given-names>V.</given-names>
            <surname>Patti</surname>
          </string-name>
          , Co-supervisor: V. Basile.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Fasoli</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carnaghi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paladino</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          :
          <article-title>Social acceptability of sexist derogatory and sexist objectifying slurs across contexts</article-title>
          .
          <source>Language Sciences</source>
          <volume>52</volume>
          (
          <year>2015</year>
          )
          <fpage>98</fpage>
          –
          <lpage>107</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Navigli</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ponzetto</surname>
            ,
            <given-names>S.P.:</given-names>
          </string-name>
          <article-title>BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>193</volume>
          (
          <year>2012</year>
          )
          <fpage>217</fpage>
          –
          <lpage>250</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>