<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Towards the construction of a dataset of art-related synaesthetic metaphors: methods and results</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Simona Corciulo</string-name>
          <email>simona.corciulo@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Viviana Patti</string-name>
          <email>viviana.patti@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rossana Damiano</string-name>
          <email>rossana.damiano@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dipartimento di Informatica, Università di Torino</institution>
          ,
          <addr-line>Torino</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <abstract>
        <p>This paper describes a model of synaesthetic metaphor in non-poetic, art-related texts, whose ultimate goal is to suggest sensory alternatives for contents accessed mainly by sight in museums and art galleries. We created and applied a multi-level annotation scheme to build a manually annotated resource of synaesthetic metaphors extracted from museum catalogues, and designed a pipeline for the automatic detection and interpretation of synaesthetic metaphors in texts. Finally, we tested a preliminary implementation of this pipeline on real data, shedding light on the relevance and complexity of this phenomenon and on possible areas of improvement.</p>
      </abstract>
      <kwd-group>
        <kwd>Synaesthetic metaphors</kwd>
        <kwd>multi-sensory design</kwd>
        <kwd>NL resources</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>This paper is structured as follows. After surveying the related work in Section 2, we describe
the methodology behind the manual annotation and design of the pipeline (Section 3). Section
4 illustrates the evaluation of the proposed pipeline. Discussion and Conclusion end the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        Synaesthesia in language Synaesthetic metaphors consist of two subjects, the tenor and
the vehicle, such that the first can be economically described by a transfer of the implicit and explicit
attributes of the second [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        The terms corresponding to tenor and vehicle in Lakof’s theory are target and source, two
structures (or domains) underlying cognitive processes detectable in language [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
One of the first systematic studies concerning synaesthetic metaphors was related to the
directionality of mapping. Ullmann (1957) argues that in synaesthetic metaphors concepts of
the so-called “lower” senses often correspond to the source, while concepts of the “higher”
senses regularly correspond to the target domain [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Interpretation and detection tasks Su et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] suggested a method for interpreting
synaesthetic metaphors by simulating cross-modal similarity between different perceptual
modalities. Their model can exhaustively consider the semantic knowledge of the features, perceptual
modality and sentiment, and incorporate the cross-modality relations.
      </p>
      <p>
        Tekiroglu et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] examined how sensory features affect the recognition of metaphors. They
offer a method to automatically identify these correlations from a dependency-parsed corpus
and make use of an existing vocabulary linking English terms to sensory modalities. The
findings reveal that sensory features are essential for detecting metaphors.
      </p>
      <p>
        Lievers [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] provided a technique for the semi-automatic extraction of synaesthetic metaphors
for use with general-purpose corpora. Most transfers follow Ullmann’s
schema, but in some cases reverse transfers were reported (e.g., terrible cold).
      </p>
      <p>Our approach relies on a more complete, explicit model of synaesthetic metaphor and differs
from previous approaches in the use of annotated resources (lexica and datasets). Moreover, the
model is tailored to synaesthetic metaphors in art descriptions.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology and Resources</title>
      <p>In order to create resources and tools for analyzing synaesthetic metaphors in artwork
descriptions, we combined manual annotation and automatic tools to design a pipeline for detecting and
analysing synaesthetic metaphors. After identifying the relevant syntactic, lexical and semantic
features of synaesthetic metaphor from the literature, we manually identified and analysed
its occurrences in the description of artworks in a set of museum catalogues. Secondly, we
designed a pipeline which relies on a combination of supervised and unsupervised classification
methods to detect and analyse the occurrences of this specific metaphor type. The ultimate
goal is twofold: on the one side, leveraging manual annotation to collect ground truth data; on
the other side, testing tools for the detection and study of synaesthetic metaphors.</p>
      <sec id="sec-3-1">
        <title>3.1. Manual annotation</title>
        <p>
          Annotation Scheme Based on the syntactic, lexical and semantic features that emerged from the
literature on synaesthetic metaphor, we created a multi-level annotation scheme.
One of the authors manually annotated a corpus of 633 artwork descriptions (in English) taken
from the catalogues of the Turin Gallery of Modern Art (Galleria d’Arte Moderna, GAM), the
Irish Museum of Modern Art (IMMA) of Dublin, and the Hong Kong National Museum, with
the goal of collecting ground truth data on synaesthetic metaphors in our domain.
The text was annotated by using GATE, a general architecture for text engineering [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
Each description consists of an average of 345 tokens.
        </p>
        <p>The annotation scheme includes:
• sentence-level contextual information: whether the excerpt includes visual information
(e.g., painted or sculpted figures’ position in space, argumentative description about
visual features such as shapes, colors, brightness), abstract information (e.g., feelings
and concepts), other sensory details (e.g., materials, textures), and historical data (e.g.,
biographies, historical events)
• multi-word synaesthetic metaphors, annotated for:
– scheme in Ullmann’s hierarchy
– syntactic pattern
– word-level information:
∗ source and target that can be experienced through the sensory modalities
∗ source and target that are associated with concrete</p>
        <sec id="sec-3-1-1">
          <title>Painting: The Wood Pigeon’s Nest (1874)</title>
          <p>Contextual annotation:
&lt;Collins’ style developed out of a fascination and connection with the Irish landscape. His
practice would often involve detailed studies of twigs and moss&gt;&lt;biographical information&gt;.
&lt;Misty hues&gt; &lt;synaesthetic metaphor&gt; dominate his palette, contributing to the atmospheric
mood. &lt;Although Collins was primarily a landscape artist it was the concept of the land that
concerned him – his work rarely related to a particular place but captures the romantic notion of
a poetic Ireland&gt;&lt;biographical information&gt;. ‘The Wood Pigeon’s Nest’ perfectly encapsulates
this atmospheric style. The image of nest and egg is central &lt;visual description&gt;&lt;sensory
cue:position&gt;. The vulnerability and fragility of the nest is evoked &lt;abstract concept&gt;, the subject
hewn out of an abstract background &lt;visual description&gt;.</p>
          <p>Annotation of synaesthetic metaphors:
misty&lt;haptic&gt; hues&lt;visual&gt;
The annotated dataset As can be seen in Table 1, there are significant differences in the
synaesthetic metaphors found in the various museums. Some museum catalogues, in fact,
seem to be more productive in terms of synaesthetic metaphors: for IMMA, in particular, only
13 synaesthetic metaphors were found for 493 artworks overall; at the other extreme, the
GAM museum yielded 20 metaphors for only 38 artworks. Despite these huge differences,
which may be motivated by cultural and linguistic differences in the tradition of art writing and
deserve further investigation, the role of synaesthetic metaphor proves to be relevant in artwork
description.</p>
          <p>From the GAM’s catalogue, 24 synaesthetic metaphors were identified for 12 different artworks.
Most identified pairs belong to the adjective-noun pattern, and only rarely to the verb-noun pattern.
For the IMMA (Irish Museum of Modern Art) catalogue, 14 synaesthetic metaphors were identified
for 13 different artworks. As for the GAM catalogue, most pairs belong to the adjective-noun
pattern, and only infrequently to the verb-noun pattern.</p>
          <p>From the point of view of detection and interpretation, then, it is also worth considering the
frequency of the various schemes. In the IMMA and GAM’s catalogues, we have identified
40 synaesthetic metaphors: over 80% reflect the mapping between touch and vision (e.g., cold
picture, warm colour, sensuous tone). In parallel, all metaphors identified in the IMMA catalogue
represent the same directionality. For the analyzed catalogues, synaesthetic metaphors rarely
involve the gustatory, olfactory and hearing modalities. For the GAM catalogue, a few samples
of this type were identified, with a frequency of less than 12% (e.g., erotic flavour, fresh tone);
the synaesthetic metaphors collected from the IMMA’s catalogue also feature a narrow range
of modalities, excluding the gustatory and olfactory ones. Nevertheless, for this catalogue, a
rarefied tactile dimension emerges without reference to taste or smell. The preference for the
haptic-to-visual mapping is confirmed for the Hong Kong museum. In contrast with the IMMA and GAM
catalogues, 25% of the collected metaphors from the Hong Kong Museum describe the
brushstrokes on the canvas, e.g. as delicate, crisp, and rigorous.</p>
          <p>Notice that this ordering confirms that the findings are in line with the hierarchy of modalities,
according to which synaesthetic metaphors proceed from the lower to the higher modalities.
Collecting art-related descriptions In order to gather further text data for testing the
manual and automatic annotation tools, we created a corpus of artwork descriptions (Corpus
A) via web scraping from Google Arts and Culture. Launched in February 2011 by Google,
it hosts around six million high-resolution images of works of art worldwide, sometimes
complemented by a textual description and metadata such as title, author, and date. This corpus
includes artwork descriptions from a set of ten different western and eastern countries (Table
2), selected on the basis of the diversity of collections, cultures and geographical area.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Metaphor detection and interpretation pipeline: a proposal</title>
        <p>The annotation pipeline we designed and tested on the annotated data encompasses four main
steps.</p>
        <p>1. Syntactic annotation. Artwork descriptions from the museum catalogues are parsed
and the word pairs which match the syntactic patterns of synaesthetic metaphor are
extracted, yielding a set of candidate synaesthetic metaphors.
2. Identification of sensory modalities. In this phase, the candidate word pairs are
automatically annotated for the sensory modalities using a multi-class classifier. This step
relies on lexical resources for the identification of the sensory modalities of words.
3. Filtering. In this phase, the candidate word pairs, enriched with the sensory domains in
the previous step, are matched against the synaesthetic metaphor schemes provided by
Ullmann (Section 2). Only the pairs which realize one of the possible schemes are kept,
while the others are discarded: for example, word pairs where source and target belong
to the same sensory modality cannot qualify as synaesthetic metaphors.
4. Classification. In this phase, a binary classifier is run on the obtained word pairs to
identify the actual synaesthetic metaphors.</p>
        <p>
          In the following, we describe the experiments carried out to assess the feasibility of the pipeline.
Syntactic annotation In order to extract from art descriptions the syntactic patterns which
characterize synaesthetic metaphor, we used a well-established, standard format and pipeline for
syntactic annotation (Universal Dependencies, UD) [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. This tool relies on the GUM treebank for
English, developed on top of UD, which includes the genres academic, blog, fiction, government,
news, nonfiction, social, spoken, web, and wiki [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. We used the UDeasy suite [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] to parse
the syntactically annotated linguistic data and extract the syntactic patterns from a CoNLL-U
format.
        </p>
        <p>
          After parsing, we extracted four dependency patterns, each composed of a word pair (a minimal
extraction sketch follows this list):
• adjective (adj) – noun (nn) where nn is parent of adj &amp; adj precedes nn by exactly one
position,
• adjective (adj) – noun (nn) where nn is parent of adj &amp; adj precedes nn by exactly two
positions,
• adjective (adj) – noun (nn) where nn is parent of adj &amp; adj precedes nn by exactly three
positions,
• verb (vrb) – noun (nn) where nn is parent of vrb &amp; vrb precedes nn by exactly one position.
        </p>
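        <p>As an illustration of this step, the following minimal sketch (not the UDeasy queries actually used) shows how the four patterns above could be extracted from CoNLL-U output with the third-party conllu Python package; the function name candidate_pairs and the parameter max_adj_distance are ours and purely illustrative.</p>
        <preformat>
# Minimal sketch: extract the four dependency patterns from CoNLL-U output.
# This is an illustrative re-implementation, not the UDeasy queries used in
# the paper; all names are hypothetical.
from conllu import parse

def candidate_pairs(conllu_text, max_adj_distance=3):
    """Yield (modifier, head_noun, distance) tuples matching the patterns."""
    for sentence in parse(conllu_text):
        # Index tokens by position, skipping multi-word token ranges.
        by_id = {tok["id"]: tok for tok in sentence if isinstance(tok["id"], int)}
        for tok in by_id.values():
            head = by_id.get(tok["head"])
            if head is None or head["upos"] != "NOUN":
                continue  # the noun must be the parent of the modifier
            distance = head["id"] - tok["id"]  # the modifier must precede the noun
            if tok["upos"] == "ADJ" and distance in range(1, max_adj_distance + 1):
                yield tok["form"], head["form"], distance
            elif tok["upos"] == "VERB" and distance == 1:
                yield tok["form"], head["form"], distance

# e.g. a parsed sentence containing "misty hues" would yield ("misty", "hues", 1),
# assuming the parser attaches the adjective to the noun.
</preformat>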
        <p>
          Identification of sensory modalities We applied multi-class classification with logistic
regression to map the words involved in synaesthetic metaphors onto seven classes [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] which
represent the five sensory modalities and the abstractness/concreteness dimension.
In order to maximise model performance, we focused on the three main hyper-parameters:
solver, penalty and regularization strength. We configured the LogisticRegression class for
multinomial logistic regression by setting the ‘multi_class’ argument to ‘multinomial’ and the
‘solver’ argument to a solver that supports multinomial logistic regression, namely ‘lbfgs’.
We evaluated the classification model using stratified 10-fold cross-validation. Stratification
ensures that each fold of the cross-validation has approximately the same distribution of
examples per class as the entire training dataset. In this setting, the multinomial logistic
regression model with the default penalty achieved an average classification accuracy of 69.9%.
We used two datasets: the first dataset (Dataset 1), issued from manual annotation, maps
words to sensory modalities [12][13][14]; the other dataset maps words to the concreteness and
abstractness dimensions based on individuals’ neural activation [15]. Although abstractness
and concreteness are irrelevant in the detection task, since words in synaesthetic metaphors can
have similar values for the two dimensions [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], we used these to predict the perceptual strength
of words that were unclassified in one of the sensory classes but could still be experienced
through one of the five modalities. For example, the word diamond belongs to the concrete
dimension, but can also be recognized as visual-related in more refined classification tasks.
• Dataset 1 (which actually includes three datasets developed over time) was introduced by
Lievers and Winter to investigate how sensory information is encoded across lexical
categories [16]. It includes 1,123 words: 423 adjectives and 400 nouns from Lynott and
Connell [12][13], and 300 verbs from Winter [14][17].
• Dataset 2 was introduced by Conca et al. [15] to measure neural responses to abstract and
concrete concepts. It includes 96 abstract and 96 concrete nouns: the abstract nouns are
categorized into emotions, cognitions, attitudes, and human actions, with 24 stimuli for each
category, and the concrete ones into biological entities and artifacts, with 48 stimuli for each
category.
        </p>
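        <p>The configuration described above can be reproduced, for instance, with scikit-learn as in the sketch below. Note that the paper does not specify how words are turned into features, so the character n-gram TF-IDF vectorizer used here is only an illustrative assumption, as are the function and variable names.</p>
        <preformat>
# Minimal sketch of the modality classifier: multinomial logistic regression
# ('lbfgs' solver, default L2 penalty) evaluated with stratified 10-fold
# cross-validation. The featurization (character n-gram TF-IDF over the word
# forms) is an assumption made for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

def evaluate_modality_classifier(words, labels):
    """Mean accuracy of the seven-class model over stratified 10-fold CV."""
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(multi_class="multinomial", solver="lbfgs",
                           penalty="l2", max_iter=1000),
    )
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(model, words, labels, cv=cv, scoring="accuracy").mean()

# `labels` ranges over the five sensory modalities plus the abstract and
# concrete dimensions, as described above.
</preformat>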
        <p>A relevant issue is that the datasets employed to map words to the five senses
contain considerable noise. For example, in Lynott and Connell’s dataset of nouns paired
with sensory domains, many nouns are highly abstract, and only some are directly related to
perception [13].</p>
        <p>Filtering Since the model encompasses constraints on the cross-modal directionality, we
applied the scheme provided by Ullmann (1957) for the directionality of the sensory mapping to
the extracted word pairs.</p>
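        <p>A minimal sketch of this filtering step is given below, assuming the commonly cited ordering of the modalities in Ullmann’s hierarchy (touch, taste, smell, hearing, sight, from lowest to highest) and keeping only the pairs whose source modality is lower than the target one; the function name and label strings are illustrative.</p>
        <preformat>
# Minimal sketch of the filtering step, assuming Ullmann's ordering of the
# modalities from lowest (touch) to highest (sight). Pairs whose two words
# share the same modality are discarded, as stated above.
ULLMANN_RANK = {"haptic": 0, "gustatory": 1, "olfactory": 2,
                "auditory": 3, "visual": 4}

def realizes_ullmann_scheme(source_modality, target_modality):
    """True if the pair follows the lower-to-higher directionality."""
    if source_modality == target_modality:
        return False
    return ULLMANN_RANK[target_modality] > ULLMANN_RANK[source_modality]

# e.g. realizes_ullmann_scheme("haptic", "visual")  -> True  ("misty hues")
#      realizes_ullmann_scheme("visual", "haptic")  -> False (reverse transfer)
</preformat>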
        <p>Classification We ran the Hugging Face zero-shot classification pipeline to classify the
resulting word pairs into metaphorical and non-metaphorical classes.</p>
        <p>Yin et al. [18] proposed a method for using pre-trained NLI models as zero-shot
sequence classifiers. Thus, we used Facebook’s bart-large-mnli model [19], a checkpoint
further trained on the MNLI (Multi-Genre Natural Language Inference) dataset, as the basic model for
the zero-shot classification. The candidate labels used were “metaphorical” and
“non-metaphorical”.</p>
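        <p>For reference, the sketch below shows how such a zero-shot classifier can be instantiated with the Hugging Face transformers library and the facebook/bart-large-mnli checkpoint; how the word pairs are formatted before being passed to the classifier is our assumption.</p>
        <preformat>
# Minimal sketch of the zero-shot classification step with the Hugging Face
# pipeline and the facebook/bart-large-mnli checkpoint. The plain
# "adjective noun" formatting of the word pair is an illustrative assumption.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def is_metaphorical(word_pair):
    """True when 'metaphorical' is the top-scoring candidate label."""
    result = classifier(word_pair,
                        candidate_labels=["metaphorical", "non-metaphorical"])
    return result["labels"][0] == "metaphorical"

# e.g. is_metaphorical("warm colour") or is_metaphorical("misty hues")
</preformat>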
        <p>The use of this pipeline, which has not been previously tested for synaesthetic metaphor, represents a first
attempt to automatically identify the occurrences of this specific metaphor type. In addition, it
can provide insight into the complex relationship between metaphor and synaesthetic metaphor.
The Vrije Universiteit Amsterdam Metaphor Corpus (VUA) is the largest available corpus
hand-annotated for all metaphorical language use, regardless of lexical field or source domain.
The VUA corpus was annotated to detect indirect, direct, and implicit metaphors, personification,
metaphor signals and borderline cases. It does not include synaesthetic metaphors [20].</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Pipeline evaluation</title>
      <p>The pipeline has been tested on the dataset of the Hong Kong museum, which yielded the
highest number of occurrences of synaesthetic metaphor in the manual annotation phase. In
particular, starting from the results of the syntactic analysis (see Table 3), we focused on the
steps which rely on less established tools, namely step 2 (Automatic annotation of sensory
modalities) and 4 (Zero-shot classification). The output of these two steps has been compared
with the manually annotated data to assess the performance of the automatic tools, and gain
insight from discrepancies.</p>
      <p>Automatic annotation of sensory modalities The multi-class classification was used to
annotate the 133 word pairs, extracted from the catalogue of the museum, which belonged
to Ullmann schemes (see Table 4). To evaluate the obtained classification,
we considered not only the pairs corresponding to the actual metaphors found by the human
annotator, but the overall set of pairs obtained by filtering the candidate words issued from
the syntactic analysis against the schemes identified by Ullmann. By doing so, we can assess the
performance of the classifier within the context of the real pipeline. The accuracy of the
multi-class sensory modality classification was measured by comparing the manual classification of
sensory modalities with the classes returned by the automatic annotation.</p>
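      <p>The comparison can be summarised as plain accuracy over the candidate pairs, as in the short sketch below; the variable names are illustrative.</p>
      <preformat>
# Minimal sketch: compare the automatically assigned sensory modality schemes
# with the manual annotation and report plain accuracy. Names are illustrative.
from sklearn.metrics import accuracy_score

def scheme_accuracy(manual_schemes, predicted_schemes):
    """Fraction of candidate pairs whose modality scheme matches the gold one."""
    return accuracy_score(manual_schemes, predicted_schemes)

# With 54 of the 133 schemes correctly assigned, the accuracy is about 0.406.
</preformat>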
      <p>The number of correctly assigned sensory modality schemes was 54 out of 133 (40.60%).
Given the relevance of the syntactic patterns in the detection pipeline, we also report their
distribution according to the syntactic scheme (the number of dashes indicates the distance between
the nodes):
• Adjective - Noun: haptic → visual: 41 pairs (of which 5 synaesthetic metaphors, 13%)
• Adjective - Noun: haptic → taste: 1 pair (of which 1 synaesthetic metaphor, 100%)
• Adjective - Noun: haptic → auditory: 1 pair (of which 1 synaesthetic metaphor, 100%)
• Adjective - - Noun: haptic → visual: 3 pairs (of which 3 synaesthetic metaphors, 100%)
• Adjective - - - Noun: haptic → visual: 8 pairs (of which 2 synaesthetic metaphors, 25%)
The multi-class classification model suffers from the noise of the lexical resources employed,
largely related to the documented cross-modality of adjectives and nouns, errors in vocabulary
sampling, or annotator misunderstandings [17]. Despite this, sensory features remain essential
for recognition.</p>
      <p>Zero-shot classification The zero-shot model was tested on metaphorical and
non-metaphorical pairs.</p>
      <p>We tested the accuracy of the zero-shot classification model on a set of 40 metaphorical (20)
and non-metaphorical (20) pairs mined from the Hong Kong museum’s catalogue (issued from
manual annotation). The results can be found in Table 5.</p>
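      <p>For completeness, a short sketch of how precision and recall per class can be computed for this test set is given below; the use of scikit-learn here is our illustrative choice, and the variable names are hypothetical.</p>
      <preformat>
# Minimal sketch: per-class precision and recall for the zero-shot predictions
# on the 40-pair test set (20 metaphorical, 20 non-metaphorical). The use of
# scikit-learn is an illustrative assumption.
from sklearn.metrics import precision_recall_fscore_support

def per_class_scores(gold_labels, predicted_labels):
    """Return {label: (precision, recall)} for the two classes."""
    labels = ["metaphorical", "non-metaphorical"]
    p, r, _, _ = precision_recall_fscore_support(
        gold_labels, predicted_labels, labels=labels, zero_division=0)
    return dict(zip(labels, zip(p, r)))
</preformat>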
      <p>
        Moreover, the zero-shot classification model was also evaluated on a set of 63 metaphorical
(34) and non-metaphorical (29) pairs provided by Su et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. While the latter dataset has not
been extracted from the art domain, it provides the only annotated resource for English which
belongs to a related domain, namely poetic expression.
      </p>
      <p>For the Hong Kong museum, precision (P) and recall (R) of the metaphorical pairs are:
• P = 100%
• R = 5.8%
Precision (P) and recall (R) of the non-metaphorical pairs are:
• P = 100%
• R = 25%</p>
      <p>For the Su et al. dataset (2019), precision (P) and recall (R) of the non-metaphorical pairs are:
• P = 74%
• R = 100%
Precision (P) and recall (R) of the metaphorical pairs are:
• P = 79%
• R = 100%
As can be observed from these data, the zero-shot classification model suffers from a very low
recall score on the Hong Kong pairs, while it works well for the pairs from Su et al. The
recall for the metaphorical pairs from the Hong Kong dataset is 5.8%, while for
the Su et al. dataset it is 100%. This disparity in scoring may be related to the presence of iconic
metaphors in Su et al.’s dataset (2019).</p>
      <p>Discussion Considering the difficulties that emerged in the application of the pipeline to real data,
the preliminary results reported above suggest different research lines.</p>
        <p>
          First of all, the quality of resources and their suitability for this task require further investigation.
As acknowledged by the literature, in fact, the mapping of words onto sensory modalities is not
fully reliable, and is affected by the method by which the mapping has been obtained. Only
in half of the cases, in fact, was the sensory modality scheme correctly assigned by the
multi-class classifier, partly due to wrong mappings in the datasets employed for the task.
Secondly, the relationship with the models and resources for the detection and interpretation of
metaphors appears intricate. Metaphors are usually characterized by abstract concepts, while
synaesthetic metaphors are intrinsically rooted in concreteness, being related to the perception
of the physical world through the senses. The significance of the better performance of the zero-shot
classification on the dataset of synaesthetic metaphors by [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] is difficult to assess, since this
dataset has been partly obtained with synthetic methods from a corpus of occurrences extracted
from a different, yet related domain, namely, poetry.
        </p>
        <p>Finally, this preliminary investigation points out the importance of specific modalities. For
example, if we observe the data reported in Table 4, which reports the sensory modality schemes
of the candidate word pairs, the primary role of the Hearing-Sight scheme, and of Sight in
general, clearly emerges. This prevalence is confirmed also by the manual annotation. On the
one side, this represents an opportunity for creating alternative sensory experiences of art by
replacing sight with hearing; other examples, associated with the tactility of painting surfaces,
provide a basis for enhancing the experience of art by touch (see the discussion in Section
3.1). On the other side, these data orient the research towards specific, more frequent patterns
and schemes, suggesting that the creation and enhancement of resources should address these
sensory modalities to improve the classification tasks.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this paper, we presented a preliminary model and pipeline for the detection and interpretation
of synaesthetic metaphors in artwork description. In order to explore the occurrence and
types of these metaphors in real data, we annotated a corpus of texts extracted from museum
catalogues. Moreover, we designed a pipeline which leverages automatic methods for identifying
synaesthetic metaphors based on syntactic, lexical and semantic features. This pipeline has
been implemented with state-of-the-art tools and evaluated on a set of real data. Although
preliminary, this experiment confirms the relevance of this phenomenon and its potential for
implementing alternative ways of experiencing art, from a universal access perspective.
For future research, we intend to improve the quality and coverage of the resources used for
the classification of sensory modalities by crowdsourcing annotations on museum data. Also,
we will investigate the emotional valence and range conveyed by synaesthetic metaphors, in
order to explore their potential in a more comprehensive way.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Newmark</surname>
          </string-name>
          ,
          <article-title>Stephen Ullmann. The Principles of Semantics</article-title>
          . New York: Philosophical Library,
          <year>1957</year>
          . 342 pp,
          <source>Philosophy of Science</source>
          <volume>26</volume>
          (
          <year>1959</year>
          )
          <fpage>163</fpage>
          -
          <lpage>164</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Osgood</surname>
          </string-name>
          ,
          <article-title>The cognitive dynamics of synesthesia and metaphor, in: Cognition and figurative language</article-title>
          ,
          <source>Routledge</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>203</fpage>
          -
          <lpage>238</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lakoff</surname>
          </string-name>
          ,
          <article-title>The neural theory of metaphor</article-title>
          , Available at SSRN 1437794 (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>A model of synesthetic metaphor interpretation based on cross-modality similarity</article-title>
          ,
          <source>Computer Speech &amp; Language</source>
          <volume>58</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Tekiroğlu</surname>
          </string-name>
          , G. Özbal,
          <string-name>
            <given-names>C.</given-names>
            <surname>Strapparava</surname>
          </string-name>
          ,
          <article-title>Exploring sensorial features for metaphor identification</article-title>
          ,
          <source>in: Proceedings of the Third Workshop on Metaphor in NLP</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>39</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F. S.</given-names>
            <surname>Lievers</surname>
          </string-name>
          ,
          <article-title>Synaesthesia: A corpus-based study of cross-modal directionality</article-title>
          ,
          <source>Functions of language 22</source>
          (
          <year>2015</year>
          )
          <fpage>69</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Cunningham</surname>
          </string-name>
          ,
          <article-title>GATE, a general architecture for text engineering</article-title>
          ,
          <source>Computers and the Humanities</source>
          <volume>36</volume>
          (
          <year>2002</year>
          )
          <fpage>223</fpage>
          -
          <lpage>254</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.-C.</given-names>
            <surname>de Marneffe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nivre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zeman</surname>
          </string-name>
          , Universal dependencies,
          <source>Computational linguistics 47</source>
          (
          <year>2021</year>
          )
          <fpage>255</fpage>
          -
          <lpage>308</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bouma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Seddah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zeman</surname>
          </string-name>
          ,
          <article-title>From raw text to enhanced universal dependencies: The parsing shared task at iwpt 2021</article-title>
          ,
          <source>in: Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT</source>
          <year>2021</year>
          ),
          <year>2021</year>
          , pp.
          <fpage>146</fpage>
          -
          <lpage>157</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Villa</surname>
          </string-name>
          ,
          <article-title>Udeasy: a tool for querying treebanks in conll-u format</article-title>
          ,
          <source>in: Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-10)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>16</fpage>
          -
          <lpage>19</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Pranckevičius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Marcinkevičius</surname>
          </string-name>
          ,
          <article-title>Application of logistic regression with part-of-thespeech tagging for multi-class text classification</article-title>
          ,
          <source>in: 2016 IEEE 4th workshop on advances in information, electronic and electrical engineering (AIEEE)</source>
          , IEEE,
          <year>2016</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] D. Lynott, L. Connell, Modality exclusivity norms for 423 object properties, Behavior Research Methods 41 (2009) 558–564.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] D. Lynott, L. Connell, Modality exclusivity norms for 400 nouns: The relationship between perceptual experience and surface word form, Behavior Research Methods 45 (2013) 516–526.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] B. Winter, Taste and smell words form an affectively loaded and emotionally flexible part of the English lexicon, Language, Cognition and Neuroscience 31 (2016) 975–988.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] F. Conca, E. Catricalà, M. Canini, A. Petrini, G. Vigliocco, S. F. Cappa, P. A. Della Rosa, In search of different categories of abstract concepts: an fMRI adaptation study, Scientific Reports 11 (2021) 1–11.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] F. S. Lievers, B. Winter, Sensory language across lexical categories, Lingua 204 (2018) 45–61.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] B. Winter, The sensory structure of the English lexicon, University of California, Merced, 2016.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] W. Yin, J. Hay, D. Roth, Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach, arXiv preprint arXiv:1909.00161 (2019).</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] N. Reimers, I. Gurevych, Sentence-BERT: Sentence embeddings using siamese BERT-networks, arXiv preprint arXiv:1908.10084 (2019).</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] T. Krennmayr, G. Steen, VU Amsterdam Metaphor Corpus, in: Handbook of Linguistic Annotation, Springer, 2017, pp. 1053–1071.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>