<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>CLEF 2017: Multimodal Spatial Role Labeling Task Working Notes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Parisa Kordjamshidi</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Taher Rahgooy</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marie-Francine Moens</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>James Pustejovsky</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Umar Manzoor</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kirk Roberts</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Brandeis University</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Katholieke Universiteit Leuven</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>The University of Texas Health Science Center at Houston</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Tulane University</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>The extraction of spatial semantics is important in many real-world applications such as geographical information systems, robotics and navigation, semantic search, etc. Moreover, spatial semantics are the most relevant semantics related to the visualization of language. The goal of the multimodal spatial role labeling task is to extract spatial information from free text while exploiting accompanying images. This task is a multimodal extension of the spatial role labeling task, which was previously introduced as a semantic evaluation task in the SemEval series. The multimodal aspect of the task makes it appropriate for the CLEF lab series. In this paper, we provide an overview of the task of multimodal spatial role labeling. We describe the task, sub-tasks, corpora, annotations, evaluation metrics, and the results of the baseline and the task participant.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The multimodal spatial role labeling task (mSpRL) is a multimodal extension of the
spatial role labeling shared task in SemEval-2012 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Although extensions of the data and the task were proposed in more extensive schemes in Kolomiyets et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
and Pustejovsky et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], the SemEval-2012 data was more appropriate for the goal
of incorporating the multimodality aspect. SemEval-2012 annotates the CLEF IAPR TC-12
Image Benchmark [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which includes touristic pictures along with a textual description
of the pictures. The descriptions are originally provided in multiple languages, though
we use the English annotations for the purpose of our research.
      </p>
      <p>
        The goal of mSpRL is to develop natural language processing (NLP) methods for
extraction of spatial information from both images and text. Extraction of spatial
semantics is helpful for various domains such as semantic search, question answering,
geographical information systems, and even in robotic settings when giving robots
navigational instructions or instructions for grabbing and manipulating objects. It is also
essential for some specific tasks such as text to scene conversion (or vice-versa), scene
understanding as well as general information retrieval tasks when using a huge amount
of available multimodal data from various resources. Moreover, we have noticed an
increasing interest in the extraction of spatial information from medical images that are
accompanied by natural language descriptions. The textual descriptions of a subset of
images are annotated with spatial roles according to spatial role labeling annotation
scheme [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. We should note that considering the vision and language modalities and
combining the two media has become a very popular research challenge nowadays.
We distinguish our work and our data from the existing research related to vision and
language (inter alia, [
        <xref ref-type="bibr" rid="ref11 ref3">11, 3</xref>
        ]) in considering explicit formal spatial semantics
representations and providing direct supervision for machine learning techniques by our annotated
data. The formal meaning representation would help to exploit explicit spatial
reasoning mechanisms in the future. In the rest of this overview paper, we introduce the task
in Section 2; we describe the annotated corpus in Section 3; the baseline and the
participant systems are described in Section 4; Section 5 reports the results and the evaluation
metrics. Finally, we conclude in Section 6.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Task Description</title>
      <p>
        The task of text-based spatial role labeling (SpRL) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] aims at mapping natural language
text to a formal spatial meaning representation. This formal representation includes
specifying spatial entities based on cognitive linguistic concepts and the relationships
between those entities, in addition to the type of relationships in terms of qualitative
spatial calculi models. A concise ontology of the main target concepts is drawn in
Figure 1 and the details are described later in this section. The applied ontology includes
a subset of concepts proposed in the scheme described in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. We divide this task into
three sub-tasks. To clarify these sub-tasks, we use the example of Figure 2. This figure
shows a photograph and a few English sentences that describe it. Given the first
sentence “About 20 kids in traditional clothing and hats waiting on stairs.”, we need to do
the following tasks:
      </p>
      <p>[Figure 1: a concise ontology of the target concepts. A spatial relation is composed of a trajector, a landmark, and a spatial indicator (a PP in the sentence); its type specializes into Region (with RCC values EC, DC, PO, PP, EQ), Direction (front, back, above, below, left, right), and Distance.]</p>
      <p>
– Sub-task 1: The first task is to identify the phrases that refer to spatial entities and
classify their roles. The spatial roles include a) spatial indicators, b) trajectors, c)
landmarks. Spatial indicators indicate the existence of spatial information in a
sentence. A trajector is an entity whose location is described, and a landmark is a reference
object for describing the location of a trajector. In the above-mentioned sentence,
the location of “about 20 kids”, the trajector, is described with respect to
“the stairs”, the landmark, using the preposition “on”, the spatial
indicator. These are examples of the spatial roles that we aim to extract from the sentence.
– Sub-task 2: The second sub-task is to identify the relations/links between the
spatial roles. Each spatial relation is represented as a triplet of (spatial-indicator,
trajector, landmark). Each sentence can contain multiple relations and individual phrases
can even take part in multiple relations. Furthermore, occasionally roles can be
implicit in the sentence (i.e., a null item in the triplet). In the above example, we have
the triplet (kids, on, stairs) that forms a spatial relation/link between the three
above-mentioned roles. Recognizing the spatial relations is very challenging because there
could be several spatial roles in the sentence and the model should be able to
recognize the right connections. For example, (waiting, on, stairs) is a wrong relation
here because “kids”, not “waiting”, is the trajector in this sentence.
– Sub-task 3: The third sub-task is to recognize the type of the spatial triplets. The
types are expressed in terms of multiple formal qualitative spatial calculi models
similar to Figure 1. At the most coarse-grained level, the relations are classified into
three categories of topological (regional), directional, or distal. Topological
relations are classified according to the well-known RCC (regional connection
calculus) qualitative representation. A variation of RCC8 with five relations that is shown
in Figure 1 includes Externally connected (EC), Disconnected (DC), Partially
overlapping (PO), Proper part (PP), and Equality (EQ). The data is originally annotated
by RCC8, which distinguishes Tangential proper part (TPP) and Tangential
proper part inverse (TPPI) instead of a single Proper part (PP). For this lab the original
RCC8 annotations are used. In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], these categories were merged because each has few examples in the corpus and
because they are semantically closely related. Directional relations include six
relative directions: left, right, above, below,
back, and front. In the above example, we can state that the type of the relation between
the roles in the triplet (kids, on, stairs) is “above”. In general, we can assign multiple
types to each relation. This is due to the polysemy of spatial prepositions as well
as the difference between the level of specificity of spatial relations expressed in
the language compared to formal spatial representation models. However, multiple
assignments are not frequently made in our dataset.
      </p>
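      <p>
        To make the target output of the three sub-tasks concrete, the triplet and its type labels can be sketched as a small data structure. The following Python sketch is ours, for illustration only; the class and field names are assumptions, not an official schema of the lab.

```python
# Illustrative sketch (ours, not official lab code) of the target output of
# the three sub-tasks: roles, the relation triplet, and its type labels.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SpatialRelation:
    # Sub-task 2: the (spatial-indicator, trajector, landmark) triplet;
    # None marks a role that is implicit in the sentence.
    indicator: Optional[str]
    trajector: Optional[str]
    landmark: Optional[str]
    # Sub-task 3: coarse-grained and fine-grained type labels;
    # a relation may carry multiple types.
    general_types: List[str] = field(default_factory=list)
    specific_values: List[str] = field(default_factory=list)


# The example sentence "About 20 kids ... waiting on stairs." yields:
relation = SpatialRelation(
    indicator="on",
    trajector="about 20 kids",
    landmark="stairs",
    general_types=["REGION", "DIRECTION"],
    specific_values=["EC", "above"],
)
```

        Note that the multiple type assignment (both a topological and a directional label) reflects the polysemy discussed above.
      </p>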
      <p>
        The task that we describe here is similar to the specifications that are provided in
Kordjamshidi et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]; however, the main point of this CLEF lab was to provide an
additional resource of information (the accompanying images) and investigate the ways
that the images can be exploited to improve the accuracy of the text-based spatial
extraction models. The way that the images can be used is left open to the participants.
Previous research has shown that this task is very challenging [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], particularly given the
small set of available training data, and we aim to investigate whether using the images that
accompany textual data can improve the recognition of the spatial objects and their
relations. Specifically, our hypothesis is that the images could improve the recognition of
the type of relations given that the geometrical features of the boundaries of the objects
in the images are closer to the formal qualitative representations of the relationships
compared to the counterpart linguistic descriptions.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Annotated Corpora</title>
      <p>
        The annotated data is a subset of the IAPR TC-12 Image Benchmark [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It contains
613 text files with a total of 1,213 sentences. The original corpus was available without
copyright restrictions. The corpus contains 20,000 images taken by tourists with
textual descriptions in up to three languages (English, German, and Spanish). The texts
describe objects and their absolute or relative positions in the image. This makes the
corpus a rich resource for spatial information. However, the descriptions are not always
limited to spatial information, which makes the task more challenging. The data has
been annotated with the roles and relations that were described in Section 2, and the
annotated data can be used to train machine learning models to do this kind of extraction
automatically. The text has been annotated in previous work (see [
        <xref ref-type="bibr" rid="ref6 ref7">7, 6</xref>
        ]). The role
annotations are provided on phrases rather than single words. The statistics about the data
are given in Table 1. For this lab, we augmented the textual spatial annotations with a
reference to the aligned images in the XML annotations and fixed some of the annotation
mistakes to provide a cleaner version of the data.
      </p>
    </sec>
    <sec id="sec-4">
      <title>System Descriptions</title>
      <p>
        We, as organizers of the lab, provided a baseline inspired by previous research for the
sake of comparison. The shared task had one official participant who submitted two
systems. In this section, we describe the submitted systems and the baseline.
– Baseline: For sub-task 1 and classifying each role (Spatial Indicator, Trajector,
and Landmark), we created a sparse perceptron binary classifier that uses a set of
lexical, syntactical, and contextual features, such as lexical surface patterns,
headwords of phrases, part-of-speech tags, dependency relations, subcategorization, etc.
For classifying the spatial relations, we first trained two binary classifiers on pairs
of phrases. One classifier detects Trajector-SpatialIndicator pairs and another
detects Landmark-SpatialIndicator pairs. We used the spatial indicator classifier from
sub-task 1 to find the indicator candidates and considered all noun phrases as role
candidates. Each combination of role and spatial-indicator candidates is
considered a candidate pair, and the pair classifiers are trained on these pairs. We used a number
of relational features between the pairs of phrases, such as distance, before, etc.,
to classify them. In the final phase, we combined the predicted phrase pairs that
have a common spatial indicator in order to create the final relation/triplet for
sub-task 2. For example, if the pair (kids, on) is classified as Trajector-SpatialIndicator and
(stairs, on) is predicted as Landmark-SpatialIndicator, then we generate the triplet
(on, kids, stairs) as a spatial triplet, since both trajector and landmark relate to the
same preposition “on”. The features of this baseline model are inspired by the work
in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. For sub-task 3 and training general type and specific value classifiers, we
used a very naive pipeline model as the baseline. In this pipeline, the predicted
triplets from the last stage are used for training the relation types. For these type
classifiers, the phrase features of each argument of the triplets are simply
concatenated and used as features. Obviously, we miss a large number of relations at the
stage of spatial relation extraction in sub-task 2 since we depend on its recall.
– LIP6: The LIP6 group built a system for sub-task 3 that classifies relation types. For
sub-tasks 1 and 2, the model proposed in Roberts and Harabagiu [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] was used.
Particularly, an implementation of that model in the Saul [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] language/library was
applied. These models assign roles to single words rather than phrases.
However, since our evaluation counts overlapping phrases as matches, the correctly classified
single words will be counted as correct predictions. For every relation, an
embedding is built with available data: the textual relation triplet and visual features from
the associated image. Pre-trained word embeddings are used [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] to represent the
trajector and landmark and a one-hot vector indicates which spatial indicator is
used; the visual features and embeddings from the segmented regions of the
trajectors and landmarks are extracted and projected into a low dimensional space.
Given those generated embeddings, a linear SVM model is trained to classify the
spatial relations while the embeddings remain fixed. Several experiments were conducted
to try various classification modes and to discuss the effect of the model parameters,
and particularly to investigate the impact of the visual modality. As the best-performing
model ignores the visual modality, these results highlight that exploiting
multimodal data to enhance natural language processing is a difficult task and
requires more effort in terms of model design.
      </p>
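      <p>
        The baseline's final composition step, combining predicted Trajector-SpatialIndicator and Landmark-SpatialIndicator pairs that share an indicator, can be sketched as follows. This is our re-implementation for illustration only; the organizers' actual code may differ (in particular, in real data indicators should be matched by token position rather than by surface form, an assumption we simplify away here).

```python
# Hedged sketch (ours) of the baseline's triplet-composition step for
# sub-task 2: phrase pairs predicted positive by the two pair classifiers
# are merged into triplets whenever they share a spatial indicator.
from itertools import product


def compose_triplets(trajector_pairs, landmark_pairs):
    """trajector_pairs: (trajector, indicator) pairs classified positive;
    landmark_pairs: (landmark, indicator) pairs classified positive.
    Returns (indicator, trajector, landmark) triplets."""
    triplets = []
    for (tr, ind_t), (lm, ind_l) in product(trajector_pairs, landmark_pairs):
        if ind_t == ind_l:  # common spatial indicator
            triplets.append((ind_t, tr, lm))
    return triplets


# Example from the paper: (kids, on) and (stairs, on) share "on",
# so the triplet (on, kids, stairs) is generated.
triplets = compose_triplets([("kids", "on")], [("stairs", "on")])
```

        The sketch also makes the recall dependence of sub-task 3 visible: any triplet missed here can never receive a type label in the pipeline.
      </p>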
    </sec>
    <sec id="sec-5">
      <title>Evaluation Metrics and Results</title>
      <p>About 50% of the data was used as the test set for the evaluation of the systems. The
evaluation metrics were precision, recall, and F1-measure, defined as:</p>
      <p>recall = TP / (TP + FN); precision = TP / (TP + FP); F1 = (2 × recall × precision) / (recall + precision),</p>
      <p>where TP (true positives) is the number of predicted components that match the ground
truth, FP (false positives) is the number of predicted components that do not match
the ground truth, and FN (false negatives) is the number of ground truth components
that do not match the predicted components. These metrics are used to evaluate the
performance on recognizing each type of role, the relations and each type of
relation separately. Since the annotations are provided based on phrases, the overlapping
phrases are counted as correct predictions. The evaluation with exact matching between
phrases would provide lower performance than the reported ones. The relation type
evaluation for sub-task 3 includes coarse- and fine-grained metrics. The coarse-grained
metric (overall-CG) averages over the labels of region, direction, and distance. The
fine-grained metric (overall-FG) shows the performance over all lower-level nodes in
the ontology including the RCC8 types (e.g., EC) and directional relative types (e.g.,
above, below).</p>
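      <p>
        As a small worked example, the metrics above can be computed as follows. This sketch is ours; the overlap test assumes spans are half-open token-offset intervals, which is our assumption rather than the official scorer's definition.

```python
# Sketch (ours) of the evaluation metrics; the counts are illustrative.
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and
    false negative counts, guarding against empty denominators."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * recall * precision / (recall + precision)
          if recall + precision else 0.0)
    return precision, recall, f1


def overlaps(pred, gold):
    """Relaxed matching: a predicted phrase counts as correct if its span
    overlaps the ground-truth span. Spans are (start, end) half-open
    token offsets (an assumption about the representation)."""
    return pred[0] < gold[1] and gold[0] < pred[1]


# e.g. 8 predictions match the ground truth, 2 are spurious, and 4 gold
# components are missed: precision 8/10, recall 8/12.
precision, recall, f1 = prf1(tp=8, fp=2, fn=4)
```

        Under exact matching, `overlaps` would be replaced by span equality, which, as noted above, yields lower scores.
      </p>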
      <p>
        Table 2 shows the results of our baseline system that was described in the previous
section. Though the results of the roles and relation extraction are fairly comparable to
the state of the art [
        <xref ref-type="bibr" rid="ref14 ref9">14, 9</xref>
        ], the results of the relation type classifiers are less mature
because a simple pipeline, described in Section 4, was used. Table 3 shows the results
of the participant systems.
      </p>
      <p>
        As mentioned before, LIP6 uses the model suggested in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and its implementation
in Saul [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] for sub-task 1 and sub-task 2. Its focus is on designing a model for
sub-task 3. The experimental results using textual embeddings alone are shown under “text
only” in the table, and a set of results are reported that exploit the accompanying images
and train the visual embeddings from the corpora. The LIP6 system significantly
outperforms the provided baseline for the relation type classifiers. Contrary to our expectations,
the results that use the visual embeddings are worse than those that ignore the
images. In addition to the submitted systems, the LIP6 team improved their results
slightly by using a larger feature size in their dimensionality reduction procedure with
their text-only features. This model outperforms their submitted systems and is listed in
Table 3 as Best model.
      </p>
      <sec id="sec-5-1">
        <title>Results</title>
        <p>[Table residue: only the column headers “Label, SP, TR, LM” and the row label “Overall” survive here; the numeric results of Tables 2 and 3 are not recoverable from this version.]</p>
      </sec>
      <sec id="sec-5-2">
        <title>Discussion</title>
        <p>
          Confirming the previous research results, the results of the LIP6 team show that
this task is challenging, particularly when using this small set of training data. LIP6 was
able to outperform the provided baseline using the textual embeddings for relation types
but, on the contrary, the results of combining the images dropped the performance. This
result indicates that integrating the visual information needs more investigation;
otherwise, it can only add noise to the learning system. One very basic question to be
answered is whether the images of this specific dataset can potentially provide
complementary information or help resolve ambiguities in the text at all; this investigation
might need a human analysis. Although the visual embeddings did not help the best
participant system with the current experiments, using other alternative embeddings
trained from large corpora might help improve this task. Given the current interest of
the vision and language communities in combining the two modalities and the benefits
that this trend will have for information retrieval, there are many new corpora
becoming available (e.g., [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]) which can be valuable sources of information for obtaining
appropriate joint features. There is a separate annotation on the same benchmark that
includes the ground-truth of the co-references in the text and image [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. This annotation
has been generated for the co-reference resolution task, but it seems very useful to
apply on top of our spatial annotations for finding better alignments between spatial
roles and image segments. In general, current related language and vision resources do
not consider formal spatial meaning representation but can be used indirectly to train
informative representations or serve as a source of indirect supervision for the extraction
of formal spatial meaning.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>The goal of the multimodal spatial role labeling lab was to provide a benchmark to
investigate how adding grounded visual information can help in understanding the spatial
semantics of natural language text and mapping language to a formal spatial meaning
representation. The prior hypothesis has been that the visual information should help
the extraction of such semantics because spatial semantics are the most relevant
semantics for visualization and the geometrical information conveyed in the vision media
should be able to easily help in the disambiguation of spatial meaning. Although there are
many recent research works on combining vision and language, none of them consider
obtaining a formal spatial meaning representation as a target nor provide supervision
for training such representations. However, the experimental results of our mSpRL lab
participant show that even given ground truth segmented objects in the images and
having the exact geometrical information about their relative positions, adding useful
information for understanding the spatial meaning of the text is very challenging. The
experimental results indicate that using the visual embeddings and the
similarity between the objects in the image and the spatial entities in the text can end up adding
noise to the learning system, reducing the performance. However, we believe our prior
hypothesis is still valid, but finding an effective way to exploit vision for spatial
language understanding, particularly obtaining a formal spatial representation appropriate
for explicit reasoning, remains an important research question.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Grubinger</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clough</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , Müller, H.,
          <string-name>
            <surname>Deselaers</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>The IAPR TC-12 benchmark: A new evaluation resource for visual information systems</article-title>
          .
          <source>In: Proceedings of the International Conference on Language Resources and Evaluation (LREC)</source>
          . pp.
          <fpage>13</fpage>
          -
          <lpage>23</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Kazemzadeh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ordonez</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matten</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berg</surname>
            ,
            <given-names>T.L.</given-names>
          </string-name>
          :
          <article-title>Referit game: Referring to objects in photographs of natural scenes</article-title>
          .
          <source>In: EMNLP</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Kiros</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salakhutdinov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zemel</surname>
            ,
            <given-names>R.S.</given-names>
          </string-name>
          :
          <article-title>Unifying visual-semantic embeddings with multimodal neural language models</article-title>
          .
          <source>CoRR abs/1411.2539</source>
          (
          <year>2014</year>
          ), http://arxiv.org/abs/1411.2539
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Kolomiyets</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kordjamshidi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moens</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bethard</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Semeval-2013 task 3: Spatial role labeling</article-title>
          .
          <source>In: Second Joint Conference on Lexical and Computational Semantics (*SEM)</source>
          , Volume
          <volume>2</volume>
          :
          <source>Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval</source>
          <year>2013</year>
          ). pp.
          <fpage>255</fpage>
          -
          <lpage>262</lpage>
          . Atlanta, Georgia, USA (
          <year>June 2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kordjamshidi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bethard</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moens</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          :
          <article-title>SemEval-2012 task 3: Spatial role labeling</article-title>
          .
          <source>In: Proceedings of the First Joint Conference on Lexical and Computational Semantics: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval)</source>
          .
          <source>vol. 2</source>
          , pp.
          <fpage>365</fpage>
          -
          <lpage>373</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Kordjamshidi</surname>
            , P., van Otterlo,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moens</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Spatial role labeling annotation scheme</article-title>
          . In: Pustejovsky, J., Ide, N. (eds.)
          <source>Handbook of Linguistic Annotation</source>
          . Springer Verlag (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Kordjamshidi</surname>
            , P., van Otterlo,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moens</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          :
          <article-title>Spatial role labeling: task definition and annotation scheme</article-title>
          . In: Calzolari, N., Choukri, K., Maegaard, B., et al. (eds.)
          <source>Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10)</source>
          . pp.
          <fpage>413</fpage>
          -
          <lpage>420</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kordjamshidi</surname>
            , P., van Otterlo,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moens</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          :
          <article-title>Spatial role labeling: towards extraction of spatial relations from natural language</article-title>
          .
          <source>ACM - Transactions on Speech and Language Processing</source>
          <volume>8</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>36</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kordjamshidi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moens</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          :
          <article-title>Global machine learning for spatial ontology population</article-title>
          .
          <source>Web Semant.</source>
          <volume>30(C)</volume>
          ,
          <fpage>3</fpage>
          -
          <lpage>21</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kordjamshidi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Saul: Towards declarative learning based programming</article-title>
          . In:
          <source>Proc. of the International Joint Conference on Artificial Intelligence (IJCAI)</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Krishna</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Groth</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hata</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kravitz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalantidis</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shamma</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernstein</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Visual genome: Connecting language and vision using crowdsourced dense image annotations</article-title>
          .
          <source>International Journal of Computer Vision</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Pennington</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Socher</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manning</surname>
            ,
            <given-names>C.D.</given-names>
          </string-name>
          :
          <article-title>GloVe: Global vectors for word representation</article-title>
          . In:
          <source>EMNLP</source>
          . vol.
          <volume>14</volume>
          , pp.
          <fpage>1532</fpage>
          -
          <lpage>1543</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Pustejovsky</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kordjamshidi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moens</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Levine</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dworman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yocum</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>SemEval-2015 task 8: SpaceEval</article-title>
          .
          In:
          <source>Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)</source>
          , Denver, Colorado, 4-5 June 2015. pp.
          <fpage>884</fpage>
          -
          <lpage>894</lpage>
          . ACL (
          <year>2015</year>
          ), https://lirias.kuleuven.be/handle/123456789/500427
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Roberts</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harabagiu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>UTD-SpRL: A joint approach to spatial role labeling</article-title>
          .
          In:
          <source>*SEM 2012: The First Joint Conference on Lexical and Computational Semantics</source>
          , Volume
          <volume>2</volume>
          :
          <source>Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval'12)</source>
          . pp.
          <fpage>419</fpage>
          -
          <lpage>424</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>