<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>From Language towards Formal Spatial Calculi</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Parisa Kordjamshidi</string-name>
          <email>parisa.kordjamshidi@cs.kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martijn Van Otterlo</string-name>
          <email>martijn.vanotterlo@cs.kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marie-Francine Moens</string-name>
          <email>sien.moens@cs.kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Katholieke Universiteit Leuven, Departement Computerwetenschappen</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>We consider mapping unrestricted natural language to formal spatial representations. We describe ongoing work on a two-level machine learning approach. The first level is linguistic, and deals with the extraction of spatial information from natural language sentences; we call it spatial role labeling. The second level is ontological in nature, and deals with mapping this linguistic, spatial information to formal spatial calculi. Our main obstacles are the lack of available annotated data for training machine learning algorithms for these tasks, and the difficulty of selecting an appropriate abstraction level for the spatial information. For the linguistic part, we approach the problem in a gradual way. We make use of existing resources such as The Preposition Project (TPP) and the validation data of the General Upper Model (GUM) ontology, and we show some computational results. For the ontological part, we describe machine learning challenges and discuss our proposed approach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
An essential function of language is to convey spatial relationships between
objects and their relative locations in a space. Extracting this information is a
challenging problem in robotics, navigation, query answering systems, etc. [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Our research considers
the extraction of spatial information in a multimodal environment. We want to
represent spatial information using formal representations that allow spatial
reasoning. An example of an interesting multimodal environment is the domain of
navigation where we expect a robot to follow navigation instructions. By placing
a camera on the robot, it should be able to recognize visible objects and their
location. In this context, mapping natural language to a formal spatial
representation [
        <xref ref-type="bibr" rid="ref4">4</xref>
] has several advantages. First, generating language from vision, and conversely
visualizing language, is more feasible if a formal intermediate layer
is employed [
        <xref ref-type="bibr" rid="ref16">16</xref>
]. Second, applying the same representation for extraction from
image/video data allows combining multimodal features for better recognition
and disambiguation in each modality. Finally, a unified representation for various
modalities enables spatial reasoning based on multimodal information.
In our work we identify two main layers of information (see also [
In our work we identify two main layers of information (see also [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]):
1) a linguistic layer, in which (unrestricted) natural language is mapped onto
ontological structures that convey spatial information, and 2) a formal layer, in
which the ontological information is mapped onto a specific spatial calculus such
as region connection calculus (RCC) (cf. [
        <xref ref-type="bibr" rid="ref4">4</xref>
]). For example, in the sentence "the
book is on the table" the first step should identify that there is a spatial relation
(on) between "book" and "table", after which this could be mapped to a specific,
formal relation AboveExternallyConnected(book, table) between two tokens
"book" and "table" that denote two physical objects in some Euclidean space. For
both transformations we propose machine learning techniques to deal with the
many sources of ambiguity in this task. This has not been done systematically
before; most often a restricted language is used to extract highly specific and
application-dependent relations and usually one focuses on phrases of which it
is known that spatial information is present [
        <xref ref-type="bibr" rid="ref11 ref19 ref6 ref8">8, 6, 19, 11</xref>
        ].
      </p>
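<p>
As a minimal sketch of this two-level idea, the following hypothetical Python fragment maps a linguistic relation triple to a formal relation. The names SpatialRelation and FORMAL_MAP are illustrative inventions, and a real second level would be learned and probabilistic rather than table-driven.
</p>
<preformat>
```python
# Hypothetical sketch of the two-level mapping; all names are illustrative.
from dataclasses import dataclass

@dataclass
class SpatialRelation:
    trajector: str        # linguistic level, e.g. "book"
    indicator: str        # the pivot, e.g. the preposition "on"
    landmark: str         # e.g. "table"

# Level 2: map the linguistic relation to a formal spatial relation.
# A real mapping is learned and probabilistic; this table is a toy stand-in.
FORMAL_MAP = {
    "on": "AboveExternallyConnected",
    "in": "NonTangentialProperPart",   # an RCC-style containment relation
}

def formalize(rel: SpatialRelation) -> str:
    # Fall back to "Unknown" for indicators outside the toy table.
    formal = FORMAL_MAP.get(rel.indicator, "Unknown")
    return f"{formal}({rel.trajector}, {rel.landmark})"

print(formalize(SpatialRelation("book", "on", "table")))
# AboveExternallyConnected(book, table)
```
</preformat>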
      <p>
To apply machine learning effectively, a clear task definition as well as
annotated data are needed. Semantic hand-labeling of natural language is an
ambiguous, complex and expensive task, and in our two-level view we have to cope with
the lack of available data twice. In our recently proposed semantic labeling
scheme [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], we tag sentences with the spatial roles according to holistic
spatial semantic (HSS) theory [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] and also formal spatial relation(s). For mapping
between language and spatial information, we defined spatial role labeling and
performed experiments on the (small amount of) available annotated corpora.
The Preposition Project (TPP) data is employed for spatial preposition
recognition in the context of learning the main spatial roles trajector and landmark
from data. We have conducted initial experiments on the small corpus of the
GUM [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] spatial ontology, and the results indicate that machine learning based
on linguistic features can indeed be employed for this task.
      </p>
      <p>
        The second layer of our methodology consists of mapping the extracted
spatial information onto formal spatial systems capable of spatial reasoning. Here we
propose to annotate data with spatial calculi relations and use machine learning
to obtain a probabilistic logical model [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] of spatial relations for this mapping.
Such models can deal with both the structural aspects of spatial relations
and the intrinsic ambiguity and vagueness in such mappings (see also [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]). In
the following sections we will describe both the linguistic and the formal steps,
and results of our initial machine learning experiments.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Linguistic Level and Spatial Role Labeling</title>
      <p>
To be able to map natural language to spatial calculi we should first extract
the components of spatial information. We call this task spatial role labeling.
It has not been well-defined before and has not been considered as a
stand-alone linguistic task. We define it analogously to semantic role labeling (SRL) [
        <xref ref-type="bibr" rid="ref15">15</xref>
],
targeting semantic information associated with specific phrases (usually verbs),
but as a stand-alone linguistic task utilizing specific (data) resources.
Task definition. We define spatial role labeling (SpRL) as the automatic
labeling of natural language with a set of spatial roles. The sentence-level spatial
analysis of text deals with characterizing spatial descriptions, denoting the
spatial properties of objects and their location (e.g. to answer
"what/who/where" questions). A spatial term (typically a preposition) establishes the type of spatial
relation, and other constituents express the participants of the spatial relation
(e.g. a location). The roles are drawn from a pre-specified list of possible spatial
roles, and the role-bearing constituents in a spatial expression must be identified
and their correct spatial role labels assigned.
      </p>
      <p>
Representation based on spatial semantics. The spatial role set we
employ contains the core roles of trajector, landmark, spatial indicator, and
motion indicator [
        <xref ref-type="bibr" rid="ref21 ref6">6, 21</xref>
        ], as well as the
features path and frame of reference.
      </p>
      <p>
Our set of spatial roles is motivated
by the theory of holistic spatial semantics,
upon which we have defined an
annotation scheme in [
        <xref ref-type="bibr" rid="ref10">10</xref>
]. We describe
these terms briefly. A trajector is the entity whose (trans)location is of
relevance. It can be static or dynamic; a person or an object. It can also be
expressed as a whole event. Other terms
often used for this concept are the local object, locatum, figure object, referent
and target. A landmark is the reference entity in relation to which the
location or the trajectory of motion of the trajector is specified. Alternative terms
include reference object, ground and relatum. A spatial indicator is a token
which defines constraints on the spatial properties, such as the location of the
trajector with respect to the landmark (e.g. in, on). It explains the type of the
spatial relation and usually is a preposition, but can also be a verb, a noun, etc.
It is the pivot of a spatial relation, and in terms of the GUM ontology it is called a
spatial modality. A motion indicator is a spatial term which is an indicator of
motion, e.g. motion verbs. We also consider other conceptual aspects like frame
of reference and the path of a motion that are important for spatial semantics
and roles [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
(Fig. 1: Parse tree with spatial roles.)
      </p>
      <p>
Linguistic challenges. Given a sentence, SpRL should answer: Q1. Does the
sentence contain spatial information? Q2. What is the pivot of the spatial
information (the spatial indicator)? Q3. Starting from the pivot, how can we
identify/classify the related arguments with respect to a predefined set of spatial roles?
Spatial relations in English are mostly expressed using prepositions [
        <xref ref-type="bibr" rid="ref7">7</xref>
], but verbs
and even other lexical categories can be central spatial terms. Hence SpRL
consists of identifying the boundaries of the arguments of the identified spatial term
and then labeling them with spatial roles (argument classification). However,
there are very sparse and limited resources for learning spatial roles. Other work
typically uses a limited set of words, often based on a set of spatial prepositions
and specific grammatical patterns in a specific domain [
        <xref ref-type="bibr" rid="ref13 ref8">13, 8</xref>
        ].
      </p>
      <p>
        General extraction of spatial relations is hindered by several things. First,
there is not always a regular mapping between a sentence's parse tree and its
spatial semantic structure. This is more challenging in complex expressions which
convey several spatial relations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]; see the following sentence (and Fig. 1).
      </p>
      <p>The vase is on the ground on your left.</p>
      <p>
Here a dependency parser relates the first "on" to "vase" and "ground". This
will produce a valid spatial relation. But the second "on" is related to "ground"
and "left", producing a meaningless spatial relation (ground on your left). For
more complex relations and nested noun phrases, deriving spatially valid
relations is not straightforward and depends on the lexical meaning of the words.
Other linguistic phenomena such as spatial-focus-shift and ellipsis of the trajector
and landmark [
        <xref ref-type="bibr" rid="ref11">11</xref>
] make the extraction more difficult. Recognizing the right
PP-attachment (i.e. whether the preposition is attached to the verb phrase or the noun
phrase) could help the identification of spatial arguments when the verb in the
sentence conveys spatial meaning. Spatial motion detection and recognition of
the frame of reference are additional challenges but will not be dealt with here.
Approach. We aim to tackle the problem using machine learning, in a way
similar to SRL, but with important differences. The first difference is that the
main focus of SRL is on the predicate, its related arguments and their roles [
        <xref ref-type="bibr" rid="ref15">15</xref>
].
On the other hand, in SpRL the spatial indicator plays the main role and should
be identified and disambiguated beforehand. Second, the set of main roles is
quite different in SpRL, and a large enough English corpus is not available from
which spatial roles can be learned directly. Hence new data resources are needed.
The main point is that we aim at domain-independent and unrestricted language
analysis. This prohibits using very limited data or a small set of extraction rules.
However, utilizing existing linguistic resources which can partially or indirectly
help to set up a (relational) joint learning framework will be of great advantage.
It can remove the necessity of expensively labeling one huge corpus. Our
results for preliminary experiments are briefly described in Section 4.
      </p>
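<p>
The PP-attachment problem above can be made concrete with a toy fragment. The dependency arcs below are hand-written for the example sentence (a real system would obtain them from a parser); naively pairing each preposition's attached nouns yields both the valid relation and the meaningless one, which is exactly why lexical information is needed. All names here are illustrative.
</p>
<preformat>
```python
# Toy illustration: "The vase is on the ground on your left."
# Each arc is (indicator, attached noun, role-in-parse); hand-written here.
arcs = [
    ("on_1", "vase", "attach"),    # first "on" links vase ...
    ("on_1", "ground", "pobj"),    # ... to ground: a valid relation
    ("on_2", "ground", "attach"),  # second "on" links ground ...
    ("on_2", "left", "pobj"),      # ... to left: a spurious relation
]

def candidate_relations(arcs):
    """Naively pair each indicator's two attached nouns into
    (trajector, indicator, landmark) candidate triples."""
    by_indicator = {}
    for head, dep, rel in arcs:
        by_indicator.setdefault(head, []).append(dep)
    return [(deps[0], ind, deps[1])
            for ind, deps in by_indicator.items() if len(deps) == 2]

for triple in candidate_relations(arcs):
    print(triple)
# ('vase', 'on_1', 'ground')   valid
# ('ground', 'on_2', 'left')   meaningless: "ground on your left"
```
</preformat>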
    </sec>
    <sec id="sec-3">
      <title>Towards Spatial Calculi and Spatial Formalizing</title>
      <p>
Mapping the spatial information in a sentence onto spatial calculi is the second
step in our framework. We denote this as the spatial formalizing task.
Task definition. We define spatial formalizing as the automatic mapping of the
output of SpRL to formal relations in spatial calculi. In the previous section we
have assumed that our spatial role representation covers all the spatial semantic
aspects according to HSS. For the target representation of spatial formalizing
we also require that it can express various kinds of spatial relations.
Spatial challenges. Ambiguity and under-specification of spatial information
conveyed in the language, but also over-specification of spatial calculi models,
make a direct mapping between the two sides difficult [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Most of the
qualitative spatial models focus on a single aspect, e.g. topology, direction, or shape [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
This is a drawback, particularly from a linguistic point of view and with respect
to the pervasiveness of language. Hence spatial formalizing should cover
multiple aspects with a practically acceptable level of generality. In the work of [
        <xref ref-type="bibr" rid="ref5">5</xref>
] the
alignment between the linguistic and logical formalizations is discussed. Since
these two aspects are rather different and provide descriptions of the
environment from different viewpoints, constructing an intermediate, linguistically
motivated ontology is proposed to establish a flexible connection between them.
GUM (Generalized Upper Model) is the state-of-the-art example of such an
ontology [
        <xref ref-type="bibr" rid="ref1 ref17">1, 17</xref>
        ]. Moreover, in [
        <xref ref-type="bibr" rid="ref5">5</xref>
] S-connections are suggested as a similarity-based
model for making a connection between various formal spatial systems and for mapping
GUM to various spatial calculi. However, obtaining an annotated corpus is the
main challenge of machine learning for mapping to the target relations/ontology.
In this respect, using an intermediate level with a fairly large and fine-grained
division of concepts is to some extent difficult and implies the need for a huge
labeled corpus. In addition, the semantic overlap between the relations included
in large ontologies makes the learning model more complex.
      </p>
      <p>
Moreover, mapping to spatial calculi is an inevitable step for spatial
reasoning. Hence even if a corpus is constructed by annotating with a linguistically
motivated ontology, mapping to spatial calculi still has to be handled as a
separate and difficult step. Even at this level, it is not feasible to define a
deterministic mapping by formulating rules, because bridging models to each other
is not straightforward, and external factors, context, all the involved spatial
components, discourse features, etc. influence this final mapping. Therefore the
relationships between instances in different domains are not deterministic, and
they are often ambiguous and uncertain [
        <xref ref-type="bibr" rid="ref5">5</xref>
]. Given that for each learning step
a corpus should be available, we argue that it seems most efficient to learn a
mapping from SpRL to (one or several) spatial calculi directly.
      </p>
      <p>
Representation based on spatial calculi. To deal with these challenges we
proposed an annotation framework [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] inspired by the works of SpatialML [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]
and a related scheme in [
        <xref ref-type="bibr" rid="ref18">18</xref>
]. We suggest mapping the extracted spatial indicators
and the related arguments onto the general type of the related spatial relation:
Region, Direction, or Distance, because these relations cover all coarse-grained aspects
of space (except shape). The specific relation expressed by the indicators is stated
in the suggested scheme with an attribute named specific-type. If the
general-type is REGION then we map this onto topological relations in a qualitative
spatial reasoning formalism, so the specific-type will be RCC8, which is a
popular formal model. For directions the specific type gets a value in {ABSOLUTE,
RELATIVE}. For absolute directions we use {S(south), W(west), N(north),
E(east), NE(northeast), SE(southeast), NW(northwest), SW(southwest)} and
for relative directions {LEFT, RIGHT, FRONT, BEHIND, ABOVE, BELOW},
which can be used in qualitative direction calculi. Distances are tagged with
{QUALITATIVE, QUANTITATIVE} (cf. [
        <xref ref-type="bibr" rid="ref10">10</xref>
]). To provide sufficient flexibility
in expressing all possible spatial relations, our idea is to allow more than one
formal relation to be connected to one linguistic relation, helped by a (probabilistic)
logical representation. The following examples illustrate this.
a) ...and next to that, left of that, is my computer, perhaps a meter away.
Let X = my computer, Y = that; then SpRL gives nextTo(X, Y), leftOf(X, Y), and a
resulting spatial formalization is DC(X, Y), LEFT(X, Y), Distance(X, Y, 'value'), which in GUM
corresponds to leftprojectionexternal.
b) The car is between two houses.
      </p>
<p>SpRL: between(car, houses); spatial relations: left(car, houses) AND
right(car, houses), which corresponds to GUM's Distribution.
c) The wheat field is in line with crane bay.</p>
      <p>
SpRL: inline(wheatfield, cranebay); spatial relations: behind(wheatfield,
cranebay) XOR front(wheatfield, cranebay); GUM: RelativeNonProjectionAxial.
Approach. The above examples show that a logical combination of
basic relations can provide the required level of expressivity in the language.
These annotations will enable learning probabilistic logical models relating
linguistic spatial information to relations in multiple spatial calculi. Afterwards,
qualitative (or even probabilistic) spatial reasoning will be feasible over the
produced output. The learned relations could be considered as probabilistic
constraints on the most probable locations of the entities in the text. Probabilistic
logical learning [
        <xref ref-type="bibr" rid="ref3">3</xref>
] provides a tool in which considerable amounts of (structured)
background knowledge can be used in the presence of uncertainty. The available
linguistic background knowledge and features include i) the features of the first
step of spatial role labeling (syntactic, lexical and semantic information from
the text) and ii) linguistic resources such as WordNet, FrameNet, language
models and word co-occurrences [
        <xref ref-type="bibr" rid="ref20">20</xref>
]. These could be combined with visual features
extracted from visual resources in a multimodal environment for further specification
of spatial relations. Structured outputs (i.e. the mapping to formal
relations) could be learned in a joint manner. By exploiting a joint learning platform,
annotating a corpus with the aforementioned spatial semantics in addition to
annotating with the final spatial relations (derived from spatial calculi) is less expensive
than annotating and learning the two levels independently. Implementing such
a learning setting is ongoing work.
      </p>
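<p>
As an illustrative sketch (not the paper's implementation), the logical combinations in examples a) to c) can be represented directly as data. The mapping table and operator tags below are invented for exposition; a learned model would attach probabilities to each reading.
</p>
<preformat>
```python
# One linguistic relation maps to a logical combination of formal relations.
# Relation names follow the examples in the text; the table itself is a toy.
mapping = {
    "between": ("AND", ["left", "right"]),          # example b)
    "inline": ("XOR", ["behind", "front"]),         # example c): exactly one holds
    "nextTo+leftOf": ("AND", ["DC", "LEFT", "Distance"]),  # example a)
}

def interpretations(linguistic_relation, x, y):
    """Expand a linguistic relation into (operator, formal relation instances)."""
    op, rels = mapping[linguistic_relation]
    return (op, [f"{r}({x}, {y})" for r in rels])

print(interpretations("inline", "wheatfield", "cranebay"))
# ('XOR', ['behind(wheatfield, cranebay)', 'front(wheatfield, cranebay)'])
```
</preformat>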
    </sec>
    <sec id="sec-4">
      <title>Current Experiments</title>
<p>To start with empirical studies, we have performed experiments on the first
SpRL learning phase. We learn to identify spatial indicators and their arguments
trajector and landmark. We do not treat motion, path and frame of reference in
this paper, and focus solely on prepositions as spatial indicators here.
Spatial preposition. For unrestricted language it seems valuable to first
recognize whether there is any spatial indicator in the text. Since prepositions mostly
play key roles for the spatial information, in the first step we examine whether
an existing preposition in a sentence conveys a spatial sense. Here we use
linguistically motivated features, such as parse and dependency trees and semantic
roles. We extracted these features from the training and test data of the TPP
data set and tested several classifiers. The current results are a promising
starting point for spatial sense recognition and the extraction of spatial relations.
The selected features were evaluated experimentally, and our final coarse-grained
maximum entropy sense classifier outperformed the best system of the SemEval-2007
challenge by providing an F1 measure of about 0.874. We achieved an accuracy
of about 0.88 for the task of recognizing whether a preposition has a spatial
meaning in a given context.</p>
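<p>
For illustration only, the following toy linear scorer mimics the shape of such a preposition sense classifier. Its features and weights are invented, whereas the actual system uses parse-tree, dependency and semantic-role features with parameters learned from the TPP data.
</p>
<preformat>
```python
# Toy stand-in for a spatial-sense classifier; features and weights invented.
def features(sentence, prep):
    toks = sentence.lower().replace(".", "").split()
    i = toks.index(prep)
    head = toks[i-1] if i else "NONE"              # crude governing-word feature
    rest = [t for t in toks[i+1:] if t != "the"]   # crude object-of-prep feature
    pobj = rest[0] if rest else "NONE"
    return {f"prep={prep}": 1.0, f"head={head}": 1.0, f"pobj={pobj}": 1.0}

weights = {           # invented weights standing in for learned parameters
    "pobj=table": 2.0,
    "pobj=monday": -2.0,
    "prep=on": 0.5,
}

def is_spatial(sentence, prep, weights):
    feats = features(sentence, prep)
    score = sum(weights.get(f, 0.0) * v for f, v in feats.items())
    return score > 0.0

print(is_spatial("The book is on the table.", "on", weights))   # True
print(is_spatial("The meeting is on Monday.", "on", weights))   # False
```
</preformat>
A real maximum entropy model normalizes such scores into probabilities over senses; this sketch keeps only the sign of the linear score.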
      <p>Extraction of trajector and landmark. In the second SpRL step we
extract the trajector and landmark arguments. Our features are inspired by those
in SRL. The main difference is that the pivot of the semantic relations here is
the preposition, and not the predicate. The features from the parse/dependency
tree and semantic role labeler are extracted from GUM examples. We labeled the
nodes in the parse tree with GUM labels trajector(locatum), landmark(relatum)
and spatial indicator (spatialModality).</p>
<p>We assume the spatial indicator (preposition) is correctly disambiguated and given;
i.e., we perform a multi-class classification of parse tree nodes into trajector,
landmark and none, for which we employed standard classifiers (naive Bayes (NB)
and maximum entropy (MaxEnt)). In addition, we tagged the sentences as sequences
using the same features and applied a simple sequence tagger based on conditional
random fields (CRF).</p>
      <p>Table 1. Extraction of trajector (T) and landmark (L):
Method F1(T) F1(L) Acc(All)
NBayes 0.86 0.70 0.94
MaxEnt 0.91 0.767 0.965
CRF 0.928 0.901 0.921</p>
      <p>
        The spatial annotations of GUM were altered in some instances to be able to
obtain more regular patterns for machine learning. We labeled the continuous
words (prepositions) and their modifiers as one spatial modality even if they
had been tagged as individual relations in GUM, and we do not tag implicit
trajectors/landmarks. In ongoing experiments we classify the headwords instead
of whole constituents [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Table 1 presents the preliminary results for "trajector"
(T) and "landmark" (L) recognition, including overall accuracy, evaluated by
10-fold cross validation. The simple multi-class classification ignores the global
correlations between classes and, as Table 1 indicates, more sophisticated CRF
models can improve the results, in particular for landmarks. Since the main sources
of errors are a lack of data and the dependency of spatial semantics on lexical
information, we will employ additional (lexical) features and ideally will use a
larger corpus in our future experiments. However, the current results show the
first step of applying machine learning for SpRL and indicate a promising start
towards achieving the entire automatic mapping from language to spatial calculi.
      </p>
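<p>
The benefit of modeling label correlations can be illustrated with a toy decoder. In this hedged sketch (invented scores, not our trained CRF), a per-token argmax misses the landmark, while Viterbi decoding with a transition score from indicator to landmark recovers it.
</p>
<preformat>
```python
# Tiny Viterbi decoder over per-token role scores; all scores are invented.
LABELS = ["NONE", "TRAJECTOR", "INDICATOR", "LANDMARK"]

def viterbi(emissions, transitions):
    # best[label] = (score of best path ending in label, that path)
    best = {lab: (emissions[0].get(lab, 0.0), [lab]) for lab in LABELS}
    for em in emissions[1:]:
        new = {}
        for cur in LABELS:
            prev = max(LABELS,
                       key=lambda p: best[p][0] + transitions.get((p, cur), 0.0))
            score = best[prev][0] + transitions.get((prev, cur), 0.0) + em.get(cur, 0.0)
            new[cur] = (score, best[prev][1] + [cur])
        best = new
    return max(best.values(), key=lambda sp: sp[0])[1]

# "vase on table": locally, NONE narrowly beats LANDMARK on the last token.
emissions = [{"TRAJECTOR": 1.0}, {"INDICATOR": 1.0},
             {"LANDMARK": 0.4, "NONE": 0.5}]
transitions = {("INDICATOR", "LANDMARK"): 0.5}

local = [max(em, key=em.get) for em in emissions]
print(local)                            # per-token argmax tags the last token NONE
print(viterbi(emissions, transitions))  # the transition score recovers LANDMARK
```
</preformat>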
    </sec>
    <sec id="sec-5">
      <title>Conclusion and Future Directions</title>
      <p>We have introduced a model for mapping natural language to spatial calculi.
Both aspects of spatial role labeling and spatial formalizing have been described.
A number of related problems that cause difficulties and ambiguities were
addressed, and we have shown preliminary results for experiments on SpRL and
the extraction of trajectors and landmarks. Our main idea for future work is to
obtain (i.e. create) a corpus which is labeled by holistic spatial semantics plus a
combination of spatial calculi. Each relation in the language can be connected
to a set of relations belonging to predefined spatial calculi. This gives a logical
representation of the language based on spatial calculi. We aim to learn
statistical relational models for this. This enables adding probabilistic background
knowledge related to structural information and spatial semantic notions, and
supports (probabilistic) spatial reasoning over the learned models.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>J.</given-names>
            <surname>Bateman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tenbrink</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Farrar</surname>
          </string-name>
          .
          <article-title>The role of conceptual and linguistic ontologies in discourse</article-title>
          .
          <source>Discourse Processes</source>
          ,
          <volume>44</volume>
          (
          <issue>3</issue>
          ):
          <volume>175</volume>
-
          <fpage>213</fpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Bateman</surname>
          </string-name>
          .
          <article-title>Language and space: a two-level semantic approach based on principles of ontological engineering</article-title>
          .
          <source>Int. J. of Speech Tech.</source>
          ,
          <volume>13</volume>
          (
          <issue>1</issue>
          ):
          <volume>29</volume>
-
          <fpage>48</fpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
3.
          <string-name>
            <given-names>L.</given-names>
            <surname>De Raedt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Frasconi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kersting</surname>
          </string-name>
          , and S. Muggleton, editors.
          <source>Probabilistic Inductive Logic Programming</source>
          , volume
          <volume>4911</volume>
          <source>of LNCS</source>
          . Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Antony</given-names>
            <surname>Galton</surname>
          </string-name>
          .
          <article-title>Spatial and temporal knowledge representation</article-title>
          .
          <source>Journal of Earth Science Informatics</source>
          ,
          <volume>2</volume>
          (
          <issue>3</issue>
          ):
          <volume>169</volume>
-
          <fpage>187</fpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>J.</given-names>
            <surname>Hois</surname>
          </string-name>
          and
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          .
<article-title>Counterparts in language and space: similarity and S-connection</article-title>
          .
          <source>In Proceedings of the 2008 Conference on Formal Ontology in Information Systems</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Kelleher</surname>
          </string-name>
          .
          <article-title>A Perceptually Based Computational Framework for the Interpretation of Spatial Language</article-title>
          .
          <source>PhD thesis</source>
          , Dublin City University,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Kelleher</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Costello</surname>
          </string-name>
          .
          <article-title>Applying computational models of spatial prepositions to visually situated dialog</article-title>
          .
          <source>Comput. Linguist.</source>
          ,
          <volume>35</volume>
          (
          <issue>2</issue>
          ):
          <volume>271</volume>
-
          <fpage>306</fpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>T.</given-names>
            <surname>Kollar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tellex</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Roy</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Roy</surname>
          </string-name>
          .
          <article-title>Toward understanding natural language directions</article-title>
          .
          <source>In HRI</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>P.</given-names>
            <surname>Kordjamshidi</surname>
          </string-name>
, M. van Otterlo, and
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Moens</surname>
          </string-name>
          .
          <article-title>Spatial role labeling: Automatic extraction of spatial relations from natural language</article-title>
          .
          <source>Technical report, Katholieke Universiteit Leuven</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>P.</given-names>
            <surname>Kordjamshidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Van Otterlo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Moens</surname>
          </string-name>
          .
          <article-title>Spatial role labeling: task definition and annotation scheme</article-title>
          .
          <source>In LREC</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          .
          <article-title>The extraction of trajectories from real texts based on linear classification</article-title>
          .
          <source>In Proceedings of NODALIDA</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>W.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Renz</surname>
          </string-name>
          .
          <article-title>Combining RCC-8 with qualitative direction calculi: algorithms and complexity</article-title>
          .
          <source>In IJCAI</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>K.</given-names>
            <surname>Lockwood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Forbus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. T.</given-names>
            <surname>Halstead</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Usher</surname>
          </string-name>
          .
          <article-title>Automatic categorization of spatial prepositions</article-title>
          .
          <source>In Proceedings of the 28th Annual Conference of the Cognitive Science Society</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>I.</given-names>
            <surname>Mani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hitzeman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Richer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Harris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Quimby</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Wellner</surname>
          </string-name>
          .
          <article-title>SpatialML: annotation scheme, corpora, and tools</article-title>
          .
          <source>In LREC</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>L.</given-names>
            <surname>Marquez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Carreras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. C.</given-names>
            <surname>Litkowski</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Stevenson</surname>
          </string-name>
          .
          <article-title>Semantic role labeling: An introduction to the special issue</article-title>
          .
          <source>Comput. Linguist.</source>
          ,
          <volume>34</volume>
          (
          <issue>2</issue>
          ):
          <fpage>145</fpage>
          –
          <lpage>159</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Mooney</surname>
          </string-name>
          .
          <article-title>Learning to connect language and perception</article-title>
          .
          <source>In AAAI</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>R.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vierhuff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Krieg-Bruckner</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Bateman</surname>
          </string-name>
          .
          <article-title>Towards dialogue based shared control of navigating robots</article-title>
          .
          <source>In Proceedings of Spatial Cognition IV: Reasoning, Action, Interaction</source>
          , pages
          <fpage>478</fpage>
          –
          <lpage>499</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>Q.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Jiang</surname>
          </string-name>
          .
          <article-title>Annotation of spatial relations in natural language</article-title>
          .
          <source>In Proceedings of the International Conference on Environmental Science and Information Application Technology</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Tappan</surname>
          </string-name>
          .
          <article-title>Knowledge-Based Spatial Reasoning for Automated Scene Generation from Text Descriptions</article-title>
          .
          <source>PhD thesis</source>
          , New Mexico State University Las Cruces, New Mexico,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>M.</given-names>
            <surname>Tenorth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nyga</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          .
          <article-title>Understanding and Executing Instructions for Everyday Manipulation Tasks from the World Wide Web</article-title>
          .
          <source>In ICRA</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>J.</given-names>
            <surname>Zlatev</surname>
          </string-name>
          .
          <article-title>Spatial semantics</article-title>
          .
          <source>In Hubert Cuyckens and Dirk Geeraerts (eds.) The Oxford Handbook of Cognitive Linguistics, Chapter 13</source>
          , pages
          <fpage>318</fpage>
          –
          <lpage>350</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>