<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using Stories to Create Qualitative Representations of Motion</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Juan Purcalla Arrufi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexandra Kirsch</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Human-Computer-Interaction and Artificial Intelligence, Universität Tübingen</institution>
          ,
<addr-line>Sand 14, 72076 Tübingen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>19</fpage>
      <lpage>28</lpage>
      <abstract>
<p>Qualitative representations of motion transform kinematic floating-point data into a finite set of concepts. Their main advantage is that they usually reflect a human understanding of the moving system, so we can more straightforwardly implement human-like navigation rules; in addition, they reduce the overhead of floating-point computations. They are therefore an asset for mobile robots and unmanned vehicles, both terrestrial and aerial, especially those that interact with humans. In this paper we provide a method to create new qualitative representations of motion from any qualitative spatial representation by using a story-based approach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Description and interpretation of moving entities (humans, animals, robots, or inert objects) are at the core
of many disciplines such as mobile robotics, human-robot interaction, geographic information systems, animal
behaviour, high-level computer vision, and knowledge representation, among others. Qualitative representations
transform the mass of quantitative data (positions and velocities) into a reduced group of concepts. Therefore,
they simplify data so that these are easier to understand and to process (e.g. in modelling, planning, learning,
or control).</p>
      <p>
        Nonetheless, the work on qualitative representations of motion is still small in number when compared to
that on spatial representations [6, p. 16][7, p. 5187], and mostly restricted to point-like entities moving in one or two
dimensions [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Moreover, spatial representations deal with regions [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and three or more dimensions [
        <xref ref-type="bibr" rid="ref1 ref11">11, 1</xref>
        ],
but this is unusual in representations of motion.
      </p>
      <p>To fill the gap, in this paper we profit from the available spatial representations to systematically increase the
number of representations of motion: we introduce a method that creates qualitative representations of motion
given any qualitative spatial representation.</p>
      <p>
        This has direct applications, for example, we may create a representation of motion using Hall’s spatial
categorisation, proxemics [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], which is based on social distances. Such a representation of motion would
describe trajectories according to personal space and, thus, it could be used to make robot navigation in
human environments more friendly.
      </p>
      <p>Our method centres on the concept of ‘stories’, which, we believe, opens a new perspective in dealing with
representations. A spatial representation can classify two static entities or, equivalently, each snapshot of two
moving entities. If we then consider the complete sequence of snapshots—what we call the ‘story’ (Def. 2)—we
have a qualitative description of the motion.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <sec id="sec-2-1">
        <title>Qualitative Representations of Motion</title>
        <p>
          An overview of representations is found in a survey by Dylla et al. [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]: of the 40 representations surveyed,
they classify three as representations of motion: QRPC [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], RfDL-3-12 [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], and, the most used, QTC [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. The
survey of spatial representations by Chen et al. [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] also mentions three motion representations: Dipole Calculus
[
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], DIA [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], and QTC.
        </p>
        <p>
          Representations of orientation and relative direction, such as OPRA [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] or Dipole Calculus [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], are sometimes
used to represent moving entities; nevertheless, they are not primarily intended for such a task.
        </p>
        <p>
          All the aforementioned representations are limited to point-like entities moving in one or two dimensions.
There is, however, a particular qualitative relation of motion for regions [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] that is built by combining RCC and
distances.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Sequences of Qualitative Relations</title>
        <p>
          Continuous sequences of qualitative relations, such as the temporal sequences of Def. 1 (p. 4), are based on
Freksa's foundational concept of conceptual neighbourhood [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. By connecting the qualitative relations of a given
representation that are conceptual neighbours, we obtain the conceptual neighbourhood graph [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] (see example in
Fig. 3). Paths in the conceptual neighbourhood graph and continuous sequences of qualitative relations are thus
equivalent.
        </p>
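        <p>For illustration, a conceptual neighbourhood graph can be encoded as an adjacency map, and the continuity of a sequence of relations can then be checked as a walk in it. The sketch below uses one common version of the RCC neighbourhood structure (cf. Fig. 3); the exact edge set depends on the continuity assumptions.</p>

```python
# Sketch: a conceptual neighbourhood graph (CNG) as an adjacency map.
# One common version of the RCC-8 neighbourhood structure is assumed;
# the exact edges depend on the continuity model.
CNG = {
    "DC":    {"EC"},
    "EC":    {"DC", "PO"},
    "PO":    {"EC", "TPP", "TPPI", "EQ"},
    "TPP":   {"PO", "NTPP", "EQ"},
    "NTPP":  {"TPP", "EQ"},
    "TPPI":  {"PO", "NTPPI", "EQ"},
    "NTPPI": {"TPPI", "EQ"},
    "EQ":    {"PO", "TPP", "TPPI", "NTPP", "NTPPI"},
}

def is_continuous(sequence):
    """A sequence of relations is continuous iff every transition
    connects conceptual neighbours, i.e., the sequence is a walk in the CNG."""
    return all(b in CNG[a] for a, b in zip(sequence, sequence[1:]))

print(is_continuous(["DC", "EC", "PO", "TPP", "NTPP"]))  # True
print(is_continuous(["DC", "PO"]))  # False: DC and PO are not neighbours
```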
        <p>
          Sequences of relations are used to analyse real data by Delafontaine et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], and specifically in human-robot
interaction by Hanheide et al. [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], from which we borrow the term ‘temporal sequence of qualitative relations’
(Def. 1).
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Temporal Sequences of Relations and Stories</title>
      <p>In this section, we define and illustrate the key concepts—stories and stories set —that we use to create qualitative
representations of motion (Sect. 5). But first of all we define the underlying concept: temporal sequence of
relations.
</p>
      <p>[Figure 1: snapshots of two moving entities k and l at t = 0.20 s, t = 1.2 s, and t = 2.00 s, realising the story (DC, EC, DC); Fig. 2 shows an analogous pair of entities whose story passes through PO, TPP, and NTPP.]</p>
      <p>
Definition 1. A Temporal Sequence of Relations [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] is a chronologically ordered sequence of qualitative
relations of any kind, e.g., space or motion, generated by the motion of two entities in a time interval (ta, tb).
      </p>
      <p>The time interval (ta, tb) can be freely chosen, e.g., it can be totally unbounded, i.e., extend over the whole time
(−∞, +∞), be half-bounded (−∞, tb), or bounded (ta, tb).</p>
      <p>We obtain the temporal sequence of relations of two entities in a certain time interval by mapping their
trajectories ~xk(t) and ~xl(t) into the qualitative relations of the representation we are using. We describe a
sequence of relations as a list in parentheses: (R1, R2, . . . , Ri, . . . ). We say a temporal sequence of relations is
finite, if it has a finite number of relations, or infinite, if it has an infinite number. Notice that even though the
entities’ motion occurs in a continuous space throughout a continuous time interval, the temporal sequences are
finite, when the trajectories have a finite number of transitions between qualitative relations; this happens in
Fig. 1, where the sequence is finite, (DC, EC, DC), because there are only two transitions: DC → EC and EC → DC.</p>
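      <p>The mapping from sampled snapshots to a temporal sequence of relations can be sketched as follows; the classify function and the one-dimensional set-up are illustrative assumptions, not part of the method.</p>

```python
# Sketch: obtain a temporal sequence of relations from sampled snapshots
# by classifying each snapshot and collapsing consecutive repeats.
def temporal_sequence(snapshots, classify):
    """snapshots: chronologically ordered states; classify: snapshot -> relation."""
    sequence = []
    for s in snapshots:
        r = classify(s)
        if not sequence or sequence[-1] != r:
            sequence.append(r)
    return tuple(sequence)

# Toy example: two unit-diameter entities on a line; the relation is a
# coarse RCC-like label derived from the distance of their centres.
def classify(state):
    xk, xl = state
    d = abs(xl - xk)
    return "DC" if d > 1.0 else ("EC" if d == 1.0 else "PO")

samples = [(-2.0, 0.0), (-1.0, 0.0), (-0.5, 0.0), (-1.0, 0.0), (-2.0, 0.0)]
print(temporal_sequence(samples, classify))  # ('DC', 'EC', 'PO', 'EC', 'DC')
```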
      <p>Now, based on the temporal sequences, we define the stories.</p>
      <p>Definition 2. A Story is a temporal sequence of relations of two entities that is defined over the whole
unbounded time interval (−∞, +∞).</p>
      <p>A story describes the qualitative relation of two moving entities at any instant of time. Thus, any temporal
sequence of relations is a substring of a certain story. We can see each story as a complete qualitative description
of the motion of a two-entity system. We characterise stories with the letter S and, if necessary, an appropriate
subscript.</p>
      <p>Example 1. The temporal sequence S = (DC, EC, DC) in Fig. 1 is a story. A proper substring of it is not a story,
but just a temporal sequence of relations, because it does not span the whole unbounded interval (−∞, +∞).
For instance, the substring (EC, DC) is not a story, because it happens on [0, +∞).</p>
      <p>Example 2. The temporal sequence (DC, EC, PO, TPP, NTPP, TPP, PO, EC, DC) in Fig. 2 is a story.
Substrings, such as (PO, TPP, NTPP, TPP) or (DC, EC, PO), are not stories, but just temporal sequences of
relations.</p>
      <p>Definition 3. The Stories Set is the set of all possible stories of two entities.</p>
      <p>If there is no constraint on the stories, the stories set contains an infinite number of stories. We refer to the
stories set with the letter Σ (see Sect. 5); we add a subscript, e.g., Σ₀, when we deal with a set of stories that is
not the stories set, but a subset thereof.</p>
    </sec>
    <sec id="sec-4">
      <title>Restricting the Stories: Uniform Motion</title>
      <p>The central idea of this paper is to classify motions through stories: we assign the same category to the motions
that belong to the same story (Sect. 5). Thus, the total number of categories in our novel motion representation
is the cardinality of the stories set, i.e., its number of elements. However, an awkward situation arises: if we do
not restrict the motions that create the stories, the cardinality of the stories set is infinite, and some stories are
themselves infinite.</p>
      <p>Consequently, we suggest restricting the type of motions considered in order to obtain a tractable motion
representation. We choose to restrict the stories by considering, from now on, only uniform motion, i.e., the
velocity vectors are constant. This has two desirable properties:
i. Each story in uniform motion is finite, i.e., has a finite number of relations (see Prop. 1 in Appendix A).
ii. The set of all possible stories in uniform motion, i.e., the stories set (Def. 3), is finite (see Prop. 2 in
Appendix A). Consequently it partitions the whole phase space, i.e., the coordinate space of the positions
and velocities of the two entities (~xk, ~vk; ~xl, ~vl).</p>
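      <p>Property (i) can be illustrated with a toy one-dimensional sketch: two unit-diameter entities in uniform motion produce a finite story. The set-up and the coarse DC/EC/PO classification are assumptions for illustration; rationals are used so that the instantaneous tangency EC is not missed by sampling.</p>

```python
from fractions import Fraction as F

# Sketch: under uniform motion a story is finite. Two unit-diameter
# entities move on a line with constant velocities; the relation is a
# coarse RCC-like label (DC/EC/PO) from the distance of their centres.
def relation(t, xk0, vk, xl0, vl):
    d = abs((xl0 + vl * t) - (xk0 + vk * t))
    return "DC" if d > 1 else ("EC" if d == 1 else "PO")

def story(xk0, vk, xl0, vl):
    # exact rational sampling of t = -100, -99.5, ..., +100, so the
    # instantaneous tangency (d == 1) is hit exactly
    seq = []
    for i in range(401):
        t = F(-100) + F(i, 2)
        r = relation(t, xk0, vk, xl0, vl)
        if not seq or seq[-1] != r:
            seq.append(r)
    return tuple(seq)

# entity k approaches a resting entity l, passes through it, and leaves
print(story(F(0), F(1), F(10), F(0)))  # ('DC', 'EC', 'PO', 'EC', 'DC')
```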
      <p>
        The restriction to uniform motion stories is a standard assumption, if we classify motion situations that
are specified only by the current position and velocity of two entities, i.e., (~xk, ~vk; ~xl, ~vl)—the acceleration is
disregarded, as in QTC [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. We note, though, that our method may remain valid with other kinds of restrictions.
Definition 4. A Rigid Story is the story of two entities that move with the same velocity, i.e., ~vk = ~vl.
      </p>
      <p>Rigid stories play a special role in uniform motion: each of them is a singleton—it has a single element, a
constant spatial relation. But not all singleton stories are rigid, e.g., the story S11 = (DC) is not rigid but is a
singleton (Fig. 4).
</p>
      <p>[Figures 3 and 4: Fig. 3 shows the eight RCC relations between two regions x and y (DC, EC, PO, TPP, TPPI, NTPP, NTPPI, EQ) and their conceptual neighbourhood graph; Fig. 4 illustrates the Motion-RCC stories.]</p>
      <sec id="sec-5-3">
        <title>Creating Representations of Motion</title>
        <p>We describe the method to create a representation of motion from any given spatial representation. In practice,
our method always yields two representations of motion: the simple one, which is formed just by the stories, and
the augmented variant, which is refined by adding the spatial relations to each story—we combine the power of
‘story’ and ‘snapshots’. We illustrate the method in the example below using the spatial representation RCC
(Fig. 3); thus, the two newly generated representations of motion are Motion-RCC (Eq. (1) on page 6, and
Fig. 4) and its augmented variant Augmented-Motion-RCC (Eq. (2) on page 6).</p>
        <p>The method is as follows:
1. We have a spatial representation.
2. We calculate the stories set, Σ, for the given spatial representation. In case it is a finite set, e.g., when
restricted to uniform motion, we can work out a method to calculate it.
3. The obtained stories set is a novel representation of motion, where each story is a qualitative relation—every
motion state is classified according to the story it belongs to.
4. (optional) We can create the augmented representation of motion from the first one by specifying the spatial
relations in each story.</p>
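        <p>Step 2 can be sketched for RCC restricted to uniform motion. We assume, as Fig. 4 suggests, that every non-rigid story walks some way into the chain DC, EC, PO, TPP, NTPP and back out again, so each story is a palindrome over a prefix of that chain; this is an illustrative enumeration, not a general algorithm.</p>

```python
# Sketch of step 2 for RCC under uniform motion (cf. Fig. 4).
# Assumption: a non-rigid story enters along the chain DC-EC-PO-TPP-NTPP
# up to some depth and leaves symmetrically, i.e., it is a palindrome.
CHAIN = ("DC", "EC", "PO", "TPP", "NTPP")

def rigid_stories():
    # a rigid story is a single constant spatial relation
    return [(r,) for r in CHAIN]

def non_rigid_stories():
    stories = []
    for depth in range(1, len(CHAIN) + 1):
        prefix = CHAIN[:depth]
        stories.append(prefix + prefix[-2::-1])  # mirror; turning point once
    return stories

sigma_0 = rigid_stories()      # the rigid stories S01, ..., S05
sigma_1 = non_rigid_stories()  # the non-rigid stories S11, ..., S15
print(sigma_1[1])  # ('DC', 'EC', 'DC')
```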
        <sec id="sec-5-3-1">
          <title>Example: Creating a representation of motion from RCC</title>
          <p>We illustrate the method above using the spatial representation RCC (Fig. 3). RCC relates two regions according
to their overlap. It yields 8 possible relations: DC, the regions do not overlap; EC, the regions are tangent and
non-overlapping; PO, the regions overlap in their interiors but neither is contained in the other; TPP, region x is
contained in y and is tangent to its border; TPPI, region y is contained in x and is tangent to its border; EQ,
both regions coincide; NTPP, x is contained in y and does not touch the border of y; NTPPI, y is contained in x
and does not touch the border of x.</p>
          <p>1. We have the spatial representation RCC.
2. We calculate the RCC stories set restricted to uniform motion as Σ = Σ₀ ∪ Σ₁, where Σ₀ = {(DC), (EC), (PO),
(TPP), (NTPP)} are the rigid stories and Σ₁ = {(DC), (DC, EC, DC), (DC, EC, PO, EC, DC), (DC, EC,
PO, TPP, PO, EC, DC), (DC, EC, PO, TPP, NTPP, TPP, PO, EC, DC)} are the non-rigid stories. We
rename the rigid stories into S0i, Σ₀ = {S01, S02, S03, S04, S05}, and the non-rigid stories into S1i,
Σ₁ = {S11, S12, S13, S14, S15}, according to Fig. 4.
3. The stories set Σ is the qualitative representation of motion—note, though, that the stories S01 and S11 are
both equal to (DC); therefore S01 is dropped to avoid repetition. We call this representation ‘Motion-RCC’:</p>
          <p>Motion-RCC = {S02, S03, S04, S05, S11, S12, S13, S14, S15}   (1)</p>
          <p>This representation assigns to every motion state (~xk, ~vk; ~xl, ~vl) the corresponding story Si, i.e., the
corresponding relation of motion.
4. (optional) We can augment the resolution of the representation of motion Motion-RCC by specifying the
spatial relations in each story—for the singleton stories this process is redundant, as they have a single
spatial relation. So we obtain the representation of motion ‘Augmented-Motion-RCC’:</p>
          <p>Augmented-Motion-RCC = {
S02(EC), S03(PO), S04(TPP), S05(NTPP),
S11(DC), S12(DC−), S12(EC), S12(DC+),
S13(DC−), S13(EC−), S13(PO), S13(EC+), S13(DC+),
S14(DC−), S14(EC−), S14(PO−), S14(TPP),
S14(PO+), S14(EC+), S14(DC+),
S15(DC−), S15(EC−), S15(PO−), S15(TPP−), S15(NTPP),
S15(TPP+), S15(PO+), S15(EC+), S15(DC+)}   (2)</p>
          <p>For example, the relation S12(EC) indicates that the entities are moving in the story S12 at the moment of
tangency, i.e., EC. If the spatial relation appears multiple times in the story, such as EC in S13, we distinguish
each appearance: for example, S13(EC−) is chronologically the first EC, and S13(EC+) the last.</p>
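          <p>Step 4's tagging of repeated relations, the − and + markers of Eq. (2), can be sketched as follows; the string encoding is an illustrative assumption, and in Motion-RCC each relation occurs at most twice per story.</p>

```python
from collections import Counter

# Sketch of step 4: augment a story by tagging repeated spatial relations.
# Convention as in Eq. (2): the chronologically first occurrence gets '-',
# the last gets '+'; relations occurring once stay untagged (each relation
# occurs at most twice in a Motion-RCC story).
def augment(story):
    total = Counter(story)
    seen = Counter()
    augmented = []
    for r in story:
        seen[r] += 1
        if total[r] == 1:
            augmented.append(r)
        elif seen[r] == 1:
            augmented.append(r + "-")
        else:
            augmented.append(r + "+")
    return augmented

# the story S13 = (DC, EC, PO, EC, DC)
print(augment(("DC", "EC", "PO", "EC", "DC")))
# ['DC-', 'EC-', 'PO', 'EC+', 'DC+']
```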
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Applications of Qualitative Representations of Motion</title>
      <p>We outline two possible applications of qualitative representations of motion.
• Recognition of trajectories (i.e., motion patterns)</p>
      <p>
        Through the qualitative relations in the new representation of motion, we can characterise and therefore
recognise certain types of motion [
        <xref ref-type="bibr" rid="ref15 ref7">7, 15</xref>
        ], for example an ‘avoidance manoeuvre’, as in Eq. (3). This
motion sequence begins with the collision story, S15(DC−), and ends with a collision-free story, S11(DC)—the
augmented indices, DC−, show that nowhere does a collision take place.
      </p>
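      <p>Such a recognition test can be sketched as a subsequence match over an observed stream of augmented Motion-RCC relations; the string encoding of the relations is an assumption for illustration.</p>

```python
# Sketch: recognise an 'avoidance manoeuvre' in a stream of augmented
# Motion-RCC relations (string encoding assumed). The pattern stays in
# the collision-free DC snapshots and ends in the story S11.
AVOIDANCE = ["S15(DC-)", "S14(DC-)", "S13(DC-)", "S12(DC-)", "S11(DC)"]

def is_avoidance(observed):
    """True iff the avoidance pattern occurs contiguously in 'observed'."""
    n = len(AVOIDANCE)
    return any(observed[i:i + n] == AVOIDANCE
               for i in range(len(observed) - n + 1))

trace = ["S15(DC-)", "S14(DC-)", "S13(DC-)", "S12(DC-)", "S11(DC)"]
print(is_avoidance(trace))        # True
print(is_avoidance(["S11(DC)"]))  # False
```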
      <p>S15(DC−) → S14(DC−) → S13(DC−) → S12(DC−) → S11(DC)   (3)</p>
      <p>• Trajectory control</p>
      <p>
        We can use the conceptual neighbourhood graph of our new representation of motion to take decisions in order
to control trajectories [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. For example, in the case of Motion-RCC, if we want to avoid a collision we
necessarily have to reach the relation S11(DC). Accordingly, the shortest paths in the conceptual neighbourhood
graph leading to the relation S11(DC) may provide the control operations needed to avoid the collision.
      </p>
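      <p>This control idea can be sketched with a breadth-first search for a shortest path to S11(DC); the graph below is a hypothetical linear fragment, not the full conceptual neighbourhood graph of Augmented-Motion-RCC.</p>

```python
from collections import deque

# Sketch: shortest control path in a conceptual neighbourhood graph.
# The edges below are a hypothetical fragment for illustration only.
GRAPH = {
    "S15(DC-)": ["S14(DC-)"],
    "S14(DC-)": ["S15(DC-)", "S13(DC-)"],
    "S13(DC-)": ["S14(DC-)", "S12(DC-)"],
    "S12(DC-)": ["S13(DC-)", "S11(DC)"],
    "S11(DC)":  ["S12(DC-)"],
}

def shortest_path(start, goal):
    """Breadth-first search; returns one shortest path, or None."""
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in GRAPH.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

print(shortest_path("S15(DC-)", "S11(DC)"))
# ['S15(DC-)', 'S14(DC-)', 'S13(DC-)', 'S12(DC-)', 'S11(DC)']
```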
    </sec>
    <sec id="sec-7">
      <title>Discussion</title>
      <p>We have presented a story-based method (Sect. 5) that should be able to generate qualitative representations
of motion out of any spatial representation. The created representation of motion inherits the properties of
the underlying spatial representation, e.g., its dimensions or the type of entities considered. The method has
proven effective in generating a meaningful qualitative representation of motion for the representation RCC (Sect. 5).
With our generated motion representation, Augmented-Motion-RCC, we have outlined two applications of motion
representations: recognition of trajectories, i.e., motion patterns; and control of trajectories.</p>
      <p>Our generating method is most effective when we restrict the trajectories of the entities, e.g., by setting the velocity
constant, so that our stories set is finite. This can be seen as a limitation, or as the advantage of tailoring the
generated representation of motion to the features of our trajectories. We have restricted the trajectories to
uniform motion.</p>
      <p>We argue that the use of ‘stories’ to classify motions borrows from a cognitive idea: we can better recall a
series of items when they are linked by way of a story. Stories seem quite a natural way for humans to relate,
connect, or classify items.</p>
      <p>
        The next steps are to test the effectiveness of this method with other spatial representations, for instance,
three-dimensional ones [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] or those dealing with orientation [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Further, we can add the transition times between
relations in the stories to obtain quantitative results.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Appendix A</title>
      <p>Proposition 1. Finitude of the Stories in Uniform Motion
We can reasonably show that for two regular enough² entities the stories in uniform motion are finite.</p>
      <p>We build the proof on two properties: first, stories in uniform motion have extreme relations (Lemma 1);
second, temporal sequences of relations in uniform motion are finite over a finite time interval (Lemma 2).
Proof. According to Lemma 1, two regular enough entities in uniform motion have extreme relations. That is,
we can find two time instants ta and tb, with ta &lt; tb, so that in the time interval (−∞, ta) the entities' relation
remains constant—we call it ra—and in the time interval (tb, +∞) the entities' relation remains constant—we
call it rb.</p>
      <p>Now, according to Lemma 2, these regular enough entities moving in uniform motion have a finite temporal
sequence of relations in the interval [ta, tb], say (r1, . . . , rn).</p>
      <p>Consequently the story of the two entities, i.e., the temporal sequence of relations in the interval (−∞, ta) ∪
[ta, tb] ∪ (tb, +∞), is finite, as it is obtained by concatenating the two extreme relations and the temporal
sequence: (ra, r1, . . . , rn, rb). In case an extreme relation coincides with its border relation, i.e., ra = r1 or
rb = rn, we exclude the repeated one.</p>
      <sec id="sec-8-1">
        <title>Definition 5. Extreme Relations</title>
        <p>The extreme relations are those relations of a story that remain unchanged when t → −∞ or t → +∞. That is,
a relation ra is extreme in t → −∞ if and only if there exists ta so that in the time interval (−∞, ta) the relation
between the entities is ra. Analogously, a relation rb is extreme in t → +∞ if and only if there exists tb so that in
the time interval (tb, +∞) the relation between the entities is rb.</p>
        <p>Lemma 1. Existence of extreme relations for two entities in uniform motion.</p>
        <p>Two regular enough² entities that move in uniform motion and are described by a qualitative representation based
on overlapping, intersection, or orientation have a story with extreme relations both for t → −∞ and t → +∞.
Proof. We name the entities k and l; they have constant velocities ~vk and ~vl.</p>
        <p>
          1. In the case ~vk = ~vl, the relation between the two entities, ri, remains constant—this relation is the whole
story—therefore, trivially, ri is the extreme relation for both t → −∞ and t → +∞.
2. In the case ~vk ≠ ~vl, we distinguish two subcases regarding the feature on which the representation is based:
overlapping-intersection of the entities, or relative orientation.
(a) Representations based on overlapping-intersection of finite entities have either one or two qualitative
relations for the case of ‘no overlapping-intersection’, e.g., the relation DC in RCC (Fig. 3); the relation
disjoint in 9-Int [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]; or the relations ‘&lt;’ and ‘&gt;’ in Allen’s Algebra [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. The mentioned relations must
be the extreme relations for each representation, because the distance between two entities that move at
different velocities tends to infinity for t → ±∞; consequently the entities no longer overlap-intersect.
(b) Representations based on relative orientation between entities use the connecting unit vector between
them, i.e., k̂l(t) = (~xl(t) − ~xk(t)) / ‖~xl(t) − ~xk(t)‖, for which in uniform motion, i.e., ~xk(t) = ~vk t + ~xk0 and
~xl(t) = ~vl t + ~xl0, we obtain both limits:
        </p>
        <p>lim
t! +1
k~ˆl(t) =
~vl
k~vl
~vk
~vkk
(4a)</p>
        <p>lim k~ˆl(t) =
t!1</p>
        <p>lim
t! +1
k~ˆl(t) (4b)
2Enough regular entities are those finite in size with a finite number of features, i.e., a finite number of vertices, edges, concavities,
holes, . . .
Because both limits for the connecting vector exist, the extreme relations of any story exist; they are
the relations neighbouring each limit.</p>
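        <p>The limits (4a) and (4b) can be checked numerically; the positions and velocities below are arbitrary example values.</p>

```python
import math

# Numerical check of (4a)-(4b): the connecting unit vector of two entities
# in uniform motion tends to +/- (vl - vk)/||vl - vk||. Example values only.
def unit_connecting_vector(t, xk0, vk, xl0, vl):
    dx = [(xl0[i] + vl[i] * t) - (xk0[i] + vk[i] * t) for i in range(2)]
    norm = math.hypot(*dx)
    return [c / norm for c in dx]

xk0, vk = [0.0, 0.0], [1.0, 0.0]
xl0, vl = [3.0, 4.0], [0.0, 2.0]

dv = [vl[i] - vk[i] for i in range(2)]
limit = [c / math.hypot(*dv) for c in dv]  # right-hand side of (4a)

plus = unit_connecting_vector(1e9, xk0, vk, xl0, vl)
minus = unit_connecting_vector(-1e9, xk0, vk, xl0, vl)
print(all(abs(plus[i] - limit[i]) < 1e-6 for i in range(2)))   # True: (4a)
print(all(abs(minus[i] + limit[i]) < 1e-6 for i in range(2)))  # True: (4b)
```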
        <p>Lemma 2. Finitude of the Temporal Sequences of Relations in Finite Time Intervals
In uniform motion, for regular enough² entities, a temporal sequence of relations in a finite time interval is also
finite.</p>
        <p>Proof. A qualitative representation partitions the phase space of two regular enough finite entities into a finite
number of regions, i.e., the qualitative relations. Therefore, by moving in uniform motion over a finite time interval,
the system passes through a finite number of such regions, i.e., the resulting temporal sequence of relations must
be finite.</p>
        <p>Proposition 2. Finitude of the Stories Set
The set of stories in uniform motion, i.e., the stories set, is finite.</p>
        <p>Proof. We cannot rigorously prove that the stories set is finite, but Lemma 3 gives an equivalent condition that
helps us see that the number of possible stories must be finite in most qualitative representations: if we prove
that there is a story with at least as many relations as any other, then the stories set must be finite. This is the
case in RCC (Fig. 4), where the longest story is S15.</p>
        <p>Lemma 3. The Longest Story
The stories set is finite if and only if there exists a longest story, i.e., a story that has at least as many relations as
any other.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Albath</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leopold</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maglia</surname>
            ,
            <given-names>A.M.:</given-names>
          </string-name>
          <article-title>RCC-3D : Qualitative Spatial Reasoning in 3D</article-title>
          .
          <source>In: CAINE-</source>
          <year>2010</year>
          , 23rd International Conference on Computer Applications in Industry and Engineering, sponsored by the
          <source>International Society for Computers and Their Applications (ISCA)</source>
          . pp.
          <fpage>74</fpage>
          -
          <lpage>79</lpage>
          . Las Vegas, Nevada, USA (
          <year>2010</year>
          ), http://web.mst.edu/~chaman/home/pubs/2010CAINE_LasVegas.pdf
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Allen</surname>
            ,
            <given-names>J.F.</given-names>
          </string-name>
          :
          <article-title>Maintaining knowledge about temporal intervals</article-title>
          .
          <source>Communications of the ACM</source>
          <volume>26</volume>
          (
          <issue>11</issue>
          ),
          <fpage>832</fpage>
          -
          <lpage>843</lpage>
          (
          <year>1983</year>
          ), https://urresearch.rochester.edu/institutionalPublicationPublicView.action? institutionalItemId=
          <fpage>10115</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Balbiani</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Condotta</surname>
            ,
            <given-names>J.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Del Cerro</surname>
            ,
            <given-names>L.F.</given-names>
          </string-name>
          :
          <article-title>A model for reasoning about bidimensional temporal relations. In: PRINCIPLES OF KNOWLEDGE REPRESENTATION</article-title>
          AND
          <string-name>
            <surname>REASONING-INTERNATIONAL</surname>
            <given-names>CONFERENCE</given-names>
          </string-name>
          -. pp.
          <fpage>124</fpage>
          -
          <lpage>130</lpage>
          . MORGAN KAUFMANN PUBLISHERS (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cohn</surname>
            ,
            <given-names>A.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ouyang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          :
          <article-title>A survey of qualitative spatial representations</article-title>
          .
          <source>The Knowledge Engineering Review</source>
          <volume>30</volume>
          (
          <issue>01</issue>
          ),
          <fpage>106</fpage>
          -
          <lpage>136</lpage>
          (
          <year>2015</year>
          ), http://www.journals.cambridge. org/abstract_S0269888913000350
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Cohn</surname>
            ,
            <given-names>A.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gooday</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bennett</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>A Comparison of Structures in Spatial and Temporal Logics</article-title>
          .
          <source>Philosophy and the Cognitive Sciences: Proc. 16th lntl. Wittgenstein Symposium</source>
          (
          <year>1994</year>
          ), http://www.researchgate.net/profile/Anthony_Cohn/publication/2645325_A_Comparison_ Of_Structures_In_Spatial_And_Temporal_Logics/links/0046351bedba38dd6b000000.pdf
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Cohn</surname>
            ,
            <given-names>A.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hazarika</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          :
          <article-title>Qualitative Spatial Representations and Reasoning: An Overview</article-title>
          .
          <source>Fundamenta Informaticae</source>
          <volume>46</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>29</lpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Delafontaine</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cohn</surname>
            ,
            <given-names>A.G.</given-names>
          </string-name>
          , Van De Weghe, N.:
          <article-title>Implementing a qualitative calculus to analyse moving point objects</article-title>
          .
          <source>Expert Systems with Applications</source>
          <volume>38</volume>
          (
          <issue>5</issue>
          ),
          <fpage>5187</fpage>
          -
          <lpage>5196</lpage>
          (
          <year>2011</year>
          ), http://dx.doi.org/10.1016/j.eswa.2010.10.042
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Dylla</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frommberger</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wallgrün</surname>
            ,
            <given-names>J.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolter</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nebel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wölfl</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>SailAway: Formalizing navigation rules</article-title>
          .
          <source>In: Proceedings of the Artificial and Ambient Intelligence Symposium on Spatial Reasoning and Communication</source>
          ,
          <source>AISB'07</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          (
          <year>2007</year>
          ), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.64.6638&amp;rep=rep1&amp;type=pdf
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Dylla</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mossakowski</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schneider</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van Delden</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van de Ven</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolter</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>A Survey of Qualitative Spatial and Temporal Calculi - Algebraic and Computational Properties</article-title>
          .
          <source>CoRR abs/1606.00133</source>
          (
          <year>2016</year>
          ), http://arxiv.org/abs/1606.00133
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Dylla</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wallgrün</surname>
            ,
            <given-names>J.O.</given-names>
          </string-name>
          :
          <article-title>Qualitative Spatial Reasoning with Conceptual Neighborhoods for Agent Control</article-title>
          .
          <source>Journal of Intelligent and Robotic Systems</source>
          <volume>48</volume>
          (
          <issue>1</issue>
          ),
          <fpage>55</fpage>
          -
          <lpage>78</lpage>
          (Jan
          <year>2007</year>
          ), http://link.springer.com/10.1007/s10846-006-9099-4
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Egenhofer</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          :
          <article-title>Reasoning about binary topological relations</article-title>
          . In:
          <string-name>
            <surname>Günther</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schek</surname>
            ,
            <given-names>H.J.</given-names>
          </string-name>
          (eds.)
          <source>Advances in Spatial Databases: 2nd Symposium, SSD '91 Zurich, Switzerland, August 28-30</source>
          ,
          <year>1991</year>
          Proceedings, pp.
          <fpage>141</fpage>
          -
          <lpage>160</lpage>
          . Springer Berlin Heidelberg, Berlin, Heidelberg (
          <year>1991</year>
          ), http://dx.doi.org/10.1007/3-540-54414-3_36
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Freksa</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Temporal reasoning based on semi-intervals</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>54</volume>
          ,
          <fpage>199</fpage>
          -
          <lpage>227</lpage>
          (
          <year>1992</year>
          ), http://www.icsi.berkeley.edu/ftp/global/global/pub/techreports/1990/tr-90-016.pdf
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Glez-Cabrera</surname>
            ,
            <given-names>F.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Álvarez-Bravo</surname>
            ,
            <given-names>J.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Díaz</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>QRPC: A new qualitative model for representing motion patterns</article-title>
          .
          <source>Expert Systems with Applications</source>
          <volume>40</volume>
          (
          <issue>11</issue>
          ),
          <fpage>4547</fpage>
          -
          <lpage>4561</lpage>
          (
          <year>2013</year>
          ), http://dx.doi.org/10.1016/j.eswa.2013.01.058
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Hall</surname>
            ,
            <given-names>E.T.</given-names>
          </string-name>
          :
          <article-title>The hidden dimension</article-title>
          .
          <source>Doubleday Anchor Books</source>
          , Doubleday
          (
          <year>1966</year>
          ), http://www.edwardthall.com/hiddendimension/
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Hanheide</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peters</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellotto</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Analysis of human-robot spatial behaviour applying a qualitative trajectory calculus</article-title>
          .
          <source>Proceedings - IEEE International Workshop on Robot and Human Interactive Communication</source>
          pp.
          <fpage>689</fpage>
          -
          <lpage>694</lpage>
          (Sep
          <year>2012</year>
          ), http://webpages.lincoln.ac.uk/nbellotto/doc/Hanheide2012.pdf
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Kurata</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Interpreting Motion Expressions in Route Instructions Using Two Projection-Based Spatial Models</article-title>
          .
          <source>In: Proc. of KI 2008 (LNCS)</source>
          . vol.
          <volume>12</volume>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Moratz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Representing relative direction as a binary relation of oriented points</article-title>
          . In:
          <string-name>
            <surname>Brewka</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coradeschi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perini</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Traverso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (eds.) ECAI, pp.
          <fpage>407</fpage>
          -
          <lpage>411</lpage>
          (
          <year>2006</year>
          ), http://ebooks.iospress.nl/volumearticle/2721
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Moratz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Renz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolter</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Qualitative spatial reasoning about line segments</article-title>
          .
          <source>In Proceedings of the 14th European Conference on Artificial Intelligence (ECAI</source>
          <year>2000</year>
          ) pp.
          <fpage>234</fpage>
          -
          <lpage>238</lpage>
          (
          <year>2000</year>
          ), http://www.informatik.uni-bremen.de/kogrob/papers/ecai2000_dipol.pdf
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Mossakowski</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moratz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Qualitative Reasoning about Relative Direction on Adjustable Levels of Granularity</article-title>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          (
          <year>2010</year>
          ), http://arxiv.org/abs/1011.0098
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Randell</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cui</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cohn</surname>
            ,
            <given-names>A.G.</given-names>
          </string-name>
          :
          <article-title>A Spatial Logic based on Regions and Connection</article-title>
          .
          <source>In: Third International Conference on Principles of Knowledge Representation and Reasoning (KR1992)</source>
          . pp.
          <fpage>165</fpage>
          -
          <lpage>176</lpage>
          (
          <year>1992</year>
          ), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.7809&amp;rep=rep1&amp;type=pdf
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Renz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>A spatial odyssey of the interval algebra: 1. Directed intervals</article-title>
          .
          <source>In: IJCAI International Joint Conference on Artificial Intelligence</source>
          . pp.
          <fpage>51</fpage>
          -
          <lpage>56</lpage>
          . Morgan Kaufmann Publishers Inc. (
          <year>2001</year>
          ), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.1364&amp;rep=rep1&amp;type=pdf
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Van de Weghe</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Representing and Reasoning about Moving Objects: A Qualitative Approach</article-title>
          .
          <source>Ph.D. thesis</source>
          , Ghent University (
          <year>2004</year>
          ), http://hdl.handle.net/1854/LU-668977
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Claramunt</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Towards a Qualitative Representation of Movement</article-title>
          . In:
          <string-name>
            <surname>Indulska</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Purao</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (eds.)
          <source>Advances in Conceptual Modeling, Lecture Notes in Computer Science</source>
          , vol.
          <volume>8823</volume>
          , pp.
          <fpage>191</fpage>
          -
          <lpage>200</lpage>
          . Springer International Publishing (
          <year>2014</year>
          ), http://dx.doi.org/10.1007/978-3-319-12256-4_20
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>