Mobile Objects and Sensors within a Video Surveillance System: Spatio-temporal Model and Queries. CEUR Workshop Proceedings Vol-1075, https://ceur-ws.org/Vol-1075/05.pdf
     Mobile objects and sensors within a video surveillance
          system: Spatio-temporal model and queries
                                Dana Codreanu, Ana-Maria Manzat, Florence Sedes
                                         Université de Toulouse – IRIT – UMR 5505
                                   118 Route de Narbonne, 31062 Toulouse Cedex 9, France
                                              {codreanu, manzat, sedes}@irit.fr

ABSTRACT
The videos recorded by video surveillance systems represent a key element in a police inquiry. Based on a spatio-temporal query specified by a victim (e.g., the trajectory of the victim before and after the aggression), human operators select the cameras that could contain relevant information and analyse the corresponding video contents. This task becomes cumbersome because of the huge volume of video content and the cameras' mobility. This paper presents an approach that assists the operator in this task and reduces the search space. We propose to model the camera network (fixed and mobile cameras) on top of the city's transportation network. We consider the video surveillance system as a multilayer geographic information system, where the cameras are situated in a distinct layer, added on top of the other layers (e.g., roads, transport) and related to them by location. The model is implemented in a spatio-temporal database. Our final goal is, based on a spatio-temporal query, to automatically extract the list of cameras (fixed and mobile) concerned by the query. We propose to include this automatically computed relative position of the cameras as an extension of the ISO 22311 standard.

1. INTRODUCTION
The number of video surveillance cameras keeps increasing in public and private areas (e.g., in train and metro stations, on board buses and trains, inside commercial areas, inside enterprise buildings). For example, some estimations show that there are more than 400,000 cameras in London, and that the surveillance system of the RATP (Régie Autonome des Transports Parisiens, the Paris public transport operator) alone comprises around 9,000 cameras. Under these conditions, any person who lives and walks in these two big European capitals is likely to be captured many times during a day (up to 300 times in London) by several video surveillance systems (e.g., the traffic surveillance cameras, the cameras in the subway, and the cameras of a commercial centre). The only markers available for all these videos are the id of the camera (possibly GPS coordinates) and a local date/timestamp, which are not homogeneous across the different systems.

A great majority of the existing video surveillance systems are manual or semi-automatic (they employ some form of video processing, but with significant human intervention) [11]. Given the huge amount of video content that needs to be handled, the purely manual approach (agents watching the videos and detecting events) becomes insufficient. The main objective in the video surveillance domain is to provide users with tools that assist them in their search by reducing the search space and therefore the response time. These tools depend on the context and complexity of the search (e.g., real-time surveillance of big events, police inquiry) [22].

Our work is situated in the context of the police inquiry, which involves an a posteriori processing of the data in order to help the investigator highlight (isolate) the relevant elements (e.g., persons, events). To do that, the investigators have at their disposal the set of recorded videos from different video surveillance systems (e.g., public, private, RATP). In order to assist the investigators in their tasks, it is important that the outputs of the different systems be interoperable, which is not currently the case. Interoperability between any video surveillance systems, from simple ones with only a few cameras to large-scale systems, is the main goal of the ISO 22311 standard(1). It specifies a format for the data which can be exchanged between video surveillance systems in the inquiry context.

This standard does not consider the video surveillance cameras' mobility or the modification of their fields of view. In fact, in the beginnings of video surveillance systems the cameras were placed in fixed locations in order to monitor indoor and outdoor places. With the improvements in hardware and software technologies, on-board cameras are more and more employed in mobile vehicles (e.g., buses, police cars). This mobility of the cameras makes the task of security agents even more difficult in the context of an inquiry, when they have to analyse a huge amount of video content and need supplementary knowledge of the system's characteristics (e.g., the bus timetables, the city transport plan) in order to select the most appropriate video contents.

In this context, our goal is to provide users with tools that assist them in their search and reduce the search space. In order to achieve this objective, in this article we propose an extension of the ISO 22311 standard in order to take into account

(1) http://www.iso.org/iso/fr/catalogue_detail.htm?csnumber=5346
                                                                         1
          Proceedings IMMoA’13                                          52        http://www.dbis.rwth-aachen.de/IMMoA2013/
the cameras' mobility. We consider the video surveillance system as a multilayer geographic information system, where the cameras are situated on a distinct layer, which is added on top of the other layers (e.g., roads, transport) through the location. We implemented our solution using a spatial database in order to select the cameras that might have acquired video contents corresponding to a user's spatio-temporal query.

The remainder of this paper is organized as follows. After a review in Section 2 of related work concerning the three aspects addressed in this paper (video surveillance systems, the ISO 22311 standard and mobile objects modelling), Section 3 presents our multilayer modelling approach. This model is implemented using a spatio-temporal database. Some queries that can be answered based on this database are presented in Section 4. Finally, Section 5 concludes and discusses possible future research.

2. STATE OF THE ART
2.1 Video Surveillance Systems
The generic schema of a video surveillance system is illustrated in Figure 1. The content is captured and stored in a distributed manner and analysed in a control centre by human operators who watch a certain number of screens displayed in a matrix (the Video Wall in Figure 1).

Figure 1: Video surveillance system's schema

There is a big diversity of cameras and sensors constituting the acquisition part of surveillance systems, and a heterogeneity of their installation contexts (e.g., in the halls or on the platforms of railway or metro stations, on board trains and buses, on the streets, in commercial centres or office buildings). Therefore, we have fixed and mobile cameras with different, mostly dynamic, technical characteristics (see Figure 2 for an example of such cameras) [14]:
• Camera type: optical, thermal, infrared;
• Sensor type and dimension: CMOS, CCD;
• Transmission type: analogue/IP;
• Angle of view (horizontal and vertical), focal distance, pan-tilt-zoom, field of view orientation, visible distance, etc.

Figure 2: Examples of video surveillance cameras having the same position but different fields of view

We started by analysing the way a query is processed in a video surveillance system today. When a person (for example, the victim of an aggression) files a complaint, he is asked to fill in a form describing the elements that could help the investigators find the relevant video segments (Figure 3 illustrates an example of such a form). Based on the spatial and temporal aspects of the query, the surveillance operator uses his own knowledge of the spatial disposition of the camera network in order to select the most relevant video contents. He then analyses these contents by playing them on the different screens he has in front of him. The monitors themselves show no spatial relationship of any kind; only the numbering of the cameras is in a somewhat logical order.

Figure 3: Example of a form filled in by a victim

Therefore, the operators' tasks become cumbersome considering the huge volume of video content to be analysed and the mobility and varying characteristics of the cameras. Moreover, in current systems, most of the stored content is not exploitable because of the recordings' low quality. This lack of quality is often caused by inappropriate installation of cameras, bad shooting, bad illumination conditions, etc. The operator has no a priori knowledge of the quality of the video contents and thus also loses time visualizing the low-quality contents.
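To make this selection step concrete, the filtering that the operator performs by hand can be sketched as a simple spatio-temporal filter over camera records. This is only an illustration under assumed data: the record fields and the circular proximity test below are our own hypothetical choices, not part of any system described in this paper.

```python
from dataclasses import dataclass

# Hypothetical minimal camera record; field names are illustrative.
@dataclass
class Camera:
    cam_id: str
    x: float          # 2D position in a local metric grid
    y: float
    start: int        # recording availability window (seconds)
    end: int

def select_candidates(cameras, qx, qy, t1, t2, radius):
    """Keep cameras whose availability overlaps [t1, t2] and whose
    position lies within `radius` metres of the queried location."""
    hits = []
    for c in cameras:
        overlaps = c.start <= t2 and c.end >= t1
        near = (c.x - qx) ** 2 + (c.y - qy) ** 2 <= radius ** 2
        if overlaps and near:
            hits.append(c.cam_id)
    return hits

cams = [
    Camera("C1", 10.0, 0.0, 0, 3600),
    Camera("C2", 500.0, 0.0, 0, 3600),    # too far from the query point
    Camera("C3", 20.0, 5.0, 4000, 5000),  # outside the time window
]
print(select_candidates(cams, 0.0, 0.0, 100, 200, 50.0))  # ['C1']
```

Even this naive sketch shows why the manual approach does not scale: the operator must mentally intersect time windows and camera positions for every camera in the network.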
The video surveillance domain has seen a big number of commercial systems developed [8]. In the research area, many projects were developed as well: CROMATICA [5], CARETAKER(2) [3] and VANAHEIM(3) for indoor static video surveillance, and the SURTRAIN [20], BOSS(4) [13] and PROTECTRAIL(5) projects for on-board mobile surveillance. All these heterogeneous projects concentrate on the development of the system's physical architecture and of better detection algorithms in order to obtain a fully automatic system [12], [24].

We can summarize by saying that there is a growing concern in research and industrial environments for developing video content analysis (VCA) algorithms in order to automatically index content and detect objects (e.g., abandoned packets or luggage) and events (e.g., intrusions, people or vehicles going the wrong way) [16], or to draw operators' attention to events of interest (e.g., alarms). However, solutions for assisting a posteriori investigation are at a lesser stage of maturity, and to date most of the data remain unexploited.

In this article, we also address the lack of interoperability between different surveillance systems. In the context of an inquiry, the police might need to analyse data from different sources (systems), so it is important that the outputs of the different systems be interoperable. As a consequence, the big actors of the domain have started to unify their efforts in order to standardize the structure of the folders and metadata files generated by video surveillance systems. A result of these efforts is the ISO 22311 standard, which proposes a structure for the data issued from video surveillance systems and for the metadata needed to exploit that data.

In the following, we present the ISO 22311 standard, especially the part concerning the description of the cameras' characteristics and mobility, and we highlight the elements which relate to our research.

2.2 Standard ISO 22311
The ISO 22311 standard defines an interoperability format for the data generated by video surveillance systems and for the metadata needed to exploit these huge volumes of data.

The audio-visual packages (containing audio, video or metadata files) have to be structured hierarchically (in files, folders and groups of folders) according to time intervals in Coordinated Universal Time (UTC). For each group of folders, the system must provide an XML description of the source(s) (e.g., cameras, GPS, video analysis tools), the codec(s), the file formats, and a temporal index enabling easy access to the content.

Current technologies and processing power enable the analysis of video content and the extraction of metadata describing objects, events, scenes, etc. This analysis depends on the acquisition context (e.g., the position of the camera, the image quality, the type of sensors). Therefore, the standard distinguishes the systems that can generate such metadata (i.e., level 2 systems) and provides a general structure and dictionary for describing sensors and events (i.e., metadata).

As this paper addresses the problem of cameras' geo-localization, we present the schema for the sensor description in Figure 4.

Figure 4: ISO 22311 sensor description

(2) http://cordis.europa.eu/ist/kct/caretaker_synopsis.htm
(3) http://www.vanaheim-project.eu/
(4) http://celtic-boss.mik.bme.hu/
(5) http://www.protectrail.eu/
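The UTC-based hierarchy described above can be illustrated with a short sketch. The standard only mandates grouping by UTC time intervals; the day/hour layout and the file naming below are hypothetical choices of ours, not the normative structure.

```python
from datetime import datetime, timezone

def storage_path(source_id: str, t: datetime) -> str:
    """Illustrative layout: one group of folders per day, one folder per
    hour, files named by source and minute/second. Only the UTC-based
    hierarchy is required by ISO 22311; the exact names are assumptions."""
    t = t.astimezone(timezone.utc)
    group = t.strftime("%Y-%m-%d")    # group of folders: one per day
    folder = t.strftime("%Hh")        # folder: one per hour
    name = f"{source_id}_{t.strftime('%M%S')}.mp4"
    return f"{group}/{folder}/{name}"

ts = datetime(2013, 7, 14, 10, 30, 0, tzinfo=timezone.utc)
print(storage_path("CAM042", ts))  # 2013-07-14/10h/CAM042_3000.mp4
```

Structuring the packages by UTC intervals is what makes the temporal part of an inquiry query easy to resolve: a time window maps directly onto a set of folders.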
Each camera has an absolute location (GPS coordinates), as more and more of the installed cameras have an embedded GPS transmitter. But there are many cases where GPS is not enough, because: (1) we need to model the position of the camera with regard to the video surveillance system and not to the world; (2) in some situations, for example in indoor environments, GPS positions do not provide good precision.

In the context of a video surveillance system:
• The mobile cameras are embedded in buses, trains and police cars;
• The movement of these vehicles is constrained by a road network and a transportation network.

By analysing the standard, we can notice that the relative position it defines for a camera is today a simple link to an image (the plan of the network of cameras or of a building). This kind of location is not easily exploitable. Furthermore, the standard does not consider the video surveillance cameras' mobility. In order to overcome these issues, we propose to extend this standard through a multilayer modelling approach, where the network of cameras is put on top of a transportation network.

In the following, we present a state of the art of mobile objects modelling, as the management of the cameras' mobility represents the main focus of this paper.

2.3 Mobile Objects Modelling
With the evolution of technology, mobility has become very important in the context of video surveillance systems. Not only the objects (e.g., persons, cars) are moving in the monitored scene, but also the surveillance cameras themselves. The great majority of the research papers concerning mobile objects in the video surveillance domain concentrate on video content analysis in order to detect and track the objects, interpret their behaviour and understand the visual events of the monitored scene [10]. Thus, the mobility of the cameras is not exploited.

In the field of moving objects, a mobile object means the continuous evolution of any object over time, in terms of position and dimension [21]. The movement of mobile objects can take place in an unconstrained environment [18] (e.g., hurricanes, fires) or in a constrained environment [17] (e.g., cars moving on road and transportation networks).

In the video surveillance domain, the objects move in a constrained environment, mainly the road network. This environment is represented as a graph-based model [6], [15], [25], where the vertices are junctions and the edges are the roads between two junctions. [9] also considers the connectivity at each junction in order to represent the road network. [19] extends the model proposed by [9] in order to consider the predefined trajectories that some objects can have (e.g., buses). [7] proposes a mobile object data model that considers the road and rail networks. [2] models the transport network of a city as a graph and adds to each graph vertex the available transport modes (i.e., pedestrian, auto, urban rail, metro, bus).

In the management of mobile objects, a major issue is the storage of the objects' spatio-temporal positions. Several strategies can be considered: using the spatio-temporal data types defined by [9] (e.g., moving points, moving lines, moving regions), or using dynamic attributes [23] (e.g., a motion vector), which makes it possible to limit the size of the data that has to be stored and queried.

As far as we know, the cameras' mobility is not taken into account in the video surveillance domain. In this article, we want to exploit the advances in the field of mobile objects and apply them to the video surveillance domain in order to consider the mobile aspect of surveillance cameras.

3. Extension of the Standard 22311 for the management of cameras' mobility
As shown in Section 2.2, the Standard 22311 defines a fixed position for a video surveillance camera, through its GPS coordinates and a link to an image containing the plan of the network. In order to overcome this issue, we propose to compute a relative position with regard to a map, which will enable us to:
• Model the distances between the cameras and select the relevant cameras for a certain trajectory;
• Model the connections between the cameras (e.g., a possible path between camera1 and camera2, but not between camera2 and camera3);
• Model trajectories for mobile cameras;
• Model the fields of view and the maximum detection distances of fixed and mobile cameras.

In order to achieve this goal, we take our inspiration from the domain of GIS (Geographical Information Systems) [4] and from mobile objects modelling.

By considering the video surveillance system as a GIS, we benefit from the separation between conceptual layers: at any time, a new layer can be added without modifying the existing layers.

In our approach, we propose a four-layer model: (1) Road network, (2) Transportation network, (3) Objects and (4) Cameras network. Figure 5 illustrates the UML model for the first three layers.

The "Road network" layer, presented in blue in Figure 5, is based on the graph modelling approach well known in the literature. The road network is considered as an undirected graph G = (V, E), with V a set of vertices and E a set of edges defined according to the granularity level that we want to consider (for a big boulevard of a European capital, for example, we can consider each segment of the road, each segment between two intersections, or the entire boulevard). Each vertex has an identifier and a 2D position. Each edge is determined by two vertices.

The "Transportation network" layer, presented in yellow in Figure 5, is also based on a graph model. At this level, the vertices of the transportation network are intersections between roads, and bus stations. Each transportation vertex has a position with regard to a road segment. Ordered sequences of transportation vertices constitute sections, which form lines (e.g., bus lines). The advantage of our approach over the ones proposed in the state of the art [9] is that we have two independent graphs that are connected to each other through the positions of the transportation vertices. That way, if the bus stations are modified or new bus
lines are introduced, we do not have to recompute the underlying road graph.

The "Objects" layer, presented in red in Figure 5, models the positions of fixed and mobile objects with regard to the underlying layers.

A Fixed Object has a position on a road segment. Its position is defined as a distance from each end of the segment. For this kind of object, we adopt the same localisation as the one proposed by [9].

In the case of Mobile Objects (e.g., buses, police cars, persons), the position changes in time. Each object periodically transmits its position using different strategies (e.g., every Δt seconds, each time the object changes segment, or when the object's position predicted by the motion vector deviates from the real position by more than a threshold [23]) that are out of the scope of this article. We suppose that we periodically receive updates containing time-stamped GPS points, which we transform into a relative position with regard to the road network (i.e., the segments). We use this information to reconstitute the object's trajectory.

We distinguish two types of mobile objects: objects that move freely within the road and transportation networks (e.g., cars, persons) and objects whose trajectories are constrained by a "line" (e.g., buses).

Figure 5: "Road network", "Transportation network" and "Objects" layers

On top of all these layers, we model the video surveillance camera network. A simplified schema of this model is illustrated in Figure 6.

Figure 6: "Cameras network" layer

The camera network is composed of fixed and mobile cameras. The fixed cameras have a 2D position that is given at installation time. The mobile cameras are associated with mobile objects (e.g., buses) and their trajectory is the same as the object's.

The new generation of digital surveillance cameras has embedded GPS transmitters and even compasses. The technologies developed around these cameras make it possible to automatically extract information from the camera related to its orientation, pan, tilt, zoom, focal distance, compression parameters, etc.

Based on all these elements, it is possible to model the field of view of each camera and track its modifications in time. The field of view is computed based on four parameters [1]: the 2D position, the viewable angle, the orientation and the visible distance. A schema of the 2D field of view proposed by [1] is shown in Figure 7, where P is the camera location, θ the viewable angle, d the camera direction vector and R the visible distance.

Figure 7: Illustration of the field of view model in 2D [1]
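The point-in-sector test implied by these four parameters can be sketched as follows. This is a minimal illustration of the 2D model, not the implementation of [1]; the function and variable names are ours.

```python
import math

def in_field_of_view(P, d, theta, R, q):
    """Return True if point q lies inside the 2D field of view defined by
    position P, direction vector d, viewable angle theta (radians) and
    visible distance R (the four-parameter model of [1])."""
    vx, vy = q[0] - P[0], q[1] - P[1]
    dist = math.hypot(vx, vy)
    if dist > R:
        return False          # beyond the visible distance
    if dist == 0:
        return True           # the camera's own position is trivially covered
    # angle between the viewing direction and the camera-to-point vector
    dot = (vx * d[0] + vy * d[1]) / (dist * math.hypot(d[0], d[1]))
    angle = math.acos(max(-1.0, min(1.0, dot)))
    return angle <= theta / 2

cam_P, cam_d = (0.0, 0.0), (1.0, 0.0)      # camera at origin, facing east
theta, R = math.radians(60), 100.0          # 60° viewable angle, 100 m range
print(in_field_of_view(cam_P, cam_d, theta, R, (50.0, 10.0)))   # True
print(in_field_of_view(cam_P, cam_d, theta, R, (50.0, 60.0)))   # False: outside the angle
print(in_field_of_view(cam_P, cam_d, theta, R, (150.0, 0.0)))   # False: beyond R
```

Tracking a mobile camera's field of view over time then amounts to re-evaluating this sector as P and d change along the object's trajectory.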
Figure 8: General architecture of the system

In order to select the most appropriate attributes to describe a video surveillance camera, we studied the sensor descriptions proposed by the ISO 22311 standard, SensorGML(6) and KML(7). We separated the identified camera properties into two categories: properties that can be modified over time, and fixed characteristics.

Thus, the extension of the ISO 22311 standard is realised at three levels:
• Taking into account the road and transportation networks as a graph and not as an image;
• Taking into account the camera's relative position and its mobility on the networks;
• Taking into account the changes of the camera's characteristics over time.

Our model is implemented in a spatio-temporal database that can be queried by users in order to retrieve the relevant cameras for a given trajectory. The originality of our research work lies in:
• the fact that it combines different spatio-temporal information (e.g., road network, transportation network, objects' positions) and computation (e.g., trajectories, fields of view) within the same database;
• the twofold mobility, of the target objects and of the cameras.

In the next section, we present the general architecture of the tool that could assist video surveillance operators in their search, based on our spatio-temporal database, together with some examples of queries.

More precisely, the idea is to compare a spatio-temporal query of the user (e.g., Rivoli Street from the Louvre to Metro Châtelet on the 14th of July between 10h and 14h) with the trajectories stored in our database and, for better precision, with the cameras' fields of view. Figure 8 illustrates the generic architecture of a system based on our spatio-temporal database for assisting the video surveillance operators in their search.

From Figure 8 it is easy to observe that there are two main questions when developing such a system: How to query the system? and How to update the system? As explained in the previous section, our work addresses only the querying aspect, which we describe in the following.

First, a Query Interpreter module transforms the user query (e.g., Rivoli Street from the Louvre to Metro Châtelet on the 14th of July between 10h and 14h) into a spatio-temporal query. By spatio-temporal query we understand a sequence of road segments and a time interval, which is further transformed into an SQL query by the SQL Query Generator module. The SQL query is executed on the database, yielding a list of cameras. Based on some image quality parameters, a score per camera can be computed, and the initial list can then be ranked according to this relevance score.

In the following, we present two examples of spatio-temporal queries executed on our database implemented in Oracle Spatial(8):
• The first selects the fixed cameras whose geometry (field of view) intersects the geometry of Rivoli Street:

SELECT IdCamera
FROM FixedCamera
                                                                             WHERE SDO_RELATE(
                                                                                          camera_geom,
                                                                                          (SELECT street_geom
4. Spatiotemporal database and queries                                                         FROM Road
      Based on the presented model, our goal is to automatically
                                                                                               WHERE Name ='Rivoli‘ ),
select the cameras (fixed and mobile) that could contain relevant
                                                                                         'mask=OVERLAPBDYDISJOINT querytype=WINDOW'
video content with regards to the user query (their field of view
                                                                             )='TRUE';
intersected the query trajectory).

6
    http://www.opengeospatial.org/standards/sensorml                         8
                                                                              http://www.oracle.com/fr/products/database/options/spatial/index
7
    http://www.opengeospatial.org/standards/kml                                .html
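To make this querying pipeline concrete, the sketch below (in Python) mirrors the Query Interpreter, SQL Query Generator and ranking steps described in this section. The table and column names FixedCamera, Road, IdCamera, camera_geom and street_geom come from the query above; everything else (the class and function names, the SegmentId column, the quality scores) is a hypothetical illustration, not the project's actual code.

```python
from dataclasses import dataclass

@dataclass
class SpatioTemporalQuery:
    """Output of the Query Interpreter: road segments plus a time interval."""
    road_segments: list  # identifiers of the matched segments, e.g. along Rivoli Street
    start: str           # interval bounds as ISO timestamps
    end: str

def generate_sql(query: SpatioTemporalQuery) -> str:
    """SQL Query Generator sketch: build an Oracle Spatial statement selecting
    the fixed cameras whose field of view intersects any queried road segment
    (bind variables :segN stand for the segment identifiers)."""
    placeholders = ", ".join(":seg%d" % i for i in range(len(query.road_segments)))
    return (
        "SELECT f.IdCamera FROM FixedCamera f, Road r "
        "WHERE r.SegmentId IN (%s) "
        "AND SDO_RELATE(f.camera_geom, r.street_geom, "
        "'mask=OVERLAPBDYDISJOINT querytype=WINDOW') = 'TRUE'" % placeholders
    )

def rank_cameras(camera_ids, quality_score):
    """Re-rank the camera list by a per-camera image-quality score
    (higher is better); cameras without a score default to the lowest."""
    return sorted(camera_ids, key=lambda c: quality_score.get(c, 0.0), reverse=True)
```

A query such as "Rivoli Street between 10h and 14h" would thus become a SpatioTemporalQuery over the matched segments, be compiled into SQL, and the resulting camera list re-ranked by quality.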

          Proceedings IMMoA’13                                        57           http://www.dbis.rwth-aachen.de/IMMoA2013/
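The spatial predicate in the first query can also be pictured outside the database: a camera is relevant when its field of view, simplified here to a polygon, intersects the query trajectory, modelled as a polyline of road points. The following pure-Python sketch is our own simplified stand-in for what Oracle Spatial's SDO_RELATE computes, not the system's code; all function names are ours.

```python
def _orient(a, b, c):
    # Sign of the cross product (b-a) x (c-a): >0 left turn, <0 right turn, 0 collinear.
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def _segments_cross(p1, p2, q1, q2):
    # Proper intersection test for segments p1p2 and q1q2
    # (degenerate collinear overlaps are ignored in this sketch).
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return d1*d2 < 0 and d3*d4 < 0

def _inside(pt, poly):
    # Ray-casting point-in-polygon test.
    inside, n = False, len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i+1) % n]
        if (y1 > pt[1]) != (y2 > pt[1]) and \
           pt[0] < (x2-x1)*(pt[1]-y1)/(y2-y1) + x1:
            inside = not inside
    return inside

def fov_sees_trajectory(fov_polygon, trajectory):
    """True if the camera 'saw' the trajectory: some trajectory point lies
    inside the field-of-view polygon, or some trajectory segment crosses it."""
    if any(_inside(p, fov_polygon) for p in trajectory):
        return True
    edges = [(fov_polygon[i], fov_polygon[(i+1) % len(fov_polygon)])
             for i in range(len(fov_polygon))]
    return any(_segments_cross(a, b, e1, e2)
               for a, b in zip(trajectory, trajectory[1:])
               for e1, e2 in edges)
```

In practice this test is of course delegated to the spatial database, as the queries in this section do.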
     - The second selects the mobile cameras associated with the buses that crossed the street within the given time interval:

LET TimePeriod = Timestamp(hour(2013,1,14,10), hour(2013,1,14,12));

SELECT ObjetID
FROM   ConstrainedObject
WHERE  Type.MobileObject = "Bus" AND
       TimePeriod.ConstrainedObject(atperiods(Timestamp, TimePeriod));

SELECT DISTINCT IdMobileCamera
FROM   ConstrainedObject, FreeObject, MobileCamera
WHERE  Intersect(MobileCamera.geom, ConstrainedObject.geom) AND
       Intersect(MobileCamera.geom, FreeObject.geom);

5. CONCLUSION
     In this paper, we presented a spatio-temporal modelling approach for fixed and mobile cameras within a common transportation network. Taking our inspiration from the multilayer representation used in geographical information systems, we model the spatial information about the road and transportation infrastructures and the mobile objects' trajectories in four independent layers: (1) road network, (2) transportation network, (3) objects and (4) camera network.

     Based on this modelling approach, we also proposed a generic architecture for a system that could assist video surveillance operators in their investigations. Starting from a sequence of trajectory segments and a temporal interval, such a system generates the list of cameras that could contain information relevant to the query, i.e., the cameras that "saw" the query's trajectory.

     The need for such assisting tools was identified within the French national project METHODEO, whose partners include the French National Police, Thales and the RATP (Régie Autonome des Transports Parisiens, the Paris public transport operator). Our approach has been validated and will be evaluated within the project.

     Obviously, many questions remain open, giving way to a large number of perspectives, several of which we present in the following.

     For now, our model considers only outdoor transportation and surveillance networks. We plan to extend it to indoor spaces as well, for example in order to model cameras inside train or subway stations.

     Our work is situated in the context of a posteriori investigation during a police inquiry. In the future, we would like to extend this context in order to process real-time queries, or to predict trajectories based on statistics computed from the stored data (e.g., the average speed on some road segments).

     Another perspective is the improvement of the resulting camera list by re-ranking it according to camera characteristics (e.g., image quality, visible distance).

6. ACKNOWLEDGMENTS
This work has been supported by the ANR CSOSG-National Security (French National Research Agency) project METHODEO.

7. REFERENCES
[1] Ay S. A., Zimmermann R., and Kim S. O. Relevance Ranking in Georeferenced Video Search. Multimedia Systems Journal, 16, 2 (March 2010), Springer, 105-125.
[2] Booth J., Sistla P., Wolfson O., and Cruz I. F. A data model for trip planning in multimodal transportation systems. In Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology, ACM, 2009, 994-1005.
[3] CARETAKER Consortium. Caretaker puts knowledge to good use. In European Public Transport Magazine, 2008.
[4] Chang K.-T. Introduction to Geographic Information Systems. McGraw-Hill Higher Education, 2006, 450 pages.
[5] Deparis J.-P., Velastin S. A., and Davies A. C. Cromatica project. In Advanced Video-Based Surveillance Systems, The Springer International Series in Engineering and Computer Science, volume 488, 1999, 203-212.
[6] Ding Z., and Deng K. Collecting and managing network-matched trajectories of moving objects in databases. In Proceedings of the 22nd International Conference on Database and Expert Systems Applications, LNCS 6860, Springer, 2011, 270-279.
[7] El Bouziri A., Boulmakoul A., and Laurini R. Mobile Object and Real Time Information System Modeling for Urban Environment. In Proceedings of the 26th Urban and Regional Data Management Symposium, 2007, 403-413.
[8] GDANSK, KU. Deliverable 2.1 – Review of existing smart video surveillance systems capable of being integrated with ADDPRIV. ADDPRIV Consortium, 2011, www.addpriv.eu.
[9] Güting R. H., Almeida V. T., and Ding Z. Modeling and Querying Moving Objects in Networks. VLDB Journal, 15, 2 (2006), 165-190.
[10] Joshi K. A., and Thakore D. G. A Survey on Moving Object Detection and Tracking in Video Surveillance System. International Journal of Soft Computing and Engineering, 2, 3 (July 2012), 44-48.
[11] Kim I. S., Choi H. S., Yi K. M., Choi J. Y., and Kong S. G. Intelligent Visual Surveillance - A Survey. International Journal of Control, Automation and Systems, 8, 5 (April 2010), 926-939.
[12] Lakshmi Devasena C., Revathí R., and Hemalatha M. Video Surveillance Systems - A Survey. International Journal of Computer Science Issues, 8, 4 (July 2011), 635-642.


[13] Lamy-Bergot C., Ambellouis S., Khoudour L., Sanz D., Malouch N., Hocquard A., Bruyelle J.-L., Petit L., Cappa A., Barro A., Villalta E., Jeney G., and Egedy K. Transport system architecture for on-board wireless secured a/v surveillance and sensing. In Proceedings of the 9th International Conference on Intelligent Transport Systems Telecommunications, IEEE, 2009, 564-568.
[14] Le Barz C., and Lamarque T. Video Surveillance Cameras. In Jean-Yves Dufour, editor, Intelligent Video Surveillance Systems, chapter 4, Wiley, November 2012, 33-46.
[15] Liu K., Li Y., He F., Xu J., and Ding Z. Effective Map-matching on the Most Simplified Road Network. In Proceedings of the 20th International Conference on Advances in Geographic Information Systems (SIGSPATIAL '12), ACM, 2012, 609-612.
[16] Marraud D., Cepas B., and Reithler L. Semantic browsing of video surveillance databases through Online Generic Indexing. In Proceedings of the 3rd ACM/IEEE International Conference on Distributed Smart Cameras, 2009, 1-8.
[17] McKenney M., and Schneider M. Spatial Partition Graphs: A Graph Theoretic Model of Maps. In Proceedings of the 10th International Symposium on Spatial and Temporal Databases, LNCS 4605, Springer, 2007, 167-184.
[18] Parent Ch., Spaccapietra S., and Zimányi E. Conceptual Modeling for Traditional and Spatio-Temporal Applications: The MADS Approach. Springer-Verlag New York, 2006.
[19] Popa I. S., and Zeitouni K. Modeling and Querying Mobile Location Sensor Data. In Proceedings of the 4th International Conference on Advanced Geographic Information Systems, Applications, and Services, IARIA, 2012, 222-231.
[20] Priam Q.-C., Lapeyronnie A., Baudry C., Lucat L., Sayd P., Ambellouis S., Sodoyer D., Flancquart A., Barcelo A.-C., Heer F., Ganansia F., and Delcourt V. Audio-video surveillance system for public transportation. In Proceedings of the 2nd International Conference on Image Processing Theory, Tools and Applications, IEEE, 2010, 47-53.
[21] Schneider M. Moving Objects in Databases and GIS: State-of-the-Art and Open Problems. In Research Trends in Geographic Information Science, Springer-Verlag, 2009, 169-188.
[22] Sedes F., Sulzer J.-F., Marraud D., Mulat Ch., and Cepas B. A Posteriori Analysis for Investigative Purposes. In Jean-Yves Dufour, editor, Intelligent Video Surveillance Systems, chapter 3, Wiley, November 2012, 33-46.
[23] Sistla A. P., Wolfson O., Chamberlain S., and Dao S. Querying the Uncertain Position of Moving Objects. In Temporal Databases: Research and Practice, LNCS 1399, Springer, 1998, 310-337.
[24] Soro S., and Heinzelman W. A Survey of Visual Sensor Networks. In Advances in Multimedia, vol. 2009, 2009, 1-22.
[25] Xu J., Guo L., Ding Z., Sun X., and Liu C. Traffic aware route planning in dynamic road networks. In Proceedings of the 17th International Conference on Database Systems for Advanced Applications, LNCS 7238, 2012, 576-591.



