<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Mobile objects and sensors within a video surveillance system: Spatio-temporal model and queries</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Dana</forename><surname>Codreanu</surname></persName>
							<email>codreanu@irit.fr</email>
							<affiliation key="aff0">
								<orgName type="laboratory">UMR</orgName>
								<orgName type="institution" key="instit1">Université de Toulouse</orgName>
								<orgName type="institution" key="instit2">IRIT</orgName>
								<address>
									<addrLine>5505 ; 118 Route de Narbonne</addrLine>
									<postCode>31062, Cedex 9</postCode>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ana-Maria</forename><surname>Manzat</surname></persName>
							<email>manzat@irit.fr</email>
							<affiliation key="aff0">
								<orgName type="laboratory">UMR</orgName>
								<orgName type="institution" key="instit1">Université de Toulouse</orgName>
								<orgName type="institution" key="instit2">IRIT</orgName>
								<address>
									<addrLine>5505 ; 118 Route de Narbonne</addrLine>
									<postCode>31062, Cedex 9</postCode>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Florence</forename><surname>Sedes</surname></persName>
							<email>sedes@irit.fr</email>
							<affiliation key="aff0">
								<orgName type="laboratory">UMR</orgName>
								<orgName type="institution" key="instit1">Université de Toulouse</orgName>
								<orgName type="institution" key="instit2">IRIT</orgName>
								<address>
									<addrLine>5505 ; 118 Route de Narbonne</addrLine>
									<postCode>31062, Cedex 9</postCode>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Mobile objects and sensors within a video surveillance system: Spatio-temporal model and queries</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C2034D2ED8F7FE1DC9E5428CE7B0C331</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:03+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The videos recorded by video surveillance systems represent a key element in a police inquiry. Based on a spatio-temporal query specified by a victim (e.g., the trajectory of the victim before and after the aggression), the human operators select the cameras that could contain relevant information and analyse the corresponding video contents. This task becomes cumbersome because of the huge volume of video content and the cameras' mobility. This paper presents an approach that assists the operator in this task and reduces the search space. We propose to model the camera network (fixed and mobile cameras) on top of the city's transportation network. We consider the video surveillance system as a multilayer geographic information system, where the cameras are situated in a distinct layer, which is added on top of the other layers (e.g., roads, transport) and is related to them by location. The model is implemented in a spatio-temporal database. Our final goal is to automatically extract, from a spatio-temporal query, the list of cameras (fixed and mobile) concerned by the query. We propose to include this automatically computed relative position of the cameras as an extension of the ISO 22311 standard.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>The number of video surveillance cameras keeps increasing in public and private areas (e.g., in train and metro stations, on board buses and trains, inside commercial areas and enterprise buildings). For example, some estimates indicate that there are more than 400,000 cameras in London, and that the surveillance system of the RATP (Régie Autonome des Transports Parisiens; English: Autonomous Operator of Parisian Transports) alone comprises around 9,000 cameras in Paris. Under these conditions, any person who lives and walks in those two big European capitals is likely to be captured many times during a day (up to 300 times in London) by several video surveillance systems (e.g., the traffic surveillance cameras, the cameras in the subway, and the cameras of a commercial centre). The only markers available for all these videos are the id of the camera (possibly GPS coordinates) and a local date/timestamp, which are not homogeneous across the different systems.</p><p>The great majority of existing video surveillance systems are manual or semi-automatic (they employ some form of video processing, but with significant human intervention) <ref type="bibr" target="#b10">[11]</ref>. Given the huge amount of video content that needs to be handled, the purely manual approach (agents watching the videos and detecting events) becomes insufficient. The main objective in the video surveillance domain is to provide users with tools that could assist them in their search by reducing the search space and therefore the response time. 
These tools depend on the search context and complexity (e.g., real-time surveillance of big events, police inquiry) <ref type="bibr" target="#b21">[22]</ref>.</p><p>Our work is situated in the context of the police inquiry, which involves an a posteriori processing of the data in order to help the investigator highlight (isolate) the relevant elements (e.g., persons, events). To do that, the investigators have at their disposal the set of recorded videos from different video surveillance systems (e.g., public, private, RATP). In order to assist the investigators in their tasks, it is important that the different outputs of the systems be interoperable, which is not currently the case. The interoperability between video surveillance systems, from simple ones with only a few cameras to large-scale systems, is the main goal of the ISO 22311 standard <ref type="foot" target="#foot_0">1</ref>. It specifies a format for the data that can be exchanged between video surveillance systems in the inquiry context. This standard considers neither the mobility of video surveillance cameras nor the modification of their fields of view. In fact, in the beginnings of video surveillance, cameras were placed in fixed locations in order to monitor indoor and outdoor places. With improvements in hardware and software technologies, on-board cameras are more and more employed in mobile vehicles (e.g., buses, police cars). This mobility makes the task of security agents even more difficult in the context of an inquiry, when they have to analyse a huge amount of video content and need supplementary knowledge of the system's characteristics (e.g., the bus timetables, the city transport plan) in order to select the most appropriate video contents.</p><p>In this context, our goal is to provide users with tools that could assist them in their search and reduce the search space. 
In order to achieve this objective, in this article we propose an extension of the ISO 22311 standard that takes into account the cameras' mobility. We consider the video surveillance system as a multilayer geographic information system, where the cameras are situated on a distinct layer, which is added on top of the other layers (e.g., roads, transport) and related to them through location. We implemented our solution using a spatial database in order to select the cameras that might have acquired video content corresponding to a user's spatio-temporal query.</p><p>The remainder of this paper is organized as follows. Section 2 reviews related work on the three aspects addressed in this paper: video surveillance systems, the ISO 22311 standard and mobile object modelling. Section 3 presents our multilayer modelling approach, which is implemented using a spatio-temporal database. Some queries that can be answered based on this database are presented in Section 4. Finally, Section 5 concludes and discusses possible future research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">STATE OF THE ART</head><head n="2.1">Video Surveillance Systems</head><p>The generic schema of a video surveillance system is illustrated in Figure <ref type="figure">1</ref>. The content is captured and stored in a distributed manner and analysed in a control centre by human operators who watch a certain number of screens displayed in a matrix (the Video Wall in Figure <ref type="figure">1</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1: Video surveillance system's schema</head><p>There is a big diversity of cameras and sensors constituting the acquisition part of surveillance systems, and a heterogeneity of their installation contexts (e.g., on the halls or platforms of railway or metro stations, on board trains and buses, on the streets, in commercial centres or office buildings). Therefore, we have fixed and mobile cameras with different, most of the time dynamic, technical characteristics (see Figure <ref type="figure" target="#fig_1">2</ref> for an example of such cameras) [14]: camera type (optical, thermal, infrared); sensor type and dimension (CMOS, CCD); transmission type (analogue/IP); angle of view (horizontal and vertical), focal distance, pan-tilt-zoom, field-of-view orientation, visible distance etc.</p><p>We started by analysing the way a query is processed in a video surveillance system today. When a person (for example, the victim of an aggression) files a complaint, he is asked to fill in a form describing the elements that could help the investigators find the relevant video segment (Figure <ref type="figure">3</ref> illustrates an example of such a form). Based on the spatial and temporal aspects of the query, the surveillance operator uses his own knowledge of the spatial disposition of the camera network in order to select the most relevant video contents. He then analyses these contents by playing them on the different screens in front of him. The monitors themselves show no spatial relationship of any kind; only the numbering of the cameras is in a somewhat logical order.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 3: Example of a form filled in by a victim</head><p>Therefore, the operators' task becomes cumbersome considering the huge volume of video content to be analysed and the mobility and varying characteristics of the cameras. Moreover, in current systems, most of the stored content is not exploitable because of the recordings' low quality. This lack of quality is often caused by inappropriate installation of cameras, bad shooting, bad illumination conditions etc. The operator has no a priori knowledge of the quality of the video contents and thus loses time viewing the low-quality contents as well.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 4: ISO 22311 sensor description</head><p>A large number of commercial systems have been developed in the video surveillance domain <ref type="bibr" target="#b7">[8]</ref>. In the research area, many projects were developed as well: CROMATICA [5], CARETAKER 2 [3] and VANAHEIM 3 for indoor static video surveillance, and the SURTRAIN <ref type="bibr" target="#b19">[20]</ref>, BOSS 4 [13] and PROTECTRAIL 5 projects for on-board mobile surveillance. All these heterogeneous projects concentrate on the development of the system's physical architecture and of better detection algorithms in order to obtain a fully automatic system <ref type="bibr" target="#b11">[12]</ref>, <ref type="bibr" target="#b23">[24]</ref>.</p><p>We can summarize by saying that there is growing concern in research and industrial environments for developing video content analysis (VCA) algorithms in order to automatically index content and detect objects (e.g., abandoned packages or luggage) and events (e.g., intrusions, people or vehicles going the wrong way) <ref type="bibr" target="#b15">[16]</ref> or to draw operators' attention to events of interest (e.g., alarms). However, solutions for assisting a posteriori investigation are at a lesser stage of maturity, and to date most of the data remain unexploited.</p><p>In this article, we also address the lack of interoperability between different surveillance systems. In the context of an inquiry, the police might need to analyse data from different sources (systems), so it is important that the different outputs of the systems be interoperable. As a consequence, the big actors of the domain have started to unify their efforts in order to standardize the structure of the folders and metadata files generated by video surveillance systems. 
A result of these efforts is represented by the ISO 22311 standard, which proposes a structure for the data issued from video surveillance systems and for the metadata needed to exploit that data. <note place="foot" n="2">http://cordis.europa.eu/ist/kct/caretaker_synopsis.htm</note> <note place="foot" n="3">http://www.vanaheim-project.eu/</note> <note place="foot" n="4">http://celtic-boss.mik.bme.hu/</note> <note place="foot" n="5">http://www.protectrail.eu/</note></p><p>In the following, we present the ISO 22311 standard, especially the part concerning the description of the cameras' characteristics and mobility, and highlight the elements that relate to our research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Standard ISO 22311</head><p>The ISO 22311 standard defines an interoperability format for the data generated by video surveillance systems and for the metadata needed to exploit these huge volumes of data.</p><p>The audiovisual packages (containing audio, video or metadata files) have to be structured hierarchically (in files, folders and groups of folders) according to time intervals in Coordinated Universal Time (UTC). For each group of folders, the system must provide an XML description of the source(s) (e.g., cameras, GPS, video analysis tools), codec(s) and file formats, as well as a temporal index enabling easy access to the content.</p><p>Current technologies and processing power enable the analysis of video content and the extraction of metadata describing objects, events, scenes etc. This analysis depends on the acquisition context (e.g., the position of the camera, the image quality, the type of sensors). The standard therefore distinguishes the systems that can generate such metadata (i.e., level 2 systems) and provides a general structure and dictionary for describing sensors and events (i.e., metadata).</p><p>As in this paper we address the problem of the cameras' geo-localization, we present the schema for the sensor description in Figure <ref type="figure">4</ref>.</p><p>Each camera has an absolute location (GPS coordinates), as more and more of the installed cameras have an embedded GPS transmitter. 
However, there are many cases where GPS is not enough, because: (1) we need to model the position of the camera with regard to the video surveillance system and not to the world;</p><p>(2) in some situations, for example in indoor environments, GPS positions do not provide good precision.</p><p>In the context of a video surveillance system: the mobile cameras are embedded in buses, trains and police cars; and the movement of these vehicles is constrained by a road network and a transportation network.</p><p>By analysing the standard, we notice that the relative position it defines for a camera is today a simple link to an image (the plan of the network of cameras or of a building). This kind of location is not easily exploitable. Furthermore, the standard does not consider the mobility of video surveillance cameras. In order to overcome these issues, we propose to extend this standard through a multilayer modelling approach, in which the network of cameras is put on top of a transportation network.</p><p>In the following, we present a state of the art of mobile object modelling, as the management of the cameras' mobility represents the main focus of this paper.</p></div>
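Since the vehicles carrying mobile cameras are constrained by the road network, a time-stamped GPS fix can be converted into a relative position on a road segment. The following sketch (not part of the paper; the segment ids and flat 2D coordinates are illustrative assumptions) projects a point onto each candidate segment and keeps the closest one:

```python
import math

def project_on_segment(p, a, b):
    """Project point p onto segment (a, b); return (offset, distance).
    offset is the distance along the segment from a to the projection."""
    ax, ay = a; bx, by = b; px, py = p
    seg_dx, seg_dy = bx - ax, by - ay
    seg_len2 = seg_dx ** 2 + seg_dy ** 2
    # Clamp the projection parameter t to [0, 1] so the result stays on the segment.
    t = max(0.0, min(1.0, ((px - ax) * seg_dx + (py - ay) * seg_dy) / seg_len2))
    qx, qy = ax + t * seg_dx, ay + t * seg_dy
    return t * math.sqrt(seg_len2), math.hypot(px - qx, py - qy)

def map_match(p, segments):
    """Return (segment_id, offset) for the segment closest to p.
    segments: dict mapping segment_id -> (endpoint_a, endpoint_b)."""
    seg_id = min(segments, key=lambda s: project_on_segment(p, *segments[s])[1])
    return seg_id, project_on_segment(p, *segments[seg_id])[0]
```

For example, a GPS fix a few metres off a straight 100 m segment is mapped to that segment with its offset from the segment's start: `map_match((40, 3), {"s": ((0, 0), (100, 0))})` yields `("s", 40.0)`.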
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Mobile Objects Modelling</head><p>With the evolution of technology, mobility has become very important in the context of video surveillance systems: not only the objects (e.g., persons, cars) are moving in the monitored scene, but the surveillance cameras are moving as well. The great majority of research papers on mobile objects in the video surveillance domain concentrate on video content analysis in order to detect and track objects, interpret their behaviour and understand the visual events of the monitored scene <ref type="bibr" target="#b9">[10]</ref>. Thus, the mobility of the cameras is not exploited.</p><p>In the field of moving objects, a mobile object is characterised by the continuous evolution of its position and dimension over time <ref type="bibr" target="#b20">[21]</ref>. This movement can take place in an unconstrained environment <ref type="bibr" target="#b17">[18]</ref> (e.g., hurricanes, fires) or in a constrained environment <ref type="bibr" target="#b16">[17]</ref> (e.g., cars move on road and transportation networks).</p><p>In the video surveillance domain, the objects move in a constrained environment, mainly the road network. This environment is represented as a graph-based model <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b24">[25]</ref>, where the vertices are junctions and the edges are the roads between two junctions. <ref type="bibr" target="#b8">[9]</ref> also considers the connectivity at each junction in order to represent the road network. <ref type="bibr" target="#b18">[19]</ref> extends the model proposed by <ref type="bibr" target="#b8">[9]</ref> in order to consider the predefined trajectories that some objects could have (e.g., buses). 
<ref type="bibr" target="#b6">[7]</ref> proposes a mobile object data model that considers the road and rail networks. <ref type="bibr" target="#b1">[2]</ref> models the transport network of a city as a graph and adds to each graph vertex the available transport modes (i.e., pedestrian, auto, urban rail, metro, bus).</p><p>In the management of mobile objects, a major issue is the storage of the objects' spatio-temporal positions. Several strategies can be considered: using the spatio-temporal data types defined by <ref type="bibr" target="#b8">[9]</ref> (e.g., moving points, moving lines, moving regions), or using dynamic attributes <ref type="bibr" target="#b22">[23]</ref> (e.g., a motion vector), which limits the size of the data that has to be stored and queried.</p><p>As far as we know, the video content's mobility is not taken into account in the video surveillance domain. In this article, we want to exploit the advances in the field of mobile objects and apply them to the video surveillance domain in order to take into account the mobile aspect of surveillance cameras.</p></div>
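The dynamic-attribute strategy of [23] can be illustrated with a small sketch: a new position is stored only when the dead-reckoned prediction from the last stored fix drifts beyond a threshold. This is a hypothetical 1-D simplification for illustration, not the implementation of [23]:

```python
def filter_updates(samples, speed, threshold):
    """Keep only the samples whose position deviates from the
    motion-vector prediction by more than `threshold`.
    samples: list of (t, x) pairs; speed: assumed constant motion vector."""
    stored = [samples[0]]                   # always keep the first fix
    for t, x in samples[1:]:
        t0, x0 = stored[-1]
        predicted = x0 + speed * (t - t0)   # dead-reckoned position
        if abs(x - predicted) > threshold:  # prediction broke down: store a fix
            stored.append((t, x))
    return stored
```

With a bus moving at roughly 10 units per time step, only the sample where the vehicle jumps ahead of its prediction is stored; all predictable positions are dropped, which is exactly how the dynamic-attribute approach limits storage.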
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Extension of the Standard ISO 22311 for the management of cameras' mobility</head><p>As shown in Section 2.2, the ISO 22311 standard defines a fixed position for a video surveillance camera, through GPS coordinates and a link to an image containing the plan of the network. In order to overcome this issue, we propose to compute a relative position with regard to a map, which enables us to: model the distances between the cameras and select the relevant cameras for a certain trajectory; model the connections between the cameras (e.g., a possible path between camera1 and camera2 but not between camera2 and camera3); model trajectories for mobile cameras; and model the fields of view and the maximum detection distances of fixed and mobile cameras.</p><p>In order to achieve this goal, we took our inspiration from the domain of GIS (Geographical Information Systems) <ref type="bibr" target="#b3">[4]</ref> and from mobile object modelling.</p><p>By considering the video surveillance system as a GIS, we benefit from the separation between the conceptual layers: at any time, a new layer can be added without modifying the existing layers.</p><p>In our approach, we propose a four-layer model: (1) Road network, (2) Transportation network, (3) Objects and (4) Cameras network. Figure <ref type="figure">5</ref> illustrates the UML model for the first three layers.</p><p>The "Road network" layer, presented in blue in Figure <ref type="figure">5</ref>, is based on the graph modelling approach well known in the literature. The road network is considered as an undirected graph G = (V, E), with V a set of vertices and E a set of edges defined according to the granularity level we want to consider (for a big boulevard of a European capital, for example, we can consider each segment of the road, each segment between two intersections, or the entire boulevard). Each vertex has an identifier and a 2D position. 
Each edge is determined by two vertices.</p><p>The "Transportation network" layer, presented in yellow in Figure <ref type="figure">5</ref>, is also based on a graph model. At this level, the vertices of the transportation network are intersections between roads and bus stations. Each transportation vertex has a position with regard to a road segment. Ordered sequences of transportation vertices constitute sections, which form lines (e.g., bus lines). The advantage of our approach over the ones proposed in the state of the art <ref type="bibr" target="#b8">[9]</ref> is that we have two independent graphs connected to each other through the positions of the transportation vertices. That way, if the bus stations are modified or new bus lines are introduced, we do not have to recompute the underlying road graph.</p><p>The "Objects" layer, presented in red in Figure <ref type="figure">5</ref>, models the positions of fixed and mobile objects with regard to the underlying layers.</p><p>A Fixed Object has a position on a road segment, defined as a distance from each end of the segment. For this kind of object, we adopt the same localisation as the one proposed by <ref type="bibr" target="#b8">[9]</ref>.</p><p>In the case of Mobile Objects (e.g., buses, police cars, persons), the position changes over time. Each object periodically transmits its position using different strategies (e.g., every Δt seconds, each time the object changes segment, or when the object's position predicted by the motion vector deviates from the real position by more than a threshold <ref type="bibr" target="#b22">[23]</ref>) that are out of the scope of this article. We suppose that we periodically receive updates containing time-stamped GPS points, which we transform into a relative position with regard to the road network (i.e., the segments). 
We use this information to reconstitute the object's trajectory.</p><p>We distinguish two types of mobile objects: objects that move freely within the road and transportation networks (e.g., cars, persons) and objects whose trajectories are constrained by a "line" (e.g., buses).</p></div>
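The three layers described above can be sketched in a few lines of Python. The class and field names below are illustrative assumptions (the actual model is the UML of Figure 5, implemented in a spatio-temporal database); the point is that transportation vertices and object positions are expressed *relative to* road segments, keeping the layers independent:

```python
from dataclasses import dataclass, field

# Layer 1: road network as an undirected graph G = (V, E).
@dataclass
class RoadNetwork:
    vertices: dict   # vertex_id -> (x, y) 2D position
    edges: set       # frozenset({v1, v2}) per road segment

# Layer 2: transportation vertices (bus stations, intersections) located
# relative to a road segment, so re-routing a bus line never forces a
# rebuild of the road graph.
@dataclass
class TransportVertex:
    vertex_id: str
    segment: frozenset   # the road edge the vertex lies on
    offset: float        # distance along that segment

# Layer 3: a mobile object's trajectory as time-stamped relative positions.
@dataclass
class MobileObject:
    object_id: str
    trajectory: list = field(default_factory=list)  # (t, segment, offset)

    def report(self, t, segment, offset):
        """Record a position update already map-matched to a segment."""
        self.trajectory.append((t, segment, offset))
```

A bus constrained by a line would simply accumulate `(t, segment, offset)` triples along the segments of its line, while a free object may report any segment of the road graph.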
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 5: "Road network", "Transportation network" and "Objects" layers</head><p>On top of all these layers, we model the video surveillance camera network. A simplified schema of this model is illustrated in Figure <ref type="figure" target="#fig_2">6</ref>.</p><p>The camera network is composed of fixed and mobile cameras. The fixed cameras have a 2D position that is given at installation time. The mobile cameras are associated with mobile objects (e.g., buses) and their trajectory is the same as the object's.</p><p>The new generation of digital surveillance cameras has embedded GPS transmitters and even compasses. The technologies developed around these cameras make it possible to automatically extract information related to the camera's orientation, pan, tilt, zoom, focal distance, compression parameters etc.</p><p>Based on all these elements, it is possible to model the field of view of each camera and track its modifications over time. The field of view is computed based on four parameters <ref type="bibr" target="#b0">[1]</ref>: the 2D position, the viewable angle, the orientation and the visible distance. A schema of the 2D field of view proposed by <ref type="bibr" target="#b0">[1]</ref> is shown in Figure <ref type="figure">7</ref>. In order to select the most appropriate attributes to describe a video surveillance camera, we studied the sensor descriptions proposed by the ISO 22311 standard, SensorGML<ref type="foot" target="#foot_3">6</ref> and KML<ref type="foot" target="#foot_4">7</ref>. 
We separated the identified camera properties into two categories: properties that can be modified over time, and fixed characteristics.</p><p>Thus, the extension of the ISO 22311 standard is realised at three levels: taking into account the road and transportation networks as a graph and not as an image; taking into account the camera's relative position and its mobility on the networks; and taking into account the changes of the camera's characteristics over time.</p><p>Our model is implemented in a spatio-temporal database that can be queried by users in order to retrieve the relevant cameras for a given trajectory. The originality of our research work lies in: the fact that it combines different spatio-temporal information (e.g., road network, transportation network, objects' positions) and computations (e.g., trajectories, fields of view) within the same database; and the twofold mobility, of the target objects and of the cameras.</p><p>In the next section, we present the general architecture of a tool that could assist video surveillance operators in their search based on our spatio-temporal database, together with some examples of queries.</p></div>
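The 2D field-of-view model of [1], parameterised by the four attributes above (position, orientation, viewable angle, visible distance), amounts to a point-in-sector test. A sketch in Python, under our own assumption of flat coordinates and angles in degrees:

```python
import math

def in_field_of_view(target, cam_pos, orientation_deg, angle_deg, visible_dist):
    """True if `target` lies inside the 2D sector defined by the four
    field-of-view parameters of [1]: position, orientation (direction the
    camera faces), viewable angle, and visible distance."""
    dx, dy = target[0] - cam_pos[0], target[1] - cam_pos[1]
    if math.hypot(dx, dy) > visible_dist:
        return False                          # beyond the visible distance
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angular difference between bearing and orientation.
    diff = (bearing - orientation_deg + 180) % 360 - 180
    return abs(diff) <= angle_deg / 2         # within half the viewable angle
```

A camera at the origin facing east (orientation 0°) with a 90° viewable angle and a 100 m visible distance sees a point at (50, 10), but not one at (0, 50) (outside the sector) or (200, 0) (too far).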
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Spatio-temporal database and queries</head><p>Based on the presented model, our goal is to automatically select the cameras (fixed and mobile) that could contain video content relevant to the user query (i.e., whose field of view intersected the query trajectory).</p><p>More precisely, the idea is to compare a spatio-temporal user query (e.g., Rivoli Street from the Louvre to Metro Chatelet on the 14th of July between 10h and 14h) with the trajectories stored in our database and, for better precision, with the cameras' fields of view. Figure <ref type="figure">8</ref> illustrates the generic architecture of a system based on our spatio-temporal database for assisting video surveillance operators in their search.</p><p>From Figure <ref type="figure">8</ref>, it is easy to observe that there are two main questions when developing such a system: how to query the system, and how to update it. As explained in the previous section, our work addresses only the querying aspect, which we describe in the following.</p><p>First, a Query Interpreter module transforms the user query (e.g., Rivoli Street from the Louvre to Metro Chatelet on the 14th of July between 10h and 14h) into a spatio-temporal query. By spatio-temporal query we mean a sequence of road segments and a time interval, which is then transformed into a SQL query by the SQL Query Generator module. The SQL query is executed on the database, resulting in a list of cameras. Based on some image quality parameters, a score per camera can be computed, and the initial list can then be ranked according to this relevance score.</p><p>In the following, we present two examples of spatio-temporal queries executed on our database implemented in Oracle Spatial<ref type="foot" target="#foot_5">8</ref>:</p></div>
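The two pipeline steps above (Query Interpreter, then SQL Query Generator) can be sketched as follows. The table and column names are illustrative assumptions, not the paper's actual schema, and the segment ids are hypothetical:

```python
def interpret_query(segments, start, end):
    """Sketch of the Query Interpreter step: the free-form user query is
    assumed to be already resolved to road-segment ids, which we bundle
    with the time interval into a spatio-temporal query."""
    return {"segments": list(segments), "interval": (start, end)}

def to_sql(q):
    """Sketch of the SQL Query Generator step: turn the spatio-temporal
    query into a SQL filter over a hypothetical Camera table."""
    seg_list = ", ".join("'%s'" % s for s in q["segments"])
    start, end = q["interval"]
    return (
        "SELECT IdCamera FROM Camera "
        "WHERE SegmentId IN (%s) AND RecordingTime BETWEEN "
        "TIMESTAMP '%s' AND TIMESTAMP '%s'" % (seg_list, start, end)
    )
```

For instance, `to_sql(interpret_query(["rivoli_1", "rivoli_2"], "2013-07-14 10:00:00", "2013-07-14 14:00:00"))` produces a single SELECT whose result set is the candidate camera list to be ranked by the quality score.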
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>The first query selects the fixed cameras whose geometry (field of view) intersects the geometry of Rivoli Street; the second selects the mobile cameras associated with the buses that crossed the street within the given time interval.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">CONCLUSION</head><p>In this paper, we presented a spatio-temporal modelling approach for fixed and mobile cameras within a common transportation network. Taking our inspiration from the multilayer representation of geographical information systems, we model spatial information about the road and transportation infrastructures and mobile objects' trajectories in four independent layers: (1) Road network, (2) Transportation network, (3) Objects and (4) Cameras network.</p><p>Based on this modelling approach, we also proposed a generic architecture for a system that could assist video surveillance operators in their search. Starting from a sequence of trajectory segments and a temporal interval, such a system generates the list of cameras that could contain relevant information concerning the query (i.e., that "saw" the query's trajectory).</p><p>The need for such assistance tools was identified within the French national project METHODEO. Among the project's partners, we mention the French National Police, Thales and the RATP (Régie Autonome des Transports Parisiens; English: Autonomous Operator of Parisian Transports). Our approach has been validated and will be evaluated within the project.</p><p>Obviously, many questions are still left unanswered, giving way to a large number of perspectives; we present several of them in the following.</p><p>For now, our model considers only outdoor transportation and surveillance networks. We plan to extend it to indoor spaces as well, in order to model, for example, cameras inside train or subway stations.</p><p>Our work is situated in the context of a posteriori search in the case of a police inquiry. 
We would like to extend this context in the future, in order to be able to process real-time queries or to predict trajectories based on statistics computed from the stored data (e.g., average speed on some road segments).</p><p>Another perspective of our work is the improvement of the resulting camera list by re-ranking it based on the cameras' characteristics (e.g., image quality, visible distance).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">ACKNOWLEDGMENTS</head></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Examples of camera characteristics [14]: camera type (optical, thermal, infrared); sensor type and dimension (CMOS, CCD); transmission type (analogue/IP); angle of view (horizontal and vertical), focal distance, pan-tilt-zoom, field-of-view orientation, visible distance etc.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2:</head><label>2</label><figDesc>Figure 2: Examples of video surveillance cameras having the same position but different fields of view</figDesc><graphic coords="2,359.32,160.19,94.94,67.38" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 6:</head><label>6</label><figDesc>Figure 6: "Cameras network" layer</figDesc><graphic coords="5,317.91,307.16,249.22,193.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 7 :Figure 8 :</head><label>78</label><figDesc>Figure 7: Illustration of the field of view model in 2D [1]</figDesc><graphic coords="5,317.91,535.83,78.74,90.43" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="3,124.71,71.85,380.53,231.86" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="6,133.31,71.85,345.92,156.69" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The first query selects the fixed cameras whose geometry (field of view) intersects the geometry of the Rivoli street; the second selects the mobile cameras that are associated with the buses that crossed the street within the given time interval.</figDesc><table><row><cell>LET TimePeriod = Timestamp(hour(2013,1,14,10), hour(2013,1,14,12));</cell></row><row><cell>SELECT ObjetID FROM ConstrainedObject WHERE Type.MobileObject = "Bus" AND TimePeriod.ConstrainedObject (atperiods (Timestamp, TimePeriod));</cell></row><row><cell>SELECT DISTINCT IdMobileCamera FROM ConstrainedObject, FreeObject, MobileCamera WHERE Intersect (MobileCamera.geom, ConstrainedObject.geom) AND Intersect (MobileCamera.geom, FreeObject.geom);</cell></row><row><cell>SELECT IdCamera FROM FixedCamera WHERE SDO_RELATE(camera_geom, (SELECT street_geom FROM Road WHERE Name = 'Rivoli'), 'mask=OVERLAPBDYDISJOINT querytype=WINDOW') = 'TRUE';</cell></row></table></figure>
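The fixed-camera query above relies on Oracle Spatial's SDO_RELATE operator. As an illustration of the same spatial join without Oracle, the selection can be approximated in SQLite with a bounding-box overlap test. This is a sketch under simplifying assumptions: the table layout and the axis-aligned rectangle geometries are hypothetical, not the paper's schema.

```python
import sqlite3

# Approximate the SDO_RELATE fixed-camera selection with a plain
# bounding-box overlap test (geometries reduced to xmin/ymin/xmax/ymax
# rectangles; illustrative schema, not the paper's).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Road(Name TEXT, xmin REAL, ymin REAL, xmax REAL, ymax REAL);
CREATE TABLE FixedCamera(IdCamera TEXT, xmin REAL, ymin REAL, xmax REAL, ymax REAL);
INSERT INTO Road VALUES ('Rivoli', 0, 0, 100, 10);
INSERT INTO FixedCamera VALUES ('C1', 5, 5, 15, 20);       -- overlaps the street
INSERT INTO FixedCamera VALUES ('C2', 200, 200, 210, 210); -- does not
""")

rows = con.execute("""
SELECT c.IdCamera
FROM FixedCamera c, Road r
WHERE r.Name = 'Rivoli'
  AND c.xmin <= r.xmax AND r.xmin <= c.xmax   -- x ranges overlap
  AND c.ymin <= r.ymax AND r.ymin <= c.ymax   -- y ranges overlap
""").fetchall()
print([row[0] for row in rows])  # ids of cameras whose bbox overlaps the street
```

The two range comparisons per axis implement the standard rectangle-overlap test; a production system would use true geometries and a spatial index, as the Oracle query does.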
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://www.iso.org/iso/fr/catalogue_detail.htm?csnumber=5346</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_1">http://www.dbis.rwth-aachen.de/IMMoA2013/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_2">Proceedings IMMoA'13  </note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_3">http://www.opengeospatial.org/standards/sensorml</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_4">http://www.opengeospatial.org/standards/kml</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_5">http://www.oracle.com/fr/products/database/options/spatial/index.html</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work has been supported by the ANR CSOSG-National Security (French National Research Agency) project METHODEO.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Relevance Ranking in Georeferenced Video Search</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Ay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zimmermann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Systems Journal</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="105" to="125" />
			<date type="published" when="2010-03">March 2010</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A data model for trip planning in multimodal transportation systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Booth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sistla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Wolfson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">F</forename><surname>Cruz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology</title>
				<meeting>the 12th International Conference on Extending Database Technology: Advances in Database Technology</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="994" to="1005" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Caretaker puts knowledge to good use</title>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>European Public Transport Magazine</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Introduction to Geographic Information Systems</title>
		<author>
			<persName><forename type="first">K.-T</forename><surname>Chang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006">2006</date>
			<publisher>McGraw-Hill Higher Education</publisher>
			<biblScope unit="page">450</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Advanced Video-Based Surveillance Systems</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Deparis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Velastin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Davies</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Springer International Series in Engineering and Computer Science</title>
		<imprint>
			<biblScope unit="volume">488</biblScope>
			<biblScope unit="page" from="203" to="212" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
	<note>Cromatica project</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Collecting and managing network-matched trajectories of moving objects in databases</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Deng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd international conference on Database and expert systems applications</title>
				<meeting>the 22nd international conference on Database and expert systems applications</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="volume">6860</biblScope>
			<biblScope unit="page" from="270" to="279" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Mobile Object and Real Time Information System Modeling for Urban Environment</title>
		<author>
			<persName><forename type="first">A</forename><surname>El</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Boulmakoul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Laurini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th Urban and Regional Data Management Symposium</title>
				<meeting>the 26th Urban and Regional Data Management Symposium</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="403" to="413" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">U</forename><surname>Gdansk</surname></persName>
		</author>
		<ptr target="www.addpriv.eu" />
		<title level="m">Deliverable 2.1 -Review of existing smart video surveillance systems capable of being integrated with ADDPRIV</title>
				<imprint>
			<publisher>ADDPRIV consortium</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Modeling and Querying Moving Objects in Networks</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Güting</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">T</forename><surname>Almeida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ding</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">VLDB Journal</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="165" to="190" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A Survey on Moving Object Detection and Tracking in Video Surveillance System</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Thakore</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Soft Computing and Engineering</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="44" to="48" />
			<date type="published" when="2012-07">July 2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Intelligent Visual Surveillance -A Survey</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">S</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">G</forename><surname>Kong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Control Automation and Systems</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="926" to="939" />
			<date type="published" when="2010-04">April 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Video Surveillance Systems-A Survey</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lakshmi Devasena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Revathí</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hemalatha</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Science Issues</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="635" to="642" />
			<date type="published" when="2011-07">July 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Transport system architecture for on board wireless secured a/v surveillance and sensing</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lamy-Bergot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ambellouis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Khoudour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sanz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Malouch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hocquard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J-L</forename><surname>Bruyelle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Petit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Cappa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Villalta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Jeney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Egedy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 9th international conference on Intelligent Transport Systems Telecommunications</title>
				<meeting>the 9th international conference on Intelligent Transport Systems Telecommunications</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="564" to="568" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Video Surveillance Cameras</title>
		<author>
			<persName><forename type="first">C</forename><surname>Le Barz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lamarque</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligent Video Surveillance Systems, chapter 4</title>
				<editor>
			<persName><forename type="first">Jean-Yves</forename><surname>Dufour</surname></persName>
		</editor>
		<imprint>
			<publisher>Wiley</publisher>
			<date type="published" when="2012-11">November 2012</date>
			<biblScope unit="page" from="33" to="46" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Effective Mapmatching on the Most Simplified Road Network</title>
		<author>
			<persName><forename type="first">K</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ding</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 20th International Conference on Advances in Geographic Information Systems (SIGSPATIAL &apos;12)</title>
				<meeting>the 20th International Conference on Advances in Geographic Information Systems (SIGSPATIAL &apos;12)</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="609" to="612" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Semantic browsing of video surveillance databases through Online Generic Indexing</title>
		<author>
			<persName><forename type="first">D</forename><surname>Marraud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Cepas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Reithler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 3rd ACM/IEEE International Conference on Distributed Smart Cameras</title>
				<meeting>the 3rd ACM/IEEE International Conference on Distributed Smart Cameras</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Spatial Partition Graphs: A Graph Theoretic Model of Maps</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mckenney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schneider</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th Int. Symp. on Spatial and Temporal Databases</title>
				<meeting>the 10th Int. Symp. on Spatial and Temporal Databases</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">4605</biblScope>
			<biblScope unit="page" from="167" to="184" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Conceptual Modeling for Traditional and Spatio-Temporal Applications: The MADS Approach</title>
		<author>
			<persName><forename type="first">Ch</forename><surname>Parent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Spaccapietra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Zimányi</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006">2006</date>
			<publisher>Springer-Verlag</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Modeling and Querying Mobile Location Sensor Data</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">S</forename><surname>Popa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zeitouni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th Int. Conference on Advanced Geographic Information Systems, Applications, and Services, IARIA</title>
				<meeting>the 4th Int. Conference on Advanced Geographic Information Systems, Applications, and Services, IARIA</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="222" to="231" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Audio-video surveillance system for public transportation</title>
		<author>
			<persName><forename type="first">Q.-C</forename><surname>Priam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lapeyronnie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Baudry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lucat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sayd</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ambellouis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sodoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Flancquart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-C</forename><surname>Barcelo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Heer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ganansia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Delcourt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd International Conference on Image Processing Theory, Tools and Applications</title>
				<meeting>the 2nd International Conference on Image Processing Theory, Tools and Applications</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="47" to="53" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Moving Objects in Databases and GIS: State-of-the-Art and Open Problems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schneider</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Research Trends in Geographic Information Science</title>
				<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="169" to="188" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A Posteriori Analysis for Investigative Purposes</title>
		<author>
			<persName><forename type="first">F</forename><surname>Sedes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Sulzer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Marraud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ch</forename><surname>Mulat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Cepas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligent Video Surveillance Systems, chapter 3</title>
				<editor>
			<persName><forename type="first">Jean-Yves</forename><surname>Dufour</surname></persName>
		</editor>
		<imprint>
			<publisher>Wiley</publisher>
			<date type="published" when="2012-11">November 2012</date>
			<biblScope unit="page" from="33" to="46" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Querying the Uncertain Position of Moving Objects</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Sistla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Wolfson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chamberlain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Temporal Databases: Research and Practice</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="volume">1399</biblScope>
			<biblScope unit="page" from="310" to="337" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">A Survey of Visual Sensor Networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Soro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Heinzelman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Multimedia</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">2009</biblScope>
			<biblScope unit="page" from="1" to="22" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Traffic aware route planning in dynamic road networks</title>
		<author>
			<persName><forename type="first">J</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17th international conference on Database Systems for Advanced Applications</title>
				<meeting>the 17th international conference on Database Systems for Advanced Applications</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">7238</biblScope>
			<biblScope unit="page" from="576" to="591" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
