<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Conference &amp; Workshop on Assistive Technologies for People with Vision &amp; Hearing Impairments Assistive Technology for All Ages CVHI 2007, M.A. Hersh (ed.) BODY MOUNTED VISION SYSTEM FOR VISUALLY IMPAIRED OUTDOOR AND INDOOR WAYFINDING ASSISTANCE</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Sylvie</forename><surname>Treuillet</surname></persName>
							<email>sylvie.treuillet@univ-bpclermont.fr</email>
							<affiliation key="aff0">
								<orgName type="laboratory">UMR 6602</orgName>
								<orgName type="institution">LASMEA</orgName>
								<address>
									<addrLine>Campus des Cézeaux</addrLine>
									<postCode>63177</postCode>
									<settlement>Aubière</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Eric</forename><surname>Royer</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">UMR 6602</orgName>
								<orgName type="institution">LASMEA</orgName>
								<address>
									<addrLine>Campus des Cézeaux</addrLine>
									<postCode>63177</postCode>
									<settlement>Aubière</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Thierry</forename><surname>Chateau</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">UMR 6602</orgName>
								<orgName type="institution">LASMEA</orgName>
								<address>
									<addrLine>Campus des Cézeaux</addrLine>
									<postCode>63177</postCode>
									<settlement>Aubière</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michel</forename><surname>Dhome</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">UMR 6602</orgName>
								<orgName type="institution">LASMEA</orgName>
								<address>
									<addrLine>Campus des Cézeaux</addrLine>
									<postCode>63177</postCode>
									<settlement>Aubière</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jean-Marc</forename><surname>Lavest</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">UMR 6602</orgName>
								<orgName type="institution">LASMEA</orgName>
								<address>
									<addrLine>Campus des Cézeaux</addrLine>
									<postCode>63177</postCode>
									<settlement>Aubière</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Conference &amp; Workshop on Assistive Technologies for People with Vision &amp; Hearing Impairments Assistive Technology for All Ages CVHI 2007, M.A. Hersh (ed.) BODY MOUNTED VISION SYSTEM FOR VISUALLY IMPAIRED OUTDOOR AND INDOOR WAYFINDING ASSISTANCE</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">F414369355F6611343E58F473ADBC138</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T05:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>personal localisation and heading system</term>
					<term>navigation assistance</term>
					<term>blind people</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The most challenging issue for navigation assistive systems for the visually impaired is the instantaneous and accurate spatial localization of the user. Most previously proposed systems, based on GPS sensors, have clearly insufficient accuracy for pedestrian use and are confined to outdoor operation, with severe failures in urban areas. This paper presents an interesting alternative: a localization algorithm using a single body-mounted camera. Instantaneous, accurate localization and heading estimates of the person are computed from images as the trip progresses along a memorized path. A first portable prototype has been tested on outdoor as well as indoor pedestrian trips. Experimental results demonstrate the effectiveness of the vision-based localization in keeping the walker within a navigation corridor less than one meter wide along the intended path. Future work will investigate a multimodal adaptive interface, taking into account psychological and ergonomic factors for blind end-users, to design a suitable guiding solution for the blind and visually impaired.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>A major political and technical challenge for a modern society is to find innovative solutions that effectively increase the safety and mobility of visually impaired people travelling through city streets and buildings. For people who are blind, wayfinding depends on the ability to remain localized and oriented. Recent work on assistive technologies for human localization has mainly focused on GPS (Global Positioning System) <ref type="bibr" target="#b0">(Brusnighan 1989</ref><ref type="bibr" target="#b7">, Makino 1997</ref><ref type="bibr" target="#b6">, Loomis 2001</ref><ref type="bibr" target="#b4">, Kowalik 2004</ref>). But a GPS sensor is ineffective indoors and, in its low-cost version, fails to provide an accurate spatial position (ten meters, at best, is clearly inadequate to keep a walker on the intended path). Furthermore, GPS suffers from masking areas, especially as the blind individual travels along walls. Some works have tested differential GPS, which achieves accuracy of around one meter <ref type="bibr" target="#b2">(Helal 2001</ref><ref type="bibr" target="#b1">, Balachandran 2003</ref>), but it is costly and cumbersome equipment that requires fixed ground stations and is only efficient outdoors. Alternative devices have also been designed for indoor human localization, such as ultrasound <ref type="bibr" target="#b9">(Ran 2004)</ref> or radio frequency identification transponders <ref type="bibr" target="#b8">(Na, 2006)</ref>, a robotic "dog-guide" (Kulyukin 2004) or an instrumented white cane <ref type="bibr" target="#b3">(Hesch 2007)</ref>. To sum up, the localization systems with sufficient accuracy for pedestrian applications are at this time based on ground installations (DGPS), and Drishti <ref type="bibr" target="#b9">(Ran 2004</ref>) is the only system that proposes a combined outdoor/indoor solution. 
To gain wide acceptance, an individual localization system must be based on wearable, low-cost, mobile technologies, free from collective infrastructure. With this in mind, we propose a new vision-based solution, derived from the autonomous navigation of wheeled robots <ref type="bibr" target="#b10">(Royer 2005)</ref>, to assist the visually impaired in their everyday travel outdoors as well as indoors. This computer-vision-based system requires no environment instrumentation: instantaneous, accurate localization and heading estimates of the user are computed only from the images provided by a body-mounted camera as the trip progresses along a memorized path. This real-time localization is available provided that a video sequence has previously been recorded during a learning trip. For the time being, the vision system is only used to localize and guide the walker along the intended path; it does not replace the long white cane for obstacle avoidance. The following section describes our vision-based positioning system. Some experimental results with a first wearable prototype are given in section 3. Conclusions and future work are discussed in the last section.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Vision Based Positioning</head><p>The localization system relies on two steps (Figure <ref type="figure">1</ref>): a preliminary learning trip and real-time localization for navigation assistance. During the learning trip, a video sequence is recorded by the onboard vision system and processed off-line to automatically build a large set of natural landmarks by applying a structure-from-motion algorithm. This algorithm jointly recovers the motion of the camera and the geometry of the rigid surrounding scene by automatically recognizing singular points through the image sequence (corner points on buildings, windows, doors, billboards, trees, …). The recorded reference consists of a set of selected key images, corresponding to different camera positions along the learning path, and a 3D map of the environment. During navigation assistance, the ego-motion of the camera is computed as the trip progresses by matching the stored 3D landmarks with their projections extracted from the current image. Figure <ref type="figure">2</ref> shows a localization result during a trip in the university library. As the camera is rigidly attached to the carrier, it provides his or her instantaneous localization and orientation (6 DoF) along the trip. Even if transient obstacles (pedestrians, cars, advertising hoardings) that were not present on the memorized route mask some part of the 3D environment, camera localization remains effective as long as at least a few tens of landmarks are well recognized. Figure <ref type="figure">2</ref> presents such a case with a passing pedestrian. The initial position is automatically detected by scanning the stored key images, provided the camera is placed not too far from the memorized path. 
The performance of the system has been extensively tested with wheeled robots: an absolute localization accuracy of around 0.05 m is achieved, as verified against the ground truth given by a DGPS sensor <ref type="bibr" target="#b10">(Royer 2005)</ref>. With such a low-cost vision system, the robot can be controlled with the same accuracy as with the DGPS receiver. We propose to adapt this system to pedestrian navigation by using a body-mounted camera and a wearable computer. The vision system provides the current localization and also the heading of the walker relative to the local segment of the reference path. Given this current position, the walker can be continuously reoriented by sound effects or vocal prompts so as to be kept within a navigation corridor along the intended path.</p><p>Figure <ref type="figure">1</ref> Synoptic of the vision-based localization system (left) and the pedestrian prototype with a chest-mounted camera and a wearable computer in a backpack (right).</p><p>S. Treuillet, E. Royer, T. Chateau, M. Dhome, J.M. Lavest</p></div>
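The localization step described above can be illustrated with a small sketch. This is not the authors' implementation: it assumes a calibrated pinhole camera (the matrix `K`, the pose `R`, `t` and the pixel tolerance `tol_px` are hypothetical values) and simply projects stored 3D landmarks with a candidate camera pose, counting how many land near their detected image points; the paper notes localization remains effective while at least a few tens of landmarks are well recognized.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project world points into the image with a calibrated pinhole camera."""
    cam = R @ points_3d.T + t[:, None]   # world frame -> camera frame, shape (3, N)
    uv = (K @ cam) / cam[2]              # apply intrinsics, then perspective division
    return uv[:2].T                      # pixel coordinates, shape (N, 2)

def count_matched_landmarks(points_3d, observed_uv, K, R, t, tol_px=2.0):
    """Count stored 3D landmarks whose reprojection falls near a detected image point."""
    reproj = project(points_3d, K, R, t)
    err = np.linalg.norm(reproj - observed_uv, axis=1)
    return int(np.sum(err < tol_px))

# Hypothetical intrinsics and landmarks, camera at the origin looking down +Z.
K = np.array([[300.0, 0.0, 160.0], [0.0, 300.0, 120.0], [0.0, 0.0, 1.0]])
landmarks = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 6.0], [0.0, 1.0, 7.0], [0.5, -0.5, 8.0]])
R, t = np.eye(3), np.zeros(3)
detections = project(landmarks, K, R, t)
n_matched = count_matched_landmarks(landmarks, detections, K, R, t)
```

In the real system the pose is the unknown: it would be estimated (e.g. by a robust perspective-n-point solver over such 2D-3D matches), and the match count serves as a confidence check against occlusion by transient obstacles.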
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experiments</head><p>A first pedestrian prototype has been developed to test the robustness of the vision-based localization algorithm on bumpy walking trips (unlike wheeled robots, which move smoothly). Several video sequences have been acquired along walking trips of more than 100 m in various situations (indoor/outdoor, open/cluttered space, flat floor/stairs, …). Several inexperienced visually impaired subjects tested the localization system several days or weeks after the video sequence of the learning trip was acquired. Navigation performance can be analyzed live by observing the path followed by the walker in comparison with the intended one (i.e. the reference path followed by the camera during the learning trip).</p><p>Figure <ref type="figure">2</ref> Left: Top view of a walking trip inside the university library. Empty squares represent all the camera positions selected during the learning path. The current position and heading estimates of the person over the intended path (colored square and vector) are bounded with an uncertainty ellipsoid on the camera position estimate (magnified 100 times). Right: Current image captured by the camera (bottom) and the corresponding key image memorized during the learning trip several days before (top). Large colored squares are the well-matched points, small ones the unmatched.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Experimental device</head><p>A photo of the experimental device is given in Figure <ref type="figure">1</ref>. The pedestrian prototype is composed of an AVT Guppy 044B camera equipped with a 2.8 mm lens (120° diagonal field of view), fixed on a body harness and connected to a wearable computer (Pentium M 1.86 GHz, 1 GB RAM). The intrinsic parameters of the camera, including radial distortions, are estimated by a prior calibration. The image resolution is 320×240 pixels, which allows localization data to be provided every 40 ms. From these localization and heading measurements, vocal prompts are given to keep the walker as close as possible to the memorized path. In this first version, two different guiding modalities have been tested with headphones: a speech-based interface and a sonar-like sound effect at regular time intervals. The verbal navigational prompts are of three types: "turn left", "turn right" or "straight ahead", depending on the walker's heading. Similarly, the sonar pings are played in the left ear only, in the right ear only, or in both, with a frequency varying according to the adherence to the path (angular deviation).</p></div>
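The two guiding modalities can be sketched as a mapping from the walker's angular deviation to feedback. This is an illustrative reconstruction, not the prototype's code: the dead-zone width, the sign convention (negative = the walker should turn left) and the ping-rate gain are assumptions.

```python
def verbal_prompt(heading_deg, dead_zone_deg=10.0):
    """Map the heading deviation from the path segment to one of three spoken prompts."""
    if heading_deg < -dead_zone_deg:
        return "turn left"
    if heading_deg > dead_zone_deg:
        return "turn right"
    return "straight ahead"

def sonar_ping(heading_deg, dead_zone_deg=10.0, base_hz=1.0, gain_hz_per_deg=0.05):
    """Choose the ear(s) for the ping and a ping rate that grows with angular deviation."""
    if heading_deg < -dead_zone_deg:
        ear = "left"
    elif heading_deg > dead_zone_deg:
        ear = "right"
    else:
        ear = "both"
    rate_hz = base_hz + gain_hz_per_deg * abs(heading_deg)
    return ear, rate_hz
```

Because a new pose arrives every 40 ms, such a mapping can be re-evaluated at every frame, with the dead zone preventing the prompts from oscillating around small deviations.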
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Results</head><p>To illustrate the efficiency of the proposed vision system, Figure <ref type="figure" target="#fig_0">3</ref> shows the outdoor tracks of three subjects made blind by an opaque bandage over their eyes. Equipped with our vision-based localization system, they were able to walk near the reference path in an unfamiliar open space without any point of reference. Table <ref type="table" target="#tab_0">1</ref> gives some statistics on navigation performance during this outdoor trip and during the indoor trip inside the library, a snapshot of which is given in Figure <ref type="figure">2</ref>. The average lateral deviation relative to the reference path is around 0.50 m, without optimization or subject training. The guidance system integrates no motion model and, apart from the prior camera calibration, no harness calibration was performed when exchanging participants. This demonstrates the relative robustness of the system to variations in acquisition conditions. A comparison of the tracks obtained with the different guiding modalities is illustrated in Figure <ref type="figure" target="#fig_1">4</ref>, along a totally open indoor trip with no wall to follow. These experimental results demonstrate the effectiveness of the vision-based localization system in keeping the walker within a navigation corridor less than one meter wide along the intended path.</p></div>
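The lateral-deviation statistics reported in Table 1 can be computed as the distance from each walked position to the nearest segment of the reference path. The sketch below is an assumed reconstruction of that measure (function names and the 2D polyline representation are ours), not the authors' analysis code.

```python
import numpy as np

def lateral_deviations(track, reference):
    """Mean and standard deviation of the distance from each walked position
    to the nearest segment of the reference path (a 2D polyline)."""
    devs = []
    for p in np.asarray(track, dtype=float):
        best = np.inf
        for a, b in zip(reference[:-1], reference[1:]):
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            ab = b - a
            # Parameter of the orthogonal projection of p, clamped to the segment.
            s = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
            best = min(best, np.linalg.norm(p - (a + s * ab)))
        devs.append(best)
    return float(np.mean(devs)), float(np.std(devs))

# Hypothetical example: a straight 10 m reference path and two walked positions.
mean_dev, std_dev = lateral_deviations([(1.0, 0.5), (5.0, -0.5)],
                                       [(0.0, 0.0), (10.0, 0.0)])
```

Applied to a whole logged trip, such per-position deviations give exactly the kind of per-subject mean and standard deviation reported in Table 1.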
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions and Future Work</head><p>A wearable vision system has been developed for completely localizing a person on a memorized path. Vision-based localization achieves the same accuracy as a DGPS receiver without the drawbacks of that cumbersome and costly system. Our real-time localization system can be used for pedestrian navigation both outdoors and indoors without depending on environment instrumentation, which is a clear advantage when designing a widely usable autonomous system for assisting visually impaired people. Future work is dedicated to improving the ergonomics and robustness of the system in collaboration with end-users. The key issue in improving guidance efficiency is to find the best communication modalities, taking into account psychological and ergonomic factors, with a large panel of users with profound vision loss in various situations. Navigation control can be analyzed live through quantitative feedback on path efficiency, which will be a very useful tool for investigating different multimodal adaptive interfaces. We have observed that performance improves with practice: the machine needs to learn to work with the user and adapt walking models to each user. We also plan to investigate a second prototype combining a glasses-mounted camera with 3-axis gyroscopes to estimate the movements of the head, and to develop the application on PDAs. The ambition of the project is to lead to a convenient embedded personal guidance system dedicated to visually impaired users, complementing existing obstacle detection devices such as the long cane. Such a system may also be useful for the locomotion training of blind people, or for elderly people with memory loss, to improve their mobility and social inclusion. 
</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 3</head><label>3</label><figDesc>Figure 3 Left: Outdoor tracks of three visually impaired subjects equipped with our vision-based localization system in an unfamiliar open-space environment. Right: A surrounding view of the scene.</figDesc><graphic coords="4,70.92,220.38,280.20,280.86" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 4</head><label>4</label><figDesc>Figure 4 Two successive tracks of the same blind subject guided by the verbal navigational prompts (vocal) and by the sonar interface (bip), respectively, with the ping frequency set according to the angular deviation.</figDesc><graphic coords="5,137.88,323.94,319.62,263.58" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="2,99.84,473.34,210.97,253.44" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="3,113.76,208.86,367.80,254.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Indoor and outdoor localization and navigation performance.</figDesc><table><row><cell>Outdoor trip (150 m)</cell><cell>Subject 1</cell><cell>Subject 2</cell><cell>Subject 3</cell></row><row><cell>Average lateral deviation (m)</cell><cell>0.59</cell><cell>0.34</cell><cell>0.66</cell></row><row><cell>Standard deviation of lateral deviation (m)</cell><cell>0.35</cell><cell>0.20</cell><cell>0.43</cell></row><row><cell>Indoor trip (70 m)</cell><cell>Subject 1</cell><cell>Subject 2</cell><cell>Subject 3</cell></row><row><cell>Average lateral deviation (m)</cell><cell>0.36</cell><cell>0.40</cell><cell>0.47</cell></row><row><cell>Standard deviation of lateral deviation (m)</cell><cell>0.25</cell><cell>0.22</cell><cell>0.27</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgement</head><p>The authors thank Gregory Gérenton for his support in the software optimisation and his active contribution to the experiments during his training course in the laboratory. Special thanks are also due to Nièle Pouzet, a blind student of the University, for her willingness to test our prototype.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Orientation aid implementing the global positioning system</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Brusnighan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">15th IEEE Annual Northeast Bioengineering Conference</title>
				<imprint>
			<date type="published" when="1989">1989</date>
			<biblScope unit="page" from="33" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A GPS-based navigation aid for the blind</title>
		<author>
			<persName><forename type="first">W</forename><surname>Balachandran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. IEEE Symposium on Wearable Computers</title>
				<imprint>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="34" to="36" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Drishti: an integrated navigation system for the visually impaired</title>
		<author>
			<persName><forename type="first">A</forename><surname>Helal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. IEEE Symposium on Wearable Computers</title>
				<imprint>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="149" to="156" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An Indoor localization aid for the visually impaired</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Hesch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">I</forename><surname>Roumeliotis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Int. Conf. on Robotics and Automation</title>
				<meeting><address><addrLine>Roma, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007-04-10">2007. April 10-14</date>
			<biblScope unit="page" from="3545" to="3551" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Navigator -a talking GPS receiver for the blind</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kowalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kwasniewski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">9th Int. Conf. on Computers Helping People with Special Needs</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting><address><addrLine>Paris, France; Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004-07-07">2004. July 7-9</date>
			<biblScope unit="volume">3118</biblScope>
			<biblScope unit="page" from="446" to="449" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">RFID in robot assisted indoor navigation for the visually impaired</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kulyukin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gharpure</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Nicholson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pavithran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Int. Conf. on Intelligent Robots and Systems</title>
				<meeting><address><addrLine>Sendai, Japan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004-09-28">2004. Sept. 28 -Oct. 2</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1979" to="1984" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">GPS-based navigation systems for the visually impaired</title>
		<author>
			<persName><forename type="first">J</forename><surname>Loomis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Fundamentals of Wearable Computers and Augmented Reality</title>
				<imprint>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="429" to="446" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Development of navigation system for the blind using GPS and mobile phone combination</title>
		<author>
			<persName><forename type="first">H</forename><surname>Makino</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Int. Conf. of Engineering in Medicine and Biology</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="506" to="507" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The blind interactive guide system using RFID-based indoor positioning system</title>
		<author>
			<persName><forename type="first">J</forename><surname>Na</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">10th Int. Conf. on Computers Helping People with Special Needs</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting><address><addrLine>Linz, Austria; Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2006-07-11">2006. July 11-13</date>
			<biblScope unit="volume">4061</biblScope>
			<biblScope unit="page" from="1298" to="1305" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Drishti: an integrated indoor/outdoor blind navigation system and service</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Helal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Moore</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2nd Annual Conf. on Pervasive Computing and Communications</title>
				<meeting><address><addrLine>Orlando, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004-03-14">2004. March 14-17</date>
			<biblScope unit="page" from="23" to="32" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Localization in urban environments: monocular vision compared to a differential GPS sensor</title>
		<author>
			<persName><forename type="first">E</forename><surname>Royer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lhuillier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dhome</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Chateau</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. Conf. on Computer Vision and Pattern Recognition</title>
				<meeting><address><addrLine>San Diego, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005-06-20">2005. June 20-25</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="114" to="121" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
