<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Fault Tolerant Indoor Positioning Based on Azimuth Measurements</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Márk</forename><surname>Rátosi</surname></persName>
							<email>ratosi@dcs.uni-pannon.hu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Pannonia</orgName>
								<address>
									<addrLine>Egyetem út 10</addrLine>
									<postCode>8200</postCode>
									<settlement>Veszprém</settlement>
									<country key="HU">Hungary</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gyula</forename><surname>Simon</surname></persName>
							<email>simon.gyula@amk.uni-obuda.hu</email>
							<affiliation key="aff1">
								<orgName type="institution">Óbuda University</orgName>
								<address>
									<addrLine>Budai út 45</addrLine>
									<postCode>8000</postCode>
									<settlement>Székesfehérvár</settlement>
									<country key="HU">Hungary</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Fault Tolerant Indoor Positioning Based on Azimuth Measurements</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">677428DC7BED93CCDCB6DC41D92F97E9</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T02:48+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A novel robust positioning method is proposed for indoor 2-D localization, based on azimuth measurements of beacons deployed at known locations. From the measured azimuths and the beacon positions, the location of the sensor is determined. The proposed method utilizes the RANSAC algorithm to detect and reject outlier measurements, possibly present due to incorrect beacon detections or reflections. The outlier detection is based on a realistic, location-dependent measurement error model. The performance of the proposed method is illustrated by simulations and real measurements.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Several indoor localization systems use angle of arrival (AOA) or angle difference of arrival (ADOA) measurements, using light <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b3">[4]</ref>, acoustic <ref type="bibr" target="#b4">[5]</ref>, or radio <ref type="bibr" target="#b6">[6]</ref>, <ref type="bibr" target="#b7">[7]</ref> sources. In these systems the beacons transmit signals, which are detected by the sensors; and the AOA values of the beacons, or ADOA values of beacon pairs, are measured. These angle measurements are used to estimate either the unknown location of the sensor (using multiple beacons, deployed at known positions), or the unknown location of the beacon (using multiple sensors, deployed at known positions).</p><p>The angles can be measured in 3-D, where both the azimuth and elevation of the beacon are measured <ref type="bibr" target="#b3">[4]</ref>, or in 2-D, using only the azimuth values <ref type="bibr" target="#b2">[3]</ref>. The azimuth-only estimation requires much less computation <ref type="bibr" target="#b8">[8]</ref>, and is sufficient for applications where only the 2-D location is to be determined and the elevation is irrelevant (e.g. when a fork-lift truck is tracked, see Fig. <ref type="figure">1</ref>). 
The sensor's direction, however, must be known or measured in the azimuth-only method; the simplest solution being when the sensor is facing upwards <ref type="bibr" target="#b2">[3]</ref>.</p><p>The estimation of the sensor position can be made by solving an equation system <ref type="bibr" target="#b9">[9]</ref>; or, in the case of redundant measurements, least squares methods <ref type="bibr" target="#b3">[4]</ref>, consensus-based approaches <ref type="bibr" target="#b2">[3]</ref>, or exhaustive search methods <ref type="bibr" target="#b3">[4]</ref> were proposed. The various estimation schemes provide good estimates when enough measurements are available and the measurements have reasonably small errors. Measurements with large errors (i.e. outliers), usually resulting from incorrect detections or reflections, may cause large errors in the position estimate. Such outliers must be removed from the data used in the estimation process, to avoid incorrect estimates. Popular methods include model-based approaches (using e.g. Kalman filters) <ref type="bibr" target="#b10">[10]</ref>, statistical methods <ref type="bibr" target="#b11">[11]</ref>, consistency-based approaches <ref type="bibr" target="#b2">[3]</ref>, and Random Sample Consensus (RANSAC) <ref type="bibr" target="#b12">[12]</ref>, <ref type="bibr" target="#b13">[13]</ref>. RANSAC is popular because it iteratively adds inliers to and excludes outliers from a small starting set of measurements, thus providing robust estimates with a reasonable amount of computation.</p><p>In this paper we propose a robust RANSAC-based solution for azimuth-only AOA/ADOA systems. The proposed solution requires angle measurements of at least 3 beacons. With the minimal number of beacons an algebraic solution is provided, which is naturally sensitive to outliers. When redundant (i.e. four or more) measurements are present, RANSAC is used to select the largest possible consistent beacon set to calculate a robust position estimate. 
A practical error model for cameras is proposed to determine the position-dependent inlier angle tolerance. The performance of the proposed solution is illustrated by simulation examples and practical measurements as well.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Proposed Localization Method</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">System architecture</head><p>In the proposed system the position of a camera is estimated; the camera may be deployed e.g. on top of a fork-lift truck, as shown in Fig. <ref type="figure">1</ref>. The camera, equipped with a fisheye lens, is facing upwards, thus the sensor's normal vector is in line with the coordinate system's Z axis. The beacons (e.g. blinking LEDs) are deployed at known positions (e.g. along the walls). The beacons continuously transmit their unique IDs, using visible light communication techniques (in the test system the RUPSOOK protocol was used <ref type="bibr" target="#b14">[14]</ref>). The beacons are detected on the camera's image stream.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1:</head><p>The architecture of the localization system. Blinking beacons are detected on the image of the fish-eye camera. The camera location is determined using the known beacon positions. The world coordinate system is 𝐾 with axes 𝑋 , 𝑌 , 𝑍 , while the camera coordinate system is 𝐾 with axes 𝑋 , 𝑌 , 𝑍 , where 𝑍 and 𝑍 are parallel. The origin of 𝐾 is the unknown camera position 𝐶 * = (𝑥 , 𝑦 , 𝑧 ), and its orientation is the unknown angle 𝜑 , as shown in Fig. <ref type="figure" target="#fig_0">2</ref>. Since 𝑍 is parallel with 𝑍 (the camera is facing upwards), the 2-D beacon directions can be used, as shown in Fig. <ref type="figure" target="#fig_0">2</ref>, which greatly simplifies the computation need <ref type="bibr" target="#b15">[15]</ref>. Notice that we assumed that the axis of the camera is parallel with the Z axis. It may easily be satisfied in certain applications (see e.g. Fig. <ref type="figure">1</ref>), or the sensor inclination can be measured and compensated for, e.g. <ref type="bibr" target="#b2">[3]</ref>.</p><p>In coordinate system 𝐾 let us denote the known coordinates of beacon 𝐵 by 𝑃 * = (𝑥 , 𝑦 , 𝑧 ), and the orthogonal projections of 𝑃 * and 𝐶 * to plane X-Y by 𝑃 = (𝑥 , 𝑦 ) and 𝐶 = (𝑥 , 𝑦 ), respectively. The measured direction of beacon 𝐵 on the X-Y plane in coordinate system 𝐾 is denoted by 𝛼 , as shown in Fig. <ref type="figure" target="#fig_0">2</ref>. Thus the inputs of the estimator process are measurements 𝛼 , and beacon positions 𝑃 , 𝑖 = 1, 2, … , 𝑁 , where 𝑁 is the number of beacons, while the output is position 𝐶 and orientation 𝜑 (in world coordinate system 𝐾 ).</p><p>In real scenarios bad measurements are not uncommon (e.g. due to reflections), resulting in outlier measurement values. Also, beacons may be unintentionally relocated, thus certain values 𝑃 may be wrong and the measured angles are inconsistent with them, practically behaving like outliers. 
To achieve a fault-tolerant system, it is essential that the estimation process be resilient to outliers in the set of input values. In the proposed solution, RANSAC is used to provide robust estimates. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Error Model</head><p>The beacons are detected on the camera's image stream. On the left-hand side of Fig. <ref type="figure" target="#fig_1">3</ref> a bird's-eye view of the area is shown, with the camera 𝐶, beacons 𝐵 , 𝐵 , 𝐵 , 𝐵 , and an obstacle partly covering beacon 𝐵 . On the right-hand side the fish-eye camera image is shown, with two possible sources of detection error.</p><p>First, the center estimate of a beacon image may be inaccurate, as illustrated on 𝐵 . This center detection error Δ𝐶 is small (around one pixel), and if all beacons are approximately on the horizon, the angle error is independent of the distance between the beacon and the camera. We model this error as a random additive error 𝛼 ( ) with uniform distribution between −Δ𝛼 and +Δ𝛼 , where Δ𝛼 is constant. If the radius of the horizon on the camera image is 𝑅 pixels, then the value of Δ𝛼 can be approximated as follows:</p><formula xml:id="formula_0">Δ𝛼 = atan2(Δ𝐶, 𝑅),<label>(1)</label></formula><p>where atan2 is the two-argument arctangent function.</p><p>The second type of detection error may happen when a beacon is partially obscured (see 𝐵 in Fig. <ref type="figure" target="#fig_1">3</ref>). In this case the camera image contains a misshapen beacon, the center of which may have an offset with regard to the true center. The error is the largest when almost the whole beacon is covered; in this case the center of the observed beacon may appear up to the beacon's horizontal size 𝑊 away from the true position. The resulting angle error may be higher if the beacon is closer to the camera (i.e. the beacon image is larger). We model this angle error as a random additive error 𝛼 ( ) , with uniform distribution between −Δ𝛼 , and +Δ𝛼 , , where Δ𝛼 , is the maximal angle error as follows:</p><formula xml:id="formula_2">Δ𝛼 , = atan2(𝑊, 𝐷),<label>(2)</label></formula><p>where 𝑊 is the horizontal size of beacon 𝐵 , and 𝐷 is the distance between 𝐵 and the camera.</p><p>The measured angle of beacon 𝐵 is modelled as follows:</p><formula xml:id="formula_3">𝛼 = 𝛼 ( ) + 𝛼 ( ) + 𝛼 ( ) ,<label>(3)</label></formula><p>where the first term is the ideal (error-free) angle and the other two terms are the center detection and coverage errors defined above.</p></div>
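As a concrete illustration of this error model, the tolerances (1)-(2) and a noisy measurement per (3) can be sketched in Python (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import math
import random

def center_tolerance(delta_c_px: float, horizon_radius_px: float) -> float:
    """Eq. (1): max angle error (radians) caused by beacon-center detection."""
    return math.atan2(delta_c_px, horizon_radius_px)

def coverage_tolerance(beacon_width_m: float, distance_m: float) -> float:
    """Eq. (2): max angle error (radians) caused by partial coverage."""
    return math.atan2(beacon_width_m, distance_m)

def noisy_azimuth(alpha_ideal: float, d_alpha0: float, d_alpha_cov: float,
                  rng: random.Random) -> float:
    """Eq. (3): ideal angle plus uniform center and coverage errors."""
    return (alpha_ideal
            + rng.uniform(-d_alpha0, d_alpha0)
            + rng.uniform(-d_alpha_cov, d_alpha_cov))
```

With ΔC = 1 px and a horizon radius of 150 px, (1) gives about 0.38°, matching the Δ𝛼 ≈ 0.4° used later in Section 3.1.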
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Overview of the RANSAC-based localization process</head><p>In this section we briefly summarize the operation of the proposed localization algorithm. The technical details will be discussed later; the summary contains references to sections with the detailed discussion. The pseudo-code of the proposed algorithm is the following:</p><p>Repeat Steps 1-4 𝑁ₜ times. The number 𝑁ₜ of iterations is discussed in Section 2.4.</p><p>Step 1: Select 3 beacons randomly. Let the selected beacon triplet be 𝐵𝑇 .</p><p>Step 2: Calculate initial position estimate 𝐶 and orientation estimate 𝜑 using 𝐵𝑇 . The estimates are calculated from the positions of beacons 𝐵𝑇 and the measured angles of these beacons. The details of the estimation are presented in Section 2.5.</p><p>Step 3: For all the beacons, check whether their measurements are consistent with 𝐶 and 𝜑 .</p><p>The set of supporter beacons of 𝐵𝑇 (i.e. the inliers) is denoted by 𝑆𝑈𝑃 . The proposed inlier-outlier classification method uses a position-dependent threshold; thus the effect of the measurement errors will be location-independent. The method is discussed in Section 2.6.</p><p>Step 4: Calculate the number of supporter beacons 𝑁𝑆 = |𝑆𝑈𝑃 | and their mean squared angle measurement error 𝐸 , as described in Section 2.6.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>End Repeat</head><p>Step 5: From the 𝑁 experiments select the triplet with the highest value 𝑁𝑆 . If multiple triplets exist with the same 𝑁𝑆 , then select the one with the smallest value 𝐸 .</p><p>Step 6: Using all of the supporter beacons in 𝑆𝑈𝑃 , calculate refined estimates 𝐶 and 𝜑 , starting from 𝐶 , as described in Section 2.7.</p></div>
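The selection rule of Steps 4-5 (largest supporter set, ties broken by smallest mean squared error) can be sketched as follows; the candidate-tuple layout and beacon labels are our own illustration, not the paper's data structures:

```python
def select_best_triplet(candidates):
    """Each candidate is (triplet, NS_j, E_j): the beacon triplet, its number
    of supporters, and the mean squared angle error of those supporters.
    Step 5: maximize NS_j; among equal NS_j, minimize E_j."""
    return max(candidates, key=lambda c: (c[1], -c[2]))

# Hypothetical candidates: two triplets tie with 9 supporters,
# so the one with the smaller error E_j wins.
cands = [(("B1", "B2", "B3"), 7, 0.020),
         (("B1", "B4", "B5"), 9, 0.015),
         (("B2", "B4", "B6"), 9, 0.011)]
best = select_best_triplet(cands)
```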
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Number of iterations</head><p>The estimation process uses random beacon triplets to find an initial position estimate. For the estimator algorithm to provide a good estimate, the measurements must fulfill the following two conditions: a) All three initial beacons must be detected correctly, i.e. with small measurement error. If some of the initial measurements are outliers, the initial estimate will be wrong. b) According to the phenomenon called geometrical dilution of precision (GDOP), some setups are more sensitive to measurement errors than others <ref type="bibr" target="#b16">[16]</ref>. Thus the geometry of the initial beacons and the camera must allow a good estimate when the measurements are correct (i.e. the measurement error is small, according to condition (a)). For the estimator algorithm to provide a reliable estimate, we need at least one good initial triplet, fulfilling both requirements, among the 𝑁ₜ experiments. Since beacon triplets 𝐵𝑇 are chosen randomly, it is possible to determine the probability 𝑃𝑟 of having at least one good beacon triplet among the randomly chosen 𝑁ₜ triplets. In the reverse direction, we can use this design parameter 𝑃𝑟 to determine the necessary number of iterations 𝑁ₜ <ref type="bibr" target="#b17">[17]</ref>.</p><p>Let us denote the number of beacons and the number of total experiments by 𝑁 and 𝑁ₜ, respectively. Let us suppose that out of the 𝑁 measurements 𝑁ᵢₙ are inliers. The probability of choosing a reliable triplet is</p><formula xml:id="formula_4">𝑃𝑟{𝐵𝑇 is reliable} = 𝑃* 𝑁ᵢₙ(𝑁ᵢₙ − 1)(𝑁ᵢₙ − 2) / (𝑁(𝑁 − 1)(𝑁 − 2)),<label>(4)</label></formula><p>where 𝑃* is the conditional probability of providing a good estimate if all of the initial measurements are inliers <ref type="bibr" target="#b17">[17]</ref>, and the remaining factor is the probability that all three randomly chosen beacons are inliers.</p><p>The probability 𝑃𝑟 of having at least one reliable triplet out of 𝑁ₜ trials can be expressed as follows:</p><formula xml:id="formula_5">𝑃𝑟 = 1 − (1 − 𝑃𝑟{𝐵𝑇 is reliable})^𝑁ₜ<label>(5)</label></formula><p>Thus the number of necessary trials, given probability 𝑃𝑟 and 𝑃*, is the following:</p><formula xml:id="formula_6">𝑁ₜ = log(1 − 𝑃𝑟) / log(1 − 𝑃𝑟{𝐵𝑇 is reliable})<label>(6)</label></formula><p>In practice the number of inlier/outlier measurements is usually unknown. However, it is usually possible to give a safe upper bound for the number of outliers (e.g. if there is rarely more than one outlier then a safe assumption is that 𝑁ᵢₙ ≥ 𝑁 − 2, and an even safer assumption is that 𝑁ᵢₙ ≥ 𝑁 − 3). If the number of inliers is at least 𝑁ᵢₙ then the calculated 𝑁ₜ, according to (6), provides an upper bound for the required number of trials.</p><p>The probability 𝑃𝑟 is a design parameter, set close to 1 (e.g. 0.999). Probability 𝑃* can be estimated using simulations, using the inlier angle tolerance value as input parameter <ref type="bibr" target="#b17">[17]</ref>.</p><p>For a small number of beacons, it is feasible to use all the possible beacon triplets. Even for 15 visible beacons, the number of trials is only 455 (see the speed measurements in Section 3.2). However, for a higher number of beacons (6) may really be helpful: e.g. for 50 visible beacons 19600 trials would be necessary, which is not feasible for real-time applications. In the tests, we had a small number of beacons and thus used all possible triplets.</p></div>
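The iteration count of (4)-(6), in its standard RANSAC form [17], translates directly into code. The helper below is our sketch (the combinatorial factor assumes triplets drawn without replacement):

```python
import math

def ransac_trials(n_beacons: int, n_inliers: int,
                  p_star: float, pr_target: float) -> int:
    """Eq. (4)-(6): number of random triplets needed so that, with probability
    pr_target, at least one triplet is all-inlier and well-conditioned."""
    # Probability that a randomly drawn triplet consists of inliers only.
    p_all_in = math.comb(n_inliers, 3) / math.comb(n_beacons, 3)
    q = p_star * p_all_in   # eq. (4): probability of one reliable triplet
    # Eq. (6): N_t = log(1 - Pr) / log(1 - q), rounded up to an integer.
    return math.ceil(math.log(1.0 - pr_target) / math.log(1.0 - q))
```

For comparison with the exhaustive alternative mentioned above: 15 visible beacons give 455 possible triplets and 50 beacons give 19600, which is why (6) matters for larger beacon sets.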
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.">Calculation of the initial estimate</head><p>For the sake of simplicity, and without loss of generality, in iteration step 𝑗 let us denote the selected beacons by 𝐵𝑇 = {𝐵₁, 𝐵₂, 𝐵₃}. Our goal is to determine camera position 𝐶 and orientation 𝜑 . For simpler notation, for now we will omit iteration index 𝑗 and denote the estimates by 𝐶 and 𝜑 . Let us use the following notations: 𝑏ᵢ = 𝑃ᵢ − 𝐶, (7) 𝜃ᵢ,ⱼ = ∠𝑃ᵢ𝐶𝑃ⱼ = 𝛼ⱼ − 𝛼ᵢ, (8) where 𝑖, 𝑗 = 1,2,3. Let us choose two beacons (e.g. 𝐵ᵢ and 𝐵ⱼ), and suppose that 𝜃ᵢ,ⱼ ≠ 0 and 𝜃ᵢ,ⱼ ≠ 𝜋.</p><p>Let us express the square of their distance 𝑑ᵢ,ⱼ, using the law of cosines, as follows:</p><formula xml:id="formula_7">𝑑ᵢ,ⱼ² = ‖𝑏ᵢ‖² + ‖𝑏ⱼ‖² − 2 ‖𝑏ᵢ‖ ‖𝑏ⱼ‖ cos 𝜃ᵢ,ⱼ<label>(9)</label></formula><p>Using the fact that 𝑏ᵢ and 𝑏ⱼ are on the same plane X-Y, the equality 𝑏ᵢ × 𝑏ⱼ = 𝑛 ‖𝑏ᵢ‖ ‖𝑏ⱼ‖ sin 𝜃ᵢ,ⱼ holds, where 𝑛 is the unit vector in the direction of axis Z. In this case (9) takes the following form:</p><formula xml:id="formula_8">𝑑ᵢ,ⱼ² = ‖𝑏ᵢ‖² + ‖𝑏ⱼ‖² − 2 𝑛ᵀ(𝑏ᵢ × 𝑏ⱼ) cot 𝜃ᵢ,ⱼ<label>(10)</label></formula><p>Substituting 𝑑ᵢ,ⱼ² = (𝑥ᵢ − 𝑥ⱼ)² + (𝑦ᵢ − 𝑦ⱼ)² and ‖𝑏ᵢ‖² = (𝑥ᵢ − 𝑥)² + (𝑦ᵢ − 𝑦)², and using notation 𝑐ᵢ,ⱼ = cot 𝜃ᵢ,ⱼ, we get the following equation:</p><formula xml:id="formula_9">𝑥ᵢ𝑥ⱼ + 𝑦ᵢ𝑦ⱼ + (𝑥ⱼ𝑦ᵢ − 𝑥ᵢ𝑦ⱼ)𝑐ᵢ,ⱼ = 𝑥(𝑥ᵢ + 𝑥ⱼ + (𝑦ᵢ − 𝑦ⱼ)𝑐ᵢ,ⱼ) + 𝑦(𝑦ᵢ + 𝑦ⱼ + (𝑥ⱼ − 𝑥ᵢ)𝑐ᵢ,ⱼ) − 𝑥² − 𝑦²<label>(11)</label></formula><p>A similar result is obtained if beacon pair 𝐵ⱼ and 𝐵ₖ is used:</p><formula xml:id="formula_11">𝑥ⱼ𝑥ₖ + 𝑦ⱼ𝑦ₖ + (𝑥ₖ𝑦ⱼ − 𝑥ⱼ𝑦ₖ)𝑐ⱼ,ₖ = 𝑥(𝑥ⱼ + 𝑥ₖ + (𝑦ⱼ − 𝑦ₖ)𝑐ⱼ,ₖ) + 𝑦(𝑦ⱼ + 𝑦ₖ + (𝑥ₖ − 𝑥ⱼ)𝑐ⱼ,ₖ) − 𝑥² − 𝑦²<label>(12)</label></formula><p>By subtracting ( <ref type="formula" target="#formula_11">12</ref>) from ( <ref type="formula" target="#formula_9">11</ref>) the squared terms vanish and we get the following linear equation: −𝑥ⱼ𝜉ₖ,ᵢ − 𝑦ⱼ𝜐ₖ,ᵢ − ωᵢ,ⱼ𝑐ᵢ,ⱼ + ωⱼ,ₖ𝑐ⱼ,ₖ = 𝑥(−𝜉ₖ,ᵢ + 𝜐ᵢ,ⱼ𝑐ᵢ,ⱼ − 𝜐ⱼ,ₖ𝑐ⱼ,ₖ) + 𝑦(−𝜐ₖ,ᵢ − 𝜉ᵢ,ⱼ𝑐ᵢ,ⱼ + 𝜉ⱼ,ₖ𝑐ⱼ,ₖ), (13) where 𝜉ᵢ,ⱼ = 𝑥ᵢ − 𝑥ⱼ, 𝜐ᵢ,ⱼ = 𝑦ᵢ − 𝑦ⱼ, and ωᵢ,ⱼ = 𝑥ᵢ𝑦ⱼ − 𝑥ⱼ𝑦ᵢ. Now let us calculate <ref 
type="bibr" target="#b13">(13)</ref>, by substituting (𝑖, 𝑗, 𝑘) = (1,2,3), (2,3,1), and (3,1,2). We get the following overdetermined linear equation system:</p><formula xml:id="formula_12">𝐴 ⋅ [𝑥, 𝑦]ᵀ = 𝑏<label>(14)</label></formula><p>where the rows of 𝐴 and 𝑏 are the right-hand and left-hand side coefficients of (13), respectively, for the three index substitutions:</p><formula xml:id="formula_13">𝐴 = [−𝜉ₖ,ᵢ + 𝜐ᵢ,ⱼ𝑐ᵢ,ⱼ − 𝜐ⱼ,ₖ𝑐ⱼ,ₖ, −𝜐ₖ,ᵢ − 𝜉ᵢ,ⱼ𝑐ᵢ,ⱼ + 𝜉ⱼ,ₖ𝑐ⱼ,ₖ], 𝑏 = [−𝑥ⱼ𝜉ₖ,ᵢ − 𝑦ⱼ𝜐ₖ,ᵢ − ωᵢ,ⱼ𝑐ᵢ,ⱼ + ωⱼ,ₖ𝑐ⱼ,ₖ], (𝑖, 𝑗, 𝑘) = (1,2,3), (2,3,1), (3,1,2)</formula><p>If no two of the three beacons are on the same line with C (i.e. sin 𝜃₁,₂ ≠ 0, sin 𝜃₂,₃ ≠ 0, and sin 𝜃₃,₁ ≠ 0) then 𝑐₁,₂, 𝑐₂,₃, and 𝑐₃,₁ are finite and any two lines can be selected from 𝐴 and 𝑏 to construct 𝐴′ and 𝑏′, resulting in an equation system with two equations and two unknowns. The initial location estimate is the following:</p><formula xml:id="formula_15">𝐶 = [𝑥, 𝑦]ᵀ = (𝐴′)⁻¹𝑏′.<label>(17)</label></formula><p>If one pair of beacons is on the same line with C (e.g. 𝐵₁ and 𝐵₂), then sin 𝜃₁,₂ = 0. In this case only one row is selected from 𝐴 and 𝑏, the one which does not include the infinite term 𝑐₁,₂. (E.g. if 𝐵₁, 𝐵₂, and C are on the same line then 𝑐₁,₂ would be infinite, thus only the second row is used from 𝐴 and 𝑏). In this case another equation must be constructed for 𝐴′ and 𝑏′; from (13) using limit sin 𝜃ᵢ,ⱼ → 0 the following equation can be obtained:</p><formula xml:id="formula_17">𝑥𝜐ᵢ,ⱼ − 𝑦𝜉ᵢ,ⱼ = −ωᵢ,ⱼ,<label>(18)</label></formula><p>and again (17) can be used to calculate the position estimate. If all three beacons and C are on the same line (i.e. sin 𝜃₁,₂ = sin 𝜃₂,₃ = sin 𝜃₃,₁ = 0) then the position cannot be determined from this set of measurements. Also, there is no solution if all three beacons and C are on the same circle (det(𝐴′) ≅ 0).</p><p>In practice the angle differences 𝜃ᵢ,ⱼ are checked before calculating 𝐴 and 𝑏: if any of them is closer to 0 or 𝜋 than a small threshold then (18) is applied. 
If all measurements are close to 0 or 𝜋 then the triplet is ignored (since the position cannot be reliably determined), and the iteration is continued by selecting a different beacon triplet.</p><p>Once the initial position is estimated, the initial orientation estimate can be calculated as follows: Let us choose any of the three beacons, e.g. 𝐵₁. From point 𝐶 beacon 𝐵₁ is observed in 𝐾 under angle 𝛽₁ = atan2(𝑦₁ − 𝑦, 𝑥₁ − 𝑥). (19)</p><p>The observed angle in 𝐾 was 𝛼₁, thus the estimated orientation, according to Fig. <ref type="figure" target="#fig_0">2</ref>, is the following:</p><formula xml:id="formula_18">𝜑 = 𝛽₁ − 𝛼₁.<label>(20)</label></formula></div>
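The closed-form step above exploits the fact that each angle difference 𝜃ᵢ,ⱼ constrains the camera to a circle through beacons 𝑃ᵢ and 𝑃ⱼ (the inscribed-angle theorem). A dependency-free Python sketch of a triplet solver built on that same geometry is given below; the intermediate algebra (explicit circle centers and radical lines, rather than the matrix 𝐴 of (14)-(16)) and all names are our own illustration:

```python
import math

def initial_pose(P, alphas):
    """Estimate camera position C and orientation phi from three beacons.
    P: [(x1, y1), (x2, y2), (x3, y3)] beacon positions in the world frame.
    alphas: [a1, a2, a3] measured azimuths in the camera frame (radians).
    Each angle difference a_j - a_i constrains C to a circle through P_i, P_j;
    the radical lines of those circles are linear in (x, y)."""
    p1, p2, p3 = P
    a1, a2, a3 = alphas

    def circle_center(pi, pj, theta):
        # Center of the circle from which chord pi -> pj subtends angle theta.
        # Degenerate cases (theta ~ 0 mod pi, i.e. collinear) are not handled.
        mx, my = (pi[0] + pj[0]) / 2.0, (pi[1] + pj[1]) / 2.0
        vx, vy = pj[0] - pi[0], pj[1] - pi[1]
        s = 0.5 / math.tan(theta)          # signed offset along the bisector
        return (mx - s * vy, my + s * vx)

    O12 = circle_center(p1, p2, a2 - a1)
    O23 = circle_center(p2, p3, a3 - a2)
    O31 = circle_center(p3, p1, a1 - a3)

    # Radical line of circles (12) and (23) passes through C and P2;
    # radical line of circles (12) and (31) passes through C and P1.
    n1 = (O23[0] - O12[0], O23[1] - O12[1])
    n2 = (O31[0] - O12[0], O31[1] - O12[1])
    b1 = n1[0] * p2[0] + n1[1] * p2[1]
    b2 = n2[0] * p1[0] + n2[1] * p1[1]
    det = n1[0] * n2[1] - n1[1] * n2[0]    # ~0 if beacons and C are concyclic
    x = (b1 * n2[1] - b2 * n1[1]) / det
    y = (n1[0] * b2 - n2[0] * b1) / det

    # Orientation as in (19)-(20), using beacon 1, wrapped to (-pi, pi].
    beta1 = math.atan2(p1[1] - y, p1[0] - x)
    phi = math.atan2(math.sin(beta1 - a1), math.cos(beta1 - a1))
    return (x, y), phi
```

As in the text, the solver fails when all three beacons are collinear with the camera or when all four points lie on one circle (det ≈ 0).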
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.6.">Location-dependent inlier-outlier classification</head><p>In the previous step the initial position 𝐶 = (𝑥 , 𝑦 ), and the initial orientation 𝜑 were estimated. If the initial estimates are correct then any beacon 𝐵ᵢ should be seen from ideal direction 𝛽ᵢ in the world coordinate system: 𝛽ᵢ = atan2(𝑦ᵢ − 𝑦 , 𝑥ᵢ − 𝑥 ). (25) Using ( <ref type="formula">23</ref>) and ( <ref type="formula">25</ref>), the location-dependent inlier/outlier classification is the following:</p><formula xml:id="formula_20">𝐵ᵢ ∈ 𝑆𝑈𝑃 (inlier): if |𝜀ᵢ| ≤ Δ𝛼ᵢ; 𝐵ᵢ ∉ 𝑆𝑈𝑃 (outlier): if |𝜀ᵢ| &gt; Δ𝛼ᵢ,<label>(26)</label></formula><p>where 𝑆𝑈𝑃 is the set of supporter beacons for triplet 𝐵𝑇 . Notice that the beacons in 𝐵𝑇 are naturally elements of 𝑆𝑈𝑃 . The mean squared angle measurement error 𝐸 <ref type="bibr" target="#b3">[4]</ref>, using the consistent measurements only, is the following:</p><formula xml:id="formula_22">𝐸 = (1/𝑁𝑆) ∑ 𝜀ᵢ², 𝐵ᵢ ∈ 𝑆𝑈𝑃.<label>(27)</label></formula></div>
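The classification (26), with the position-dependent tolerance Δ𝛼ᵢ = 2(Δ𝛼 + Δ𝛼 ,ᵢ) from the error model of Section 2.2 (larger tolerance for beacons close to the camera), can be sketched as follows; function and variable names are ours:

```python
import math

def wrap(a: float) -> float:
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def supporters(beacons, alphas, cam, phi, d_alpha0, widths):
    """Eq. (23)-(26): indices of beacons whose measured azimuth is consistent
    with the candidate pose (cam, phi), under a location-dependent tolerance."""
    sup = []
    for i, ((bx, by), alpha, w) in enumerate(zip(beacons, alphas, widths)):
        beta = math.atan2(by - cam[1], bx - cam[0])   # ideal direction, eq. (25)
        eps = wrap(alpha - beta + phi)                # observation error, eq. (23)
        dist = math.hypot(bx - cam[0], by - cam[1])
        tol = 2.0 * (d_alpha0 + math.atan2(w, dist))  # inlier tolerance
        if abs(eps) <= tol:
            sup.append(i)
    return sup
```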
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.7.">Refined location estimate</head><p>Let us construct error function 𝐸(𝑥, 𝑦) as follows: if the camera position were in (𝑥, 𝑦) then beacon 𝐵ᵢ would be observed in direction 𝛽ᵢ in coordinate system 𝐾 : 𝛽ᵢ(𝑥, 𝑦) = atan2(𝑦ᵢ − 𝑦, 𝑥ᵢ − 𝑥). (28) From measurements 𝛼ᵢ of the consistent beacons, the orientation estimate is refined as follows:</p><formula xml:id="formula_24">𝜑(𝑥, 𝑦) = (1/𝑁𝑆) ∑ (𝛽ᵢ(𝑥, 𝑦) − 𝛼ᵢ), 𝐵ᵢ ∈ 𝑆𝑈𝑃.<label>(29)</label></formula><p>Thus the observation error of beacon 𝐵ᵢ, provided the camera position is (𝑥, 𝑦), is 𝜀ᵢ(𝑥, 𝑦) = 𝛼ᵢ − atan2(𝑦ᵢ − 𝑦, 𝑥ᵢ − 𝑥) + 𝜑(𝑥, 𝑦), (30) and the mean square error function <ref type="bibr" target="#b3">[4]</ref>, similarly to (27), is the following:</p><formula xml:id="formula_26">𝐸(𝑥, 𝑦) = (1/𝑁𝑆) ∑ 𝜀ᵢ²(𝑥, 𝑦), 𝐵ᵢ ∈ 𝑆𝑈𝑃.<label>(31)</label></formula><p>The refined position estimate 𝐶 is at the minimum of 𝐸(𝑥, 𝑦):</p><formula xml:id="formula_28">𝐶 = (𝑥 , 𝑦 ) = argmin 𝐸(𝑥, 𝑦),<label>(32)</label></formula><p>and the refined orientation estimate 𝜑 is the following:</p><formula xml:id="formula_29">𝜑 = 𝜑(𝑥 , 𝑦 ).<label>(33)</label></formula><p>The location estimate is calculated by a gradient search on the error function 𝐸(𝑥, 𝑦), starting from initial position estimate (𝑥 , 𝑦 ). The search finds a (possibly local) minimum (𝑥 , 𝑦 ) near the initial estimate. It is not guaranteed that the global minimum of (31) is found, but since the selected measurements of beacons in 𝑆𝑈𝑃 are consistent and the search is started from a consistent initial estimate (𝑥 , 𝑦 ), the iterative search in practice quickly and accurately finds the correct estimate, usually close to the initial value (𝑥 , 𝑦 ).</p></div>
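The refinement step is a small unconstrained 2-D minimization. The paper only specifies "a gradient search", so the sketch below (finite-difference gradient with backtracking step control, names ours) is one possible realization, not the authors' implementation:

```python
import math

def wrap(a: float) -> float:
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def refine_pose(beacons, alphas, start, iters=500, step0=5.0):
    """Minimize the mean squared angle error E(x, y) of eq. (31), starting
    from the initial estimate; phi is re-estimated per eq. (29) at every
    candidate position. Returns the refined (x, y) and phi."""
    def phi_hat(x, y):                        # eq. (29): averaged orientation
        return sum(wrap(math.atan2(by - y, bx - x) - a)
                   for (bx, by), a in zip(beacons, alphas)) / len(beacons)

    def E(x, y):                              # eq. (30)-(31)
        p = phi_hat(x, y)
        return sum(wrap(a - math.atan2(by - y, bx - x) + p) ** 2
                   for (bx, by), a in zip(beacons, alphas)) / len(beacons)

    x, y = start
    for _ in range(iters):
        h = 1e-6                              # central-difference gradient
        gx = (E(x + h, y) - E(x - h, y)) / (2 * h)
        gy = (E(x, y + h) - E(x, y - h)) / (2 * h)
        s, e0 = step0, E(x, y)
        while s > 1e-12 and E(x - s * gx, y - s * gy) >= e0:
            s /= 2                            # backtracking line search
        if s <= 1e-12:
            break                             # (local) minimum reached
        x, y = x - s * gx, y - s * gy
    return (x, y), phi_hat(x, y)
```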
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Simulation results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Test environment</head><p>The tests were performed in a simulated environment, shown in Fig. <ref type="figure" target="#fig_3">4</ref>. The size of the simulated room was 22m × 22m, where 12 beacons were deployed in the positions shown in Fig. <ref type="figure" target="#fig_3">4</ref>. Performance tests were made on a test grid containing 23×23 points, the grid coordinates being integer numbers in meters, between −11m and +11m.</p><p>The measurements were generated as follows. In each test point the ideal angles were calculated, then a random error between −0.4° and +0.4° was added to simulate the center detection error, which is equivalent to Δ𝛼 = 0.4°. This value corresponds to the case when the distance between the beacon's image and the center of the camera is ~150 pixels and the center detection error is Δ𝐶 = 1 pixel. Also, a potential coverage error was simulated with beacon width value 𝑊 = 3 cm, according to (2). In each test point, 20 independent noise realizations (i.e. measurement errors 𝛼 ( ) + 𝛼 ( ) , according to (3)) were generated. The outliers were generated as follows: When one outlier was present, a random beacon was selected and a random angle measurement error in the range of ±[10°, 170°] was added. When 𝐾 ≥ 2 outliers were present then 𝐾 beacons were randomly selected and their generated measurements were shuffled. This error simulates multiple incorrect beacon identifications and the resulting false measurements.</p><p>The proposed method was compared with the method of exhaustion (MEX), proposed in <ref type="bibr" target="#b3">[4]</ref>, applied to the 2D case. The MEX method utilizes error function 𝐸 , which is similar to the error function of the proposed method:</p><formula xml:id="formula_31">𝐸 (𝑥, 𝑦) = (1/𝑁) ∑ 𝜀ᵢ²(𝑥, 𝑦), 𝑖 = 1, …, 𝑁. 
<label>(34)</label></formula><p>MEX uses an exhaustive (brute-force) search on a grid and is guaranteed to find the optimal position on the search grid. In the tests we used a search grid of size 1 cm for MEX to find the smallest error function value. On the other hand, the proposed algorithm is not restricted to a search grid, but finds the solution anywhere on the search plane, without any guarantee of global optimum. Since the error functions are similar (see ( <ref type="formula" target="#formula_26">31</ref>) and (34)), the expected behavior of the two algorithms is the same when there are no outliers.</p><p>Since the number of beacons is low (𝑁 = 12), in the tests all possible beacon triplets were used, i.e. 𝑁ₜ = 220 (the number of 3-element subsets of 12 beacons).</p></div>
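The measurement generation described above (uniform center and coverage noise, plus identity shuffling among K ≥ 2 randomly chosen beacons) can be sketched as follows; the single-outlier ±[10°, 170°] case is omitted for brevity, and all names are our own:

```python
import math
import random

def simulate_measurements(beacons, cam, phi, d_alpha0, width, k_outliers, rng):
    """Generate noisy azimuth measurements as in Section 3.1: uniform center
    and coverage noise per eq. (1)-(3); K >= 2 outliers are simulated by
    shuffling the measurements of K randomly chosen beacons."""
    alphas = []
    for bx, by in beacons:
        dist = math.hypot(bx - cam[0], by - cam[1])
        cov = math.atan2(width, dist)                  # eq. (2)
        ideal = math.atan2(by - cam[1], bx - cam[0]) - phi
        alphas.append(ideal + rng.uniform(-d_alpha0, d_alpha0)
                            + rng.uniform(-cov, cov))
    if k_outliers >= 2:                                # swap beacon identities
        idx = rng.sample(range(len(beacons)), k_outliers)
        rotated = [alphas[i] for i in idx[1:]] + [alphas[idx[0]]]
        for i, a in zip(idx, rotated):
            alphas[i] = a
    return alphas
```

The cyclic rotation among the chosen indices guarantees that every selected beacon actually receives a wrong measurement, mimicking misidentified beacon IDs.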
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Simulation results</head><p>In the first test, measurement noise was added to the ideal angle measurements, but no outliers were present. Fig. <ref type="figure" target="#fig_3">4</ref> shows the position errors for the proposed and the reference methods. The performances of the algorithms are very similar, as expected. The error is small, mainly below 10 cm. At certain points of the search space slightly higher errors can be observed: this is because of the poor GDOP; here 8 of the beacons (𝐵 , 𝐵 , 𝐵 , 𝐵 , 𝐵 , 𝐵 , 𝐵 , 𝐵 ) are on the same circle, while the other 4 beacons are also close to this circle. To allow detailed analysis of the errors, the error distributions are shown in Fig. <ref type="figure">5</ref>, using all of the experiments (23×23×20 = 10580 test runs). The behavior of the two algorithms is apparently quite similar, in line with the expectations. The average error was 2.65 cm and 2.67 cm for the proposed and reference methods, respectively. The slight difference is probably due to the fact that the reference method was evaluated on a 1 cm grid, while the proposed method provides results on the continuous plane. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The computational costs of the proposed and the reference methods were compared. Both methods were implemented in Matlab and run on a conventional laptop with an Intel Core i7-9750H processor. The proposed method (with 12 beacons, using all possible beacon triplets) required 56 microseconds on average to compute an estimate, while for the reference MEX method the average runtime was 278 milliseconds. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Measurement results</head><p>The test measurements were made inside and in front of an industrial warehouse. The map of the test area is shown in Fig. <ref type="figure" target="#fig_7">8 (a</ref>). On the right-hand side a part of the warehouse is visible, where nine beacons were deployed along the walls and on the shelves. Another six beacons were deployed outside the building, on the walls and poles. The camera was deployed on top of the vehicle, as shown in Fig. <ref type="figure" target="#fig_7">8 (b</ref>). An estimated path of the vehicle is shown in Fig. <ref type="figure" target="#fig_7">8</ref>. During the measurements the number of visible beacons varied from 3 to 13. Apart from the bumpy pavement, where the camera swayed significantly on the top of the vehicle, the trajectory estimate is smooth and contains variation of a few centimeters only. During the test, the algorithm detected outliers (due to reflecting surfaces in the warehouse) at three different path segments, in a total of ~550 image frames, as marked in Fig. <ref type="figure" target="#fig_7">8(a)</ref>. The outliers had no significant effect on the position estimates, which clearly illustrates the fault-tolerance of the proposed system.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Summary</head><p>A robust RANSAC-based solution, for 2-D position estimation using azimuth-only angle measurements, was proposed. The proposed method uses a practical error model and a corresponding position-dependent inlier angle tolerance to detect outliers. The initial estimate is calculated by a linearized equation system, using measurements from 3 beacons. The number of supporter beacons is increased incrementally using RANSAC. The best initial estimate (with the largest set of supporters and the smallest error function value) is used as an initial point for a gradient search to find the final position estimate; in the search those measurements are used which are consistent with the winning initial estimate. The performance of the proposed method was illustrated by simulation examples. It was shown that the proposed method and the reference MEX method have essentially the same accuracy when no outliers are present. This is due to the similar cost functions used by the two methods. However, in the presence of outliers, the reference method provided no usable results, since it used the bad measurements as well for the estimation, generating large errors. The proposed method was not sensitive to outliers: its performance degradation was minor, even when as many as 5 simultaneous outliers were present.</p><p>The computational cost of the proposed method is low: in the realistic test cases the computational time of one position estimate was as low as 56 microseconds, on a conventional laptop. The speed of the reference method, due to its exhaustive (brute-force) nature, was more than three orders of magnitude lower.</p><p>The performance of the proposed method was tested using real measurements as well, in an industrial warehouse. 
During the tests, the outliers had no effect on the position estimates, allowing positioning with a few centimeters of error only.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Angle measurement using the camera image: (a) 3D, (b) 2D view.</figDesc><graphic coords="3,128.88,71.76,337.20,123.36" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Detection error sources. (a) Bird's-eye view of the detection area. (b) Fisheye camera image. The detected and true beacon centers are denoted by red and green dots, respectively.</figDesc><graphic coords="3,161.76,614.52,285.72,107.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>)</head><label></label><figDesc>In coordinate system 𝐾 the ideal angle is 𝛼ᵢ = 𝛽ᵢ − 𝜑. (22) The observation error of beacon 𝐵ᵢ is the following: 𝜀ᵢ = 𝛼̂ᵢ − 𝛼ᵢ = 𝛼̂ᵢ − 𝛽ᵢ + 𝜑. (23) Let us use design parameters Δ𝛼 and Δ𝛼ᴰ,ᵢ, according to (1) and (2), where 𝐷ᵢ = √((𝑥 − 𝑥ᵢ)² + (𝑦 − 𝑦ᵢ)²), (24) and let us calculate inlier angle tolerance Δ𝛼ᵢ for beacon 𝐵ᵢ as follows: Δ𝛼ᵢ = 2(Δ𝛼 + Δ𝛼ᴰ,ᵢ).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Position errors for the proposed (a) and the reference (b) methods for simulated noisy measurements without outliers.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :Figure 6 :</head><label>56</label><figDesc>Figure 5: Error distribution of the proposed and the reference methods for simulated noisy measurements, without outliers. In the next experiments 1, 3, and 5 outlier measurements were included. The position errors in the search area, for 1 outlier, are shown in Fig. 6. The results for the reference method clearly show the dramatic effect of the outlier: most of the position estimates now have significant error, up to a few meters. The position errors of the proposed method did not change significantly, illustrating the effectiveness of the outlier detection and removal. The position error distributions are shown in Fig. 7, for 1, 3, and 5 simultaneous outliers. The reference method, as shown in Fig. 7(b), cannot provide meaningful estimates in the presence of outliers. Even a single outlier can destroy the estimate, and more outliers affect the estimation quality more severely. For the proposed method the quality of the estimate did not change significantly, as shown in Fig. 7(a). However, there is a slight shift towards larger errors when outliers are present. The reason is the following: when more outliers are present, fewer correct measurements can be used in the estimation process, so the accuracy of the estimation naturally decreases. But since no outliers are included in the estimation, the quality degradation is only minor.</figDesc><graphic coords="9,106.80,516.24,381.36,191.52" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Error distribution of the proposed (a) and the reference (b) methods for simulated noisy measurements and outliers.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head></head><label></label><figDesc>(a) in blue: the vehicle started from point A in the warehouse, exited the building, crossed a bumpy part of the pavement, and turned left to point B. Here the vehicle backed up and made another left turn to point C. From point C the vehicle entered the building on a straight trajectory to the final point D.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: (a) Map of the measurement environment. On the right-hand side there is a warehouse, the left-hand side shows the open-air area in front of it. The beacons are denoted by numbered crosses. The estimated vehicle route A-B-C-D is shown in blue. (b) Photo of the vehicle with the installed camera on top of it.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="2,154.08,298.32,286.56,194.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="11,73.68,71.76,446.88,226.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Entries of the coefficient matrix and of the right-hand-side vector 𝑏 of the linearized equation system (15).</figDesc></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Acknowledgements</head><p>We acknowledge the financial support of Széchenyi 2020 under project EFOP-3.6.1-16-2016-00015.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">References</head></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Toward the implementation of a universal angle-based optical indoor positioning system</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Bergen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">S</forename><surname>Schaal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Klukas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Holzman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Front. Optoelectron</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="116" to="127" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">VLC-based positioning system for an indoor environment using an image sensor and an accelerometer sensor</title>
		<author>
			<persName><forename type="first">P</forename><surname>Huynh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yoo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page">783</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Lookup: Robust and Accurate Indoor Localization Using Visible Light Communication</title>
		<author>
			<persName><forename type="first">G</forename><surname>Simon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zachár</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Vakulya</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Instrumentation and Measurement</title>
		<imprint>
			<biblScope unit="volume">66</biblScope>
			<biblScope unit="page" from="2337" to="2348" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Three-dimensional VLC positioning based on angle difference of arrival with arbitrary tilting angle of receiver</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE J. Sel. Areas Commun</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="8" to="22" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Gunshot airborne surveillance with rotary wing UAV-embedded microphone array</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Fernandes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">G</forename><surname>Serrenho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Apolinário</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L L</forename><surname>Ramos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page">4271</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Dead on Arrival: An Empirical Study of The Bluetooth 5.1 Positioning System</title>
		<author>
			<persName><forename type="first">M</forename><surname>Cominelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Patras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Gringoli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Workshop on Wireless Network Testbeds, Experimental Evaluation &amp; Characterization</title>
				<meeting><address><addrLine>WiNTECH</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="13" to="20" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">AnguLoc: Concurrent Angle of Arrival Estimation for Indoor Localization with UWB Radios</title>
		<author>
			<persName><forename type="first">M</forename><surname>Heydariaan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Dabirian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Gnawali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">16th International Conference on Distributed Computing in Sensor Systems (DCOSS)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="112" to="119" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Azimuth-Only Estimation for TDOA-based Direction Finding with Three-Dimensional Acoustic Array</title>
		<author>
			<persName><forename type="first">X</forename><surname>Cui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Instrumentation and Measurement</title>
		<imprint>
			<biblScope unit="volume">69</biblScope>
			<biblScope unit="page" from="985" to="994" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A New Three Object Triangulation Algorithm for Mobile Robot Positioning</title>
		<author>
			<persName><forename type="first">V</forename><surname>Pierlot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Van Droogenbroeck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Robotics</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="566" to="577" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">AOA-Based Three-Dimensional Positioning and Tracking Using the Factor Graph Technique</title>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Symmetry</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">1400</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A Set-membership Approach for Visible Light Positioning with Fluctuated RSS Measurements</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Short Paper Proceedings of the Tenth International Conference on Indoor Positioning and Indoor Navigation - Work-in-Progress Papers (IPIN-WiP 2019)</title>
				<meeting><address><addrLine>Pisa, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="275" to="282" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Robust acoustic source localization with TDOA based RANSAC algorithm</title>
		<author>
			<persName><forename type="first">P</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Ma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ICIC - 5th Int. Conf. Emerging Intell. Comput. Technol. Appl.</title>
				<meeting>ICIC - 5th Int. Conf. Emerging Intell. Comput. Technol. Appl.<address><addrLine>Berlin, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="222" to="227" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Fischler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Bolles</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="381" to="395" />
			<date type="published" when="1981">1981</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Towards Robust VLC Beacon Identification in Camera Based Localization Systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rátosi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Simon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN)</title>
				<meeting>2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN)<address><addrLine>Pisa, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Real-Time Localization and Tracking using Visible Light Communication</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rátosi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Simon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)</title>
				<meeting>2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)<address><addrLine>Nantes, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Dilution of precision</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">B</forename><surname>Langley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">GPS World</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="52" to="59" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Fast Adaptive Acoustic Localization for Sensor Networks</title>
		<author>
			<persName><forename type="first">G</forename><surname>Vakulya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Simon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Instrumentation and Measurement</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="page" from="1820" to="1829" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
