=Paper=
{{Paper
|id=Vol-3097/paper27
|storemode=property
|title=Fault Tolerant Indoor Positioning Based on Azimuth Measurements
|pdfUrl=https://ceur-ws.org/Vol-3097/paper27.pdf
|volume=Vol-3097
|authors=Márk Rátosi,Gyula Simon
|dblpUrl=https://dblp.org/rec/conf/ipin/RatosiS21
}}
==Fault Tolerant Indoor Positioning Based on Azimuth Measurements ==
Márk Rátosi¹ and Gyula Simon²
¹ University of Pannonia, Egyetem út 10., 8200 Veszprém, Hungary
² Óbuda University, Budai út 45., 8000 Székesfehérvár, Hungary
Abstract
A novel robust positioning method is proposed for indoor 2-D localization, based on azimuth
measurements of beacons, deployed in known locations. Based on the measured azimuth and
the beacon positions, the location of the sensor is determined. The proposed method utilizes
the RANSAC algorithm to detect and reject outlier measurements, possibly present due to
incorrect beacon detections or reflections. The outlier detection is based on a realistic, location-
dependent measurement error model. The performance of the proposed method is illustrated
by simulations and real measurements.
Keywords
Position Estimation, Azimuth Estimation, Angle of Arrival, Angle Difference of Arrival,
RANSAC.
1. Introduction
Several indoor localization systems use angle of arrival (AOA) or angle difference of arrival
(ADOA) measurements, using light [1], [2], [3], [4], acoustic [5], or radio [6], [7] sources. In these
systems the beacons transmit signals, which are detected by the sensors; and the AOA values of the
beacons, or ADOA values of beacon pairs, are measured. These angle measurements are used to
estimate either the unknown location of the sensor (using multiple beacons, deployed at known
positions), or the unknown location of the beacon (using multiple sensors, deployed at known
positions).
The angles can be measured in 3-D, where both the azimuth and elevation of the beacon are
measured [4], or in 2-D, using only the azimuth values [3]. The azimuth-only estimation requires much
less computation [8], and is sufficient for applications where only the 2-D location is to be determined
and the elevation is irrelevant (e.g. when a fork-lift truck is tracked, see Fig. 1). The sensor’s direction,
however, must be known or measured in the azimuth-only method; the simplest solution being when
the sensor is facing upwards [3].
The estimation of the sensor position can be made by solving an equation system [9]; or in case of
redundant measurements, least squares methods [4], consensus-based approaches [3], or exhaustive
search methods were proposed [4]. The various estimation schemes provide good estimates when
enough measurements are available, and the measurements have reasonably small error. Measurements
with large error (i.e. outliers), usually resulting from incorrect detections or reflections, may cause large
error in the position estimate. Such outliers must be removed from the data used in the estimation
process, to avoid incorrect estimates. Popular methods include model-based approaches (using e.g.
Kalman-filters) [10], statistical methods [11], consistency-based approaches [3], and Random Sample
Consensus (RANSAC) [12], [13]. RANSAC is popular because it iteratively adds inliers to and
excludes outliers from a small starting set of measurements, thus providing robust estimates with a
reasonable amount of computation.
IPIN 2021 WiP Proceedings, November 29 -- December 2, 2021, Lloret de Mar, Spain
EMAIL: ratosi@dcs.uni-pannon.hu (A. 1); simon.gyula@amk.uni-obuda.hu (A. 2);
ORCID: 0000-0002-6414-2775 (A. 1); 0000-0003-0296-3399 (A. 2);
© 2020 Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
In this paper we propose a robust RANSAC-based solution for azimuth-only AOA/ADOA systems.
The proposed solution requires angle measurements of at least 3 beacons. With the minimal number of
beacons an algebraic solution is provided, which is naturally sensitive to outliers. When redundant (i.e.
four or more) measurements are present, RANSAC is used to select the largest possible consistent
beacon set to calculate a robust position estimate. A practical error model for cameras is proposed to
determine the position-dependent inlier angle tolerance. The performance of the proposed solution is
illustrated by simulation examples and practical measurements as well.
2. Proposed Localization Method
2.1. System architecture
In the proposed system the position of a camera is estimated; the camera may be deployed e.g. on
top of a fork-lift truck, as shown in Fig. 1. The camera, equipped with a fisheye lens, is facing upwards,
thus the sensor’s normal vector is in line with the coordinate system’s Z axis. The beacons (e.g. blinking
LEDs) are deployed at known positions (e.g. along the walls). The beacons continuously transmit their
unique IDs, using visible light communication techniques (in the test system the RUPSOOK protocol
was used [14]). The beacons are detected on the camera’s image stream.
Figure 1: The architecture of the localization system. Blinking beacons are detected on the image of
the fish-eye camera. The camera location is determined using the known beacon positions.
The world coordinate system is K_W with axes X_W, Y_W, Z_W, while the camera coordinate system is K_C
with axes X_C, Y_C, Z_C, where Z_W and Z_C are parallel. The origin of K_C is the unknown camera position
C* = (x_C, y_C, z_C), and its orientation is the unknown angle φ_C, as shown in Fig. 2. Since Z_C is parallel
with Z_W (the camera is facing upwards), the 2-D beacon directions can be used, as shown in Fig. 2,
which greatly reduces the computational cost [15]. Notice that we assumed that the axis of the camera
is parallel with the Z axis. This can easily be satisfied in certain applications (see e.g. Fig. 1), or the sensor
inclination can be measured and compensated for, e.g. [3].
In coordinate system K_W let us denote the known coordinates of beacon B_i by P_i* = (x_i, y_i, z_i), and
the orthogonal projections of P_i* and C* to plane X-Y by P_i = (x_i, y_i) and C = (x_C, y_C), respectively.
The measured direction of beacon B_i on the X-Y plane in coordinate system K_C is denoted by α_i, as
shown in Fig. 2. Thus the inputs of the estimation process are measurements α_i and beacon positions
P_i, i = 1, 2, …, N_B, where N_B is the number of beacons, while the outputs are position C and orientation
φ_C (in world coordinate system K_W).
In real scenarios bad measurements are not uncommon (e.g. due to reflections), resulting in outlier
measurement values. Also, beacons may be unintentionally relocated, thus certain values P_i may be
wrong and the measured angles are inconsistent with them, practically behaving like outliers. To
achieve a fault tolerant system, it is essential that the estimation process be resilient to outliers in the
set of input values. In the proposed solution RANSAC is used to provide robust estimates.
(a) (b)
Figure 2: Angle measurement using the camera image: (a) 3D, (b) 2D view.
2.2. Error Model
The beacons are detected on the camera's image stream. On the left-hand side of Fig. 3 a bird's-eye
view of the area is shown, with the camera C, beacons B_1, B_2, B_3, B_4, and an obstacle partly covering
one of the beacons. On the right-hand side the fish-eye camera image is shown, with two possible sources of
detection error.
First, the center estimate of a beacon image may be inaccurate, as illustrated in Fig. 3. This center
detection error ΔC is small (around one pixel), and if all beacons are approximately on the horizon, the
angle error is independent of the distance between the beacon and the camera. We model this error as a
random additive error α_i^(C) with uniform distribution between −Δα_C and +Δα_C, where Δα_C is constant.
If the radius of the horizon on the camera image is R pixels, then the value of Δα_C can be approximated
as follows:
Δα_C = atan2(ΔC, R), (1)
where atan2 is the two-argument arctangent function.
The second type of detection error may happen when a beacon is partially obscured (see the covered beacon in Fig. 3).
In this case the camera image contains a misshapen beacon, the center of which may have an offset with
regard to the true center. The error is the largest when almost the whole beacon is covered; in this case
the center of the observed beacon may appear as far as W away from the true position, where W is the horizontal
size of the beacon. The resulting angle error may be higher if the beacon is closer to the camera (i.e. the
beacon image is larger). We model this angle error as a random additive error α_i^(V), with uniform
distribution between −Δα_{V,i} and +Δα_{V,i}, where Δα_{V,i} is the maximal angle error as follows:
Δα_{V,i} = atan2(W_i, D_i), (2)
where W_i is the horizontal size of beacon B_i, and D_i is the distance between B_i and the camera.
The measured angle of beacon B_i is modelled as follows:
α_i = α_i^(0) + α_i^(C) + α_i^(V), (3)
where α_i^(0) is the true angle of beacon B_i.
(a) (b)
Figure 3: Detection error sources. (a) Bird’s-eye view of the detection area. (b) Fisheye camera image.
The detected and true beacon centers are denoted by red and green dots.
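For illustration, the error model of (1)-(3) can be sketched as follows (a minimal sketch; the function and variable names are ours, angles are in radians):

```python
import math
import random

def angle_error_bounds(delta_c_px, r_px, w_i, d_i):
    """Maximal angle errors of the two error sources, eqs. (1)-(2)."""
    d_alpha_c = math.atan2(delta_c_px, r_px)  # (1): center detection error
    d_alpha_v = math.atan2(w_i, d_i)          # (2): partial coverage error
    return d_alpha_c, d_alpha_v

def measured_angle(alpha_true, d_alpha_c, d_alpha_v):
    """Eq. (3): true angle plus two independent uniform error terms."""
    return (alpha_true
            + random.uniform(-d_alpha_c, d_alpha_c)
            + random.uniform(-d_alpha_v, d_alpha_v))
```

With ΔC = 1 pixel and R = 150 pixels, eq. (1) gives Δα_C = atan2(1, 150) ≈ 0.38°, matching the value used in the simulations of Section 3.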
2.3. Overview of the RANSAC-based localization process
In this section we briefly summarize the operation of the proposed localization algorithm. The
technical details will be discussed later; the summary contains references to sections with the detailed
discussion. The pseudo-code of the proposed algorithm is the following:
Repeat Steps 1-4 N_T times. The number N_T of iterations is discussed in Section 2.4.
Step 1: Select 3 beacons randomly. Let the selected beacon triplet be BT_j.
Step 2: Calculate initial position estimate C_j and orientation estimate φ_j using BT_j. The
estimates are calculated from the positions of beacons BT_j and the measured angles of
these beacons. The details of the estimation are presented in Section 2.5.
Step 3: For all the beacons, check whether their measurements are consistent with C_j and φ_j.
The set of supporter beacons of BT_j (i.e. the inliers) is denoted by SUP_j. The proposed
inlier-outlier classification method uses a position-dependent threshold; thus the effect
of the measurement errors will be location-independent. The method is discussed in
Section 2.6.
Step 4: Calculate the number of supporter beacons NS_j = |SUP_j| and their mean squared angle
measurement error E_j, as described in Section 2.6.
End Repeat
Step 5: From the N_T experiments select the triplet with the highest value NS_j. If multiple triplets
exist with the same NS_j, then select the one with the smallest value E_j.
Step 6: Using all of the supporter beacons in SUP_j of the winning triplet, calculate refined estimates C_R and φ_R, starting
from C_j, as described in Section 2.7.
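The steps above can be sketched as a small RANSAC skeleton (a minimal sketch; the callables solve_triplet, classify, and refine stand for the procedures of Sections 2.5-2.7, and all names are ours):

```python
import random

def ransac_localize(n_beacons, solve_triplet, classify, refine, n_t):
    """Steps 1-6 (sketch): try n_t random triplets and keep the one with
    the most supporters, breaking ties by the smallest error E_j."""
    best = None
    ids = list(range(n_beacons))
    for _ in range(n_t):
        triplet = tuple(random.sample(ids, 3))   # Step 1
        est = solve_triplet(triplet)             # Step 2 (Section 2.5)
        if est is None:                          # degenerate geometry, skip
            continue
        c0, phi0 = est
        sup, e_j = classify(c0, phi0)            # Steps 3-4 (Section 2.6)
        key = (len(sup), -e_j)                   # more supporters, then smaller E_j
        if best is None or key > best[0]:        # Step 5
            best = (key, sup, c0, phi0)
    if best is None:
        return None
    _, sup, c0, phi0 = best
    return refine(sup, c0, phi0)                 # Step 6 (Section 2.7)
```

The tuple key implements the Step 5 selection rule: lexicographic comparison first maximizes the supporter count and then minimizes E_j.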
2.4. Number of iterations
The estimation process uses random beacon triplets to find an initial position estimate. For the
estimator algorithm to provide a good estimate, the measurement must fulfill the following two
conditions:
a) All three initial beacons must be detected correctly, i.e. with small measurement error. If some
of the initial measurements are outliers, the initial estimate will be wrong.
b) According to the phenomenon called geometric dilution of precision (GDOP), some setups
are more sensitive to measurement errors than others [16]. Thus the geometry of the initial
beacons and the camera must allow a good estimate when the measurements are correct (i.e.
the measurement error is small, according to condition (a)).
For the estimator algorithm to provide a reliable estimate, we need at least one good initial triplet,
fulfilling both requirements, among the N_T experiments. Since beacon triplets BT_j are chosen randomly,
it is possible to determine the probability Pr of having at least one good beacon triplet among the
randomly chosen N_T triplets. In the reverse direction, we can use this design parameter Pr to
determine the necessary number of iterations N_T [17].
Let us denote the number of beacons and the total number of experiments by N_B and N_T,
respectively. Let us suppose that out of the N_B measurements N_inlier are inliers. The probability of choosing
a reliable triplet is
Pr(BT_j is reliable) = (N_inlier/N_B) · ((N_inlier − 1)/(N_B − 1)) · ((N_inlier − 2)/(N_B − 2)) · P*, (4)
where P* is the conditional probability of providing a good estimate if all of the initial measurements
are inliers [17].
The probability Pr of having at least one reliable triplet out of N_T trials can be expressed as
follows:
Pr = 1 − (1 − Pr(BT_j is reliable))^N_T. (5)
Thus the number of necessary trials, given probability Pr and P*, is the following:
N_T = log(1 − Pr) / log(1 − Pr(BT_j is reliable)). (6)
In practice the number of inlier/outlier measurements is usually unknown. However, it is usually
possible to give a safe upper bound for the number of outliers (e.g. if there is rarely more than one outlier then
a safe assumption is that N_inlier ≥ N_B − 2, and an even safer assumption is that N_inlier ≥ N_B − 3). If
the number of inliers is at least N_inlier then the calculated N_T, according to (6), provides an upper bound
for the required number of trials.
The probability Pr is a design parameter, set close to 1 (e.g. 0.999). Probability P* can be
estimated using simulations, using the inlier angle tolerance value as input parameter [17].
For a small number of beacons, it is feasible to use all the possible beacon triplets. Even for 15 visible
beacons, the number of triplets is only 455 (see the speed measurements in Section 3.2). However, for a
higher number of beacons (6) may really be helpful: e.g. for 50 visible beacons 19600 trials would be
necessary, which is not feasible for real-time applications. In the tests we had a small number of beacons
and thus used all possible triplets.
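Equation (6) can be evaluated as follows; the explicit product form for the probability of drawing an all-inlier triplet is our expansion of (4):

```python
import math

def num_trials(pr, p_star, n_b, n_inlier):
    """Eq. (6): number of trials so that with probability pr at least one
    drawn triplet is all-inlier AND yields a good estimate (prob. p_star)."""
    # probability that a uniformly drawn triplet consists of inliers only,
    # multiplied by the conditional success probability P*, eq. (4)
    q = (n_inlier / n_b) * ((n_inlier - 1) / (n_b - 1)) \
        * ((n_inlier - 2) / (n_b - 2)) * p_star
    if q >= 1.0:          # every triplet succeeds: one trial suffices
        return 1
    return math.ceil(math.log(1.0 - pr) / math.log(1.0 - q))
```

For example, with N_B = 12, N_inlier = 10, P* = 1, and Pr = 0.999, the bound evaluates to 9 trials, far fewer than the 220 exhaustive triplets.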
2.5. Calculation of the initial estimate
For the sake of simplicity, and without loss of generality, in iteration step j let us denote the selected
beacons by BT_j = {B_1, B_2, B_3}. Our goal is to determine camera position C_j and orientation φ_j. For
simpler notation, from now on we will omit iteration index j and denote the estimates by C_0 and φ_0. Let us use
the following notations:
b_i = P_i − C, (7)
θ_{i,j} = ∠P_i C P_j = α_j − α_i, (8)
where i, j = 1, 2, 3. Let us choose two beacons (e.g. B_1 and B_2), and suppose that θ_{1,2} ≠ 0 and θ_{1,2} ≠ π.
Let us express the square of their distance d_{1,2}, using the law of cosines, as follows:
d_{1,2}² = |b_1|² + |b_2|² − 2 |b_1| |b_2| cos θ_{1,2}. (9)
Using the fact that b_1 and b_2 are on the same plane X-Y, the equality b_1 × b_2 = n_Z |b_1| |b_2| sin θ_{1,2}
holds, where n_Z is the unit vector in the direction of axis Z. In this case (9) takes the following form:
d_{1,2}² = |b_1|² + |b_2|² − 2 (n_Z · (b_1 × b_2)) cos θ_{1,2} / sin θ_{1,2}. (10)
Substituting d_{1,2}² = (x_1 − x_2)² + (y_1 − y_2)² and |b_i|² = (x_i − x)² + (y_i − y)², and using
notation c_{1,2} = cot θ_{1,2}, we get the following equation:
x_1 x_2 + y_1 y_2 + (x_2 y_1 − x_1 y_2) c_{1,2} =
x (x_1 + x_2 + (y_1 − y_2) c_{1,2}) + y (y_1 + y_2 + (x_2 − x_1) c_{1,2}) − x² − y². (11)
A similar result is obtained if beacon pair B_2 and B_3 is used:
x_2 x_3 + y_2 y_3 + (x_3 y_2 − x_2 y_3) c_{2,3} =
x (x_2 + x_3 + (y_2 − y_3) c_{2,3}) + y (y_2 + y_3 + (x_3 − x_2) c_{2,3}) − x² − y². (12)
By subtracting (12) from (11) the squared terms vanish and we get a linear equation, which for general
indices (i, j, k) has the following form:
−x_j ξ_{k,i} − y_j υ_{k,i} − ω_{i,j} c_{i,j} + ω_{j,k} c_{j,k} =
x (−ξ_{k,i} + υ_{i,j} c_{i,j} − υ_{j,k} c_{j,k}) + y (−υ_{k,i} − ξ_{i,j} c_{i,j} + ξ_{j,k} c_{j,k}), (13)
where ξ_{i,j} = x_i − x_j, υ_{i,j} = y_i − y_j, and ω_{i,j} = x_i y_j − x_j y_i.
Now let us calculate (13), by substituting (i, j, k) = (1,2,3), (2,3,1), and (3,1,2). We get the
following overdetermined linear equation system:
A · [x, y]^T = b, (14)
where
A = [ −ξ_{3,1} + υ_{1,2} c_{1,2} − υ_{2,3} c_{2,3}    −υ_{3,1} − ξ_{1,2} c_{1,2} + ξ_{2,3} c_{2,3}
      −ξ_{1,2} + υ_{2,3} c_{2,3} − υ_{3,1} c_{3,1}    −υ_{1,2} − ξ_{2,3} c_{2,3} + ξ_{3,1} c_{3,1}
      −ξ_{2,3} + υ_{3,1} c_{3,1} − υ_{1,2} c_{1,2}    −υ_{2,3} − ξ_{3,1} c_{3,1} + ξ_{1,2} c_{1,2} ], (15)
and
b = [ −x_2 ξ_{3,1} − y_2 υ_{3,1} − ω_{1,2} c_{1,2} + ω_{2,3} c_{2,3}
      −x_3 ξ_{1,2} − y_3 υ_{1,2} − ω_{2,3} c_{2,3} + ω_{3,1} c_{3,1}
      −x_1 ξ_{2,3} − y_1 υ_{2,3} − ω_{3,1} c_{3,1} + ω_{1,2} c_{1,2} ]. (16)
If no two of the three beacons are on the same line with C (i.e. sin θ_{1,2} ≠ 0, sin θ_{2,3} ≠ 0, and
sin θ_{3,1} ≠ 0) then c_{1,2}, c_{2,3}, and c_{3,1} are finite and any two rows can be selected from A and b to
construct A_R and b_R, resulting in an equation system with two equations and two unknowns. The initial
location estimate is the following:
C_0 = [x_0, y_0]^T = (A_R)^{−1} b_R. (17)
If one pair of beacons is on the same line with C (e.g. B_1 and B_2), then sin θ_{1,2} = 0. In this case
only one row is selected from A and b, the one which does not include the infinite term c_{1,2}. (E.g. if B_1, B_2,
and C are on the same line then c_{1,2} would be infinite, thus only the second row is used from A and b.)
In this case another equation must be constructed for A_R and b_R; from (13), using limit sin θ_{1,2} → 0, the
following equation can be obtained:
x υ_{1,2} − y ξ_{1,2} = −ω_{1,2}, (18)
and again (17) can be used to calculate the position estimate.
If all three beacons and C are on the same line (i.e. sin θ_{1,2} = sin θ_{2,3} = sin θ_{3,1} = 0) then the
position cannot be determined from this set of measurements. Also, there is no solution if all three
beacons and C are on the same circle (det(A_R) ≅ 0).
In practice the angle differences θ_{i,j} are checked before calculating A and b: if any of them is closer
to 0 or π than a small threshold then (18) is applied. If all measurements are close to 0 or π then the
triplet is ignored (since the position cannot be reliably determined), and the iteration is continued by
selecting a different beacon triplet.
Once the initial position is estimated, the initial orientation estimate can be calculated as follows.
Let us choose any of the three beacons, e.g. B_1. From point C_0 beacon B_1 is observed in K_W under angle
β_1 = atan2(y_1 − y_0, x_1 − x_0). (19)
The observed angle in K_C was α_1, thus the estimated orientation, according to Fig. 2, is the following:
φ_0 = β_1 − α_1. (20)
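Under the non-degenerate geometry assumption (no angle difference close to 0 or π), the initial estimate of (13)-(17) and (19)-(20) can be sketched as follows (a sketch only; the function and variable names are ours):

```python
import math

def initial_estimate(P, alpha):
    """Initial position/orientation from a beacon triplet, eqs. (13)-(20).
    P: three beacon positions (x_i, y_i); alpha: measured angles [rad].
    Assumes non-degenerate geometry (no theta_{i,j} close to 0 or pi)."""
    xs = [p[0] for p in P]
    ys = [p[1] for p in P]
    xi  = lambda i, j: xs[i] - xs[j]                  # xi_{i,j}
    ups = lambda i, j: ys[i] - ys[j]                  # upsilon_{i,j}
    om  = lambda i, j: xs[i] * ys[j] - xs[j] * ys[i]  # omega_{i,j}
    cot = lambda i, j: math.cos(alpha[j] - alpha[i]) / math.sin(alpha[j] - alpha[i])
    A, b = [], []
    for i, j, k in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):   # rows of (15)-(16)
        cij, cjk = cot(i, j), cot(j, k)
        A.append((-xi(k, i) + ups(i, j) * cij - ups(j, k) * cjk,
                  -ups(k, i) - xi(i, j) * cij + xi(j, k) * cjk))
        b.append(-xs[j] * xi(k, i) - ys[j] * ups(k, i)
                 - om(i, j) * cij + om(j, k) * cjk)
    # eq. (17): solve the best-conditioned 2x2 subsystem (largest |det|)
    det2 = lambda r, s: A[r][0] * A[s][1] - A[r][1] * A[s][0]
    r, s = max(((0, 1), (0, 2), (1, 2)), key=lambda p: abs(det2(*p)))
    d = det2(r, s)
    x0 = (b[r] * A[s][1] - b[s] * A[r][1]) / d
    y0 = (A[r][0] * b[s] - A[s][0] * b[r]) / d
    beta1 = math.atan2(ys[0] - y0, xs[0] - x0)        # eq. (19)
    phi0 = beta1 - alpha[0]                           # eq. (20)
    return (x0, y0), phi0
```

Choosing the row pair with the largest determinant magnitude is our way of implementing "any two rows can be selected" while avoiding a badly conditioned pair.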
2.6. Location-dependent inlier-outlier classification
In the previous step the initial position C_0 = (x_0, y_0) and the initial orientation φ_0 were estimated. If
the initial estimates are correct then any beacon B_i should be seen from ideal direction β_i in the world
coordinate system:
β_i = atan2(y_i − y_0, x_i − x_0). (21)
In coordinate system K_C the ideal angle is
α̂_i = β_i − φ_0. (22)
The observation error of beacon B_i is the following:
ε_i = α_i − α̂_i = α_i − β_i + φ_0. (23)
Let us use design parameters Δα_C and Δα_{V,i}, according to (1) and (2), where
D_i = sqrt((x_i − x_0)² + (y_i − y_0)²), (24)
and let us calculate the inlier angle tolerance Δα_i for beacon B_i as follows:
Δα_i = 2 (Δα_C + Δα_{V,i}). (25)
Using (23) and (25), the location-dependent inlier/outlier classification is the following:
B_i ∈ SUP_j (inlier): if |ε_i| ≤ Δα_i,
B_i ∉ SUP_j (outlier): if |ε_i| > Δα_i, (26)
where SUP_j is the set of supporter beacons for triplet BT_j. Notice that the beacons in BT_j are naturally
elements of SUP_j. The mean squared angle measurement error E_j [4], using the consistent measurements
only, is the following:
E_j = (1/NS_j) Σ_{B_i ∈ SUP_j} ε_i². (27)
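The classification of (21)-(27) can be sketched as follows (names are ours; the wrapping of the angle error to (−π, π] is our addition, not spelled out in the text):

```python
import math

def classify(beacons, alpha, c0, phi0, d_alpha_c, widths):
    """Location-dependent inlier/outlier test, eqs. (21)-(27).
    beacons: (x_i, y_i) list; alpha: measured angles [rad]; c0, phi0:
    initial estimates; d_alpha_c: eq. (1); widths: beacon sizes W_i."""
    x0, y0 = c0
    sup, sq = [], []
    for i, ((bx, by), a_i, w_i) in enumerate(zip(beacons, alpha, widths)):
        beta_i = math.atan2(by - y0, bx - x0)                 # (21)
        eps_i = a_i - beta_i + phi0                           # (23)
        eps_i = math.atan2(math.sin(eps_i), math.cos(eps_i))  # wrap to (-pi, pi]
        d_i = math.hypot(bx - x0, by - y0)                    # (24)
        tol = 2.0 * (d_alpha_c + math.atan2(w_i, d_i))        # (25), using (2)
        if abs(eps_i) <= tol:                                 # (26)
            sup.append(i)
            sq.append(eps_i ** 2)
    e_j = sum(sq) / len(sq) if sq else float("inf")           # (27)
    return sup, e_j
```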
2.7. Refined location estimate
Let us construct error function E(x, y) as follows: if the camera position were (x, y) then beacon
B_i would be observed in direction β_i(x, y) in coordinate system K_W:
β_i(x, y) = atan2(y_i − y, x_i − x). (28)
From the measurements α_i of the consistent beacons, the orientation estimate is refined as follows:
φ_R(x, y) = (1/NS) Σ_{B_i ∈ SUP} (β_i(x, y) − α_i), (29)
where SUP is the supporter set of the winning triplet and NS = |SUP|. Thus the observation error of
beacon B_i, provided the camera position is (x, y), is
ε_i(x, y) = α_i − atan2(y_i − y, x_i − x) + φ_R(x, y), (30)
and the mean squared error function [4], similarly to (27), is the following:
E(x, y) = (1/NS) Σ_{B_i ∈ SUP} ε_i²(x, y). (31)
The refined position estimate C_R is at the minimum of E(x, y):
C_R = (x_R, y_R) = argmin_{(x,y)} E(x, y), (32)
and the refined orientation estimate φ_R is the following:
φ_R = φ_R(x_R, y_R). (33)
The location estimate is calculated by a gradient search on the error function E(x, y), starting from the
initial position estimate (x_0, y_0). The search finds a (possibly local) minimum (x_R, y_R) near the initial
estimate. It is not guaranteed that the global minimum of (31) is found, but since the selected
measurements of the beacons in SUP are consistent and the search is started from a consistent initial
estimate (x_0, y_0), the iterative search in practice quickly and accurately finds the correct estimate,
usually close to the initial value (x_0, y_0).
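The refinement of (28)-(33) can be sketched as follows; a simple compass (pattern) search stands in for the gradient search described above, and all names are ours:

```python
import math

def refine(beacons, alpha, sup, c0, step=0.5, tol=1e-7):
    """Refined estimate, eqs. (28)-(33). A compass search stands in for
    the gradient search; both descend the error surface E(x, y)."""
    def phi_r(x, y):   # (29): orientation consistent with position (x, y)
        return sum(math.atan2(beacons[i][1] - y, beacons[i][0] - x) - alpha[i]
                   for i in sup) / len(sup)
    def E(x, y):       # (30)-(31): mean squared observation error
        ph = phi_r(x, y)
        return sum((alpha[i]
                    - math.atan2(beacons[i][1] - y, beacons[i][0] - x)
                    + ph) ** 2 for i in sup) / len(sup)
    x, y = c0
    e = E(x, y)
    s = step
    while s > tol:     # move along axes while it helps, else shrink the step
        for dx, dy in ((s, 0.0), (-s, 0.0), (0.0, s), (0.0, -s)):
            if E(x + dx, y + dy) < e:
                x, y = x + dx, y + dy
                e = E(x, y)
                break
        else:
            s /= 2.0
    return (x, y), phi_r(x, y)   # (32)-(33)
```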
3. Simulation results
3.1. Test environment
The tests were performed in a simulated environment, shown in Fig. 4. The size of the simulated
room was 22 m × 22 m, where 12 beacons were deployed in the positions shown in Fig. 4. Performance
tests were made on a test grid containing 23×23 points, the grid coordinates being integer numbers in
meters, between −11 m and +11 m.
The measurements were generated as follows. In each test point the ideal angles were calculated,
then a random error between −0.4° and +0.4° was added to simulate the center detection error, which
is equivalent to Δα_C = 0.4°. This value corresponds to the case when the distance between the beacon's
image and the center of the camera image is ~150 pixels and the center detection error is ΔC = 1 pixel. Also,
a potential coverage error was simulated with beacon width value W = 3 cm, according to (2). In each
test point 20 independent noise realizations (i.e. measurement errors α_i^(C) + α_i^(V), according to (3))
were generated.
The outliers were generated as follows: When one outlier was present, a random beacon was selected
and a random angle measurement error in the range of ±[10°, 170°] was added. When 𝐾 ≥ 2 outliers
were present then 𝐾 beacons were randomly selected and their generated measurements were shuffled.
This error simulates multiple incorrect beacon identifications and the resulting false measurements.
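The outlier generation described above can be sketched as follows (our interpretation: a single outlier gets a random offset with magnitude between 10° and 170°, while for K ≥ 2 the measurements of the selected beacons are shuffled among each other):

```python
import math
import random

def add_outliers(angles, k, rng=random):
    """Test-case outlier generation: one outlier is a random angular offset
    of magnitude in [10 deg, 170 deg]; for k >= 2 the measurements of k
    randomly chosen beacons are shuffled among each other."""
    a = list(angles)
    if k == 1:
        i = rng.randrange(len(a))
        a[i] += rng.choice((-1, 1)) * math.radians(rng.uniform(10.0, 170.0))
    elif k >= 2:
        idx = rng.sample(range(len(a)), k)      # beacons to corrupt
        vals = [a[i] for i in idx]
        rng.shuffle(vals)                        # swap their measurements
        for i, v in zip(idx, vals):
            a[i] = v
    return a
```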
The proposed method was compared with the method of exhaustion (MEX), proposed in [4], applied
to the 2-D case. The MEX method utilizes error function E_MEX, which is similar to the error function of
the proposed method:
E_MEX(x, y) = (1/N_B) Σ_{i=1}^{N_B} ε_i²(x, y). (34)
MEX uses an exhaustive (brute-force) search on a grid and is guaranteed to find the optimal position
on the search grid. In the tests we used a search grid of size 1 cm for MEX to find the smallest error
function value. On the other hand, the proposed algorithm is not restricted to a search grid, but finds
the solution anywhere on the search plane, without any guarantee of a global optimum. Since the error
functions are similar (see (31) and (34)), the expected behavior of the two algorithms is the same when
there are no outliers.
Since the number of beacons is low (N_B = 12), in the tests all possible beacon triplets were used,
i.e. N_T = N_B(N_B − 1)(N_B − 2)/6 = 220.
3.2. Simulation results
In the first test, measurement noise was added to the ideal angle measurements, but no outliers were
present. Fig. 4 shows the position errors for the proposed and the reference methods. The performances
of the algorithms are very similar, as was expected. The error is small, mostly below 10 cm. At certain
points of the search space slightly higher errors can be observed: this is because of the poor GDOP;
here eight of the twelve beacons are on the same circle, while the other four beacons are also close to
this circle.
(a) (b)
Figure 4: Position errors for the proposed (a) and the reference (b) methods for simulated noisy
measurements without outliers.
To allow a detailed analysis of the errors, the error distributions are shown in Fig. 5, using all of the
experiments (23×23×20 = 10580 test runs). The behavior of the two algorithms is apparently quite
similar, according to the expectations. The average error was 2.65 cm and 2.67 cm for the proposed and
reference methods, respectively. The slight difference is probably due to the fact that the reference method
was evaluated on a 1 cm grid, while the proposed method provides results on the continuous plane.
Figure 5: Error distribution of the proposed and the reference methods for simulated noisy
measurements, without outliers.
In the next experiments 1, 3, and 5 outlier measurements were included. The position errors in the
search area, for 1 outlier, are shown in Fig. 6. The results for the reference method clearly show the
dramatic effect of the outlier: most of the position estimates have significant error now, up to a few
meters. The position errors of the proposed method did not change significantly, illustrating the
effectiveness of the outlier detection and removal.
The position error distributions are shown in Fig. 7, for 1, 3, and 5 simultaneous outliers. The
reference method, as shown in Fig. 7(b), cannot provide meaningful estimates in the presence of outliers.
Even a single outlier can destroy the estimate, but more outliers affect the estimation quality more
severely.
For the proposed method the quality of the estimate did not change significantly, as shown in Fig.
7(a). However, there is a slight shift towards larger errors when outliers are present. The reason is the
following: when more outliers are present there are fewer correct measurements that can be used
in the estimation process, thus the accuracy of the estimation naturally decreases. But since there are no
outliers included in the estimation, the quality degradation is only minor.
(a) (b)
Figure 6: Position errors for the proposed (a) and the reference (b) methods for simulated noisy
measurements with 1 outlier.
The computational costs of the proposed and the reference methods were also compared. Both
methods were implemented in Matlab and run on a conventional laptop with an Intel Core i7-9750H
processor. The proposed method (with 12 beacons, using all possible beacon triplets) required 56
microseconds on average to compute an estimate, while for the reference MEX method the average
runtime was 278 milliseconds.
(a) (b)
Figure 7: Error distribution of the proposed (a) and the reference (b) methods for simulated noisy
measurements and outliers.
4. Measurement results
The test measurements were made inside an industrial warehouse and in front of the warehouse.
The map of the test area is shown in Fig. 8 (a). On the right-hand side a part of the warehouse is visible,
where nine beacons were deployed along the walls and on the shelves. There were another six beacons
deployed outside the building on the walls and poles. The camera was deployed on top of the vehicle,
as shown in Fig. 8 (b). An estimated path of the vehicle is shown in Fig. 8 (a) in blue: the vehicle started
from point A in the warehouse, exited the building, crossed a bumpy part of the pavement and turned
left to point B. Here the vehicle backed and made another left turn to point C. From point C the vehicle
entered the building on a straight trajectory to the final point D.
During the measurements the number of visible beacons varied from 3 to 13. Apart from the bumpy
pavement, where the camera swayed significantly on top of the vehicle, the trajectory estimate is
smooth and contains variations of a few centimeters only. During the test, the algorithm detected outliers
(due to reflecting surfaces in the warehouse) at three different path segments, in a total of ~550 image
frames, as marked in Fig. 8(a). The outliers had no significant effect on the position estimates, which
clearly illustrates the fault tolerance of the proposed system.
5. Summary
A robust RANSAC-based solution, for 2-D position estimation using azimuth-only angle
measurements, was proposed. The proposed method uses a practical error model and a corresponding
position-dependent inlier angle tolerance to detect outliers. The initial estimate is calculated by a
linearized equation system, using measurements from 3 beacons. The number of supporter beacons is
increased incrementally using RANSAC. The best initial estimate (with the largest set of supporters
and the smallest error function value) is used as the starting point of a gradient search to find the final
position estimate; in the search only those measurements are used which are consistent with the winning
initial estimate.
(a) (b)
Figure 8: (a) Map of the measurement environment. On the right-hand side there is a warehouse, the
left-hand side shows the open-air area in front of it. The beacons are denoted by numbered crosses.
The estimated vehicle route A-B-C-D is shown in blue. (b) Photo of the vehicle with the installed
camera on top of it.
The performance of the proposed method was illustrated by simulation examples. It was shown that
the proposed method and the reference MEX method have essentially the same accuracy when no outliers
are present. This is due to the similar cost functions used by the two methods. However, in the presence
of outliers, the reference method provided no usable results, since it used the bad measurements as well
for the estimation, generating large errors. The proposed method was not sensitive to outliers: its
performance degradation was minor, even when as many as 5 simultaneous outliers were present.
The computational cost of the proposed method is low: in the realistic test cases the computational
time of one position estimate was as low as 56 microseconds, on a conventional laptop. The speed of
the reference method, due to its exhaustive (brute-force) nature, was more than three orders of
magnitude lower.
The performance of the proposed method was tested using real measurements as well, in an
industrial warehouse. During the tests, the outliers had no effect on the position estimates, allowing
positioning with a few centimeters of error only.
6. Acknowledgements
We acknowledge the financial support of Széchenyi 2020 under the EFOP-3.6.1-16-2016-00015.
7. References
[1] M.H. Bergen, F.S. Schaal, R. Klukas, J. Cheng, J.F. Holzman, Toward the implementation of a
universal angle-based optical indoor positioning system. Front. Optoelectron. 11 (2018): 116–127.
[2] P. Huynh, M. Yoo, VLC-based positioning system for an indoor environment using an image
sensor and an accelerometer sensor. Sensors 16 (2016): 783.
[3] G. Simon, G. Zachár, G. Vakulya, Lookup: Robust and Accurate Indoor Localization Using
Visible Light Communication. IEEE Transactions on Instrumentation and Measurement 66 (2017):
2337-2348.
[4] B. Zhu, J. Cheng, Y. Wang, J. Yan, J. Wang, Three-dimensional VLC positioning based on angle
difference of arrival with arbitrary tilting angle of receiver. IEEE J. Sel. Areas Commun. 36 (2018):
8–22.
[5] F.G. Serrenho, J.A. Apolinário, A.L.L. Ramos, R.P. Fernandes, Gunshot airborne surveillance with
rotary wing UAV-embedded microphone array. Sensors 19 (2019): 4271.
[6] M. Cominelli, P. Patras, F. Gringoli, Dead on Arrival: An Empirical Study of The Bluetooth 5.1
Positioning System. International Workshop on Wireless Network Testbeds, Experimental
Evaluation & Characterization (WiNTECH 2019), pp. 13-20.
[7] M. Heydariaan, H. Dabirian, O. Gnawali, AnguLoc: Concurrent Angle of Arrival Estimation for
Indoor Localization with UWB Radios. 16th International Conference on Distributed Computing
in Sensor Systems (DCOSS), 2020, pp. 112-119.
[8] X. Cui, K. Yu, S. Zhang, H. Wang, Azimuth-Only Estimation for TDOA-based Direction Finding
with Three-Dimensional Acoustic Array. IEEE Transactions on Instrumentation and Measurement
69 (2020): 985-994.
[9] V. Pierlot, M. Van Droogenbroeck, A New Three Object Triangulation Algorithm for Mobile
Robot Positioning. IEEE Transactions on Robotics, 30 (2014): 566-577.
[10] H. Zhang, Z. Zhang, AOA-Based Three-Dimensional Positioning and Tracking Using the Factor
Graph Technique. Symmetry, 12 (2020), 1400.
[11] Z. Wang, et al., A Set-membership Approach for Visible Light Positioning with Fluctuated RSS
Measurements. Short Paper Proceedings of the Tenth International Conference on Indoor
Positioning and Indoor Navigation - Work-in-Progress Papers (IPIN-WiP 2019), Pisa, Italy, 2019,
pp. 275-282.
[12] P. Li, X. Ma, Robust acoustic source localization with TDOA based RANSAC algorithm. In: Proc.
ICIC––5th Int. Conf. Emerging Intell.Comput. Technol. Appl., Berlin, Germany, 2009, pp. 222–
227.
[13] M.A. Fischler, R.C. Bolles, Random sample consensus: A paradigm for model fitting with
applications to image analysis and automated cartography. Commun. ACM, 24 (1981): 381–395.
[14] M. Rátosi, G. Simon, Towards Robust VLC Beacon Identification in Camera Based Localization
Systems. In: Proc. 2019 International Conference on Indoor Positioning and Indoor Navigation
(IPIN), Pisa, Italy, 2019, pp. 1-8.
[15] M. Rátosi, G. Simon, Real-Time Localization and Tracking using Visible Light Communication.
In: Proc. 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN),
Nantes, France, 2018, pp. 1-8.
[16] R.B. Langley, Dilution of precision. GPS World 10 (1999): 52–59.
[17] G. Vakulya, G. Simon, Fast Adaptive Acoustic Localization for Sensor Networks. IEEE
Transactions on Instrumentation and Measurement 60 (2011): 1820-1829.