<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Easily Identifiable Emission Waveform Design in Visible Light Positioning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Qing Wang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Deyue Zou</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Information and Communication Engineering, Dalian University of Technology</institution>
          ,
          <addr-line>Dalian</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <abstract>
<p>Visible light positioning (VLP) based on an image sensor uses light sources to emit ID (identity) signals containing the positions of the light sources and uses a mobile phone as the receiver. Stripe images carrying the ID information are acquired through the rolling shutter effect of the Complementary Metal Oxide Semiconductor (CMOS) image sensor, and the geometric relationship between the object points and the image points of the light sources is then used to complete the positioning. Quantization error is inevitable when obtaining the imaging coordinates of the fringe images. Based on this, we design an easily identifiable light source signal mechanism, which consists of two parts: an information sequence and an all-bright sequence. We use different methods to obtain the imaging coordinates of the light sources and fit the deviation between the ideal imaging coordinates and the identified imaging coordinates; the fitting results show that the imaging deviations obey a Gaussian distribution. This paper then simulates positioning while taking the imaging deviation into account; the positioning results show that the positioning error can be controlled within 1.3 cm under the signal mechanism and identification methods we propose.</p>
      </abstract>
      <kwd-group>
<kwd>Visible light positioning</kwd>
        <kwd>image sensor</kwd>
        <kwd>Gaussian distribution</kwd>
        <kwd>image point coordinates</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>The VLP system is mainly composed of a receiver and a transmitter. The transmitter is generally a light source, and receivers are distinguished as either a Photo-Diode (PD) or an Image-Sensor (IS) [11]. A PD is sensitive to the beam direction, which greatly limits the mobility of the positioning terminal, and PD-based positioning requires extremely accurate angle and received-signal-strength measurements, otherwise large positioning errors result [12]. In contrast, the IS is widely incorporated into various mobile devices, so it can complete high-accuracy positioning without additional equipment, using just a mobile phone camera [13].</p>
<p>In VLP based on IS, the actual positions of the light sources are encoded and loaded onto the light sources, which emit them in the form of high-frequency flashing. Since the flicker frequency of the modulated light sources reaches the kHz range, the CMOS image sensor can detect the changes and image them as light and dark stripe images [14]. The receiver identifies the images, decodes the ID information of the light sources, and then calculates the imaging coordinates of the light sources (u′_i, v′_i). By querying the pre-established ID database, it matches the three-dimensional (3-D) position coordinates of the light sources (x_i, y_i, z_i) [15]. The position of the IS can then be calculated from the geometric relationship between the object point coordinates and the image point coordinates; that is, positioning is achieved. When calculating the coordinates of the light sources on the imaging plane, inevitable quantization errors arise owing to the variety of stripe images, and these lead to positioning deviations. Therefore, this paper proposes an easily identifiable signaling mechanism as the emission signal of the light sources and proposes two identification algorithms based on this signal mechanism. By fitting the imaging deviations of the light sources obtained by actual measurement, we find that the imaging deviations conform to a Gaussian distribution. After taking the imaging deviation into account, we test positioning using the obtained data, and the results show that the positioning accuracy of the algorithms proposed in this paper is better than that of direct identification.</p>
<p><bold>2. Single camera positioning algorithm</bold></p>
<p>The goal of VLP based on IS is to obtain the position and attitude information of the IS. To simplify the experiments, we only discuss the case where the IS is fixed at a constant height. Fig. 1 shows a typical indoor positioning system. We installed 4 LEDs on the ceiling of the room such that the LEDs are not collinear. Each LED transmits its unique ID signal, representing its own 3-D coordinate information, into the optical channel through OOK modulation. We use a mobile phone as the receiver to capture images of the light sources and then detect the IDs and image coordinates of the LEDs. After obtaining this information, we use the following algorithm to compute the position of the receiver R = (x_R, y_R, z_R).</p>
      <p>Let the 3-D positions of the LEDs be (x_i, y_i, z_i), i = 1, 2, 3, 4, where i is the index of the light source. The height of the mobile phone below the ceiling is h, and the distance from each LED to the lens is denoted d_i. They can be expressed as (1)-(2):
(x_i − x_R)² + (y_i − y_R)² + (z_i − z_R)² = d_i²,  i = 1, 2, 3, 4   (1)
z_i = z_R + h   (2)</p>
      <p>After imaging by the IS, we obtain the position coordinates (u′_i, v′_i), i = 1, 2, 3, 4 of each LED on the imaging plane, where i is the index of the imaged light source.</p>
      <p>[Fig. 1: geometry of the single-camera positioning system, showing the LEDs, the lens P with focal length f, the image point Q at (u1′, v1′) with image-side distance d1′, and the X, Y, Z axes.]</p>
      <p>We can then calculate the distance r′_i from each image point to the image centre O = (0, 0) and the distance d′_i from each image point to the optical centre of the lens, as shown in (3)-(4):
r′_i = √((u′_i − 0)² + (v′_i − 0)²),  i = 1, 2, 3, 4   (3)
d′_i = √(f² + r′_i²),  i = 1, 2, 3, 4   (4)</p>
      <p>According to the triangle similarity principle, the distance from each LED to the lens can be calculated as shown in (5):
d_i / d′_i = h / f  ⇒  d_i = h · d′_i / f   (5)
where f is the focal length of the camera, which is known. As a result, (1) can be rewritten as (6):</p>
      <p>(x_i − x_R)² + (y_i − y_R)² = h² · (d′_i² / f² − 1)   (6)</p>
      <p>Therefore, the receiver coordinates (x_R, y_R) in (6) can be obtained by the least squares method, as shown in (7), where A and b are given by (8)-(9).</p>
      <p>[x_R ; y_R] = (AᵀA)⁻¹ Aᵀ b   (7)</p>
      <p>A = 2 · [ x_2 − x_1, y_2 − y_1 ; x_3 − x_1, y_3 − y_1 ; x_4 − x_1, y_4 − y_1 ]   (8)</p>
      <p>b = [ (x_2² − x_1²) + (y_2² − y_1²) + h²·(d′_1² − d′_2²)/f² ; (x_3² − x_1²) + (y_3² − y_1²) + h²·(d′_1² − d′_3²)/f² ; (x_4² − x_1²) + (y_4² − y_1²) + h²·(d′_1² − d′_4²)/f² ]   (9)</p>
      <p><bold>3. The proposed identification method</bold></p>
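<p>As a cross-check of (6)-(9), the least squares solution can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: the function name and the synthetic geometry are ours, and it assumes the image-side distances d′_i have already been obtained from (3)-(4).</p>
<preformat>
```python
import numpy as np

def solve_receiver_xy(led_xy, d_img, h, f):
    """Least-squares receiver position (x_R, y_R) from Eqs. (6)-(9).

    led_xy : (4, 2) array of LED world coordinates (x_i, y_i)
    d_img  : length-4 array of image-side distances d'_i from Eq. (4)
    h      : vertical distance between the lens and the ceiling
    f      : focal length of the camera
    """
    x, y = led_xy[:, 0], led_xy[:, 1]
    # Eq. (8): subtract the i = 1 equation of (6) from i = 2, 3, 4
    A = 2.0 * np.column_stack([x[1:] - x[0], y[1:] - y[0]])
    # Eq. (9): the corresponding right-hand sides
    b = (x[1:]**2 - x[0]**2) + (y[1:]**2 - y[0]**2) \
        + h**2 * (d_img[0]**2 - d_img[1:]**2) / f**2
    # Eq. (7): least squares solution [x_R; y_R]
    return np.linalg.lstsq(A, b, rcond=None)[0]
```
</preformat>
<p>Feeding in exact image-side distances generated from a known receiver position returns that position, which confirms the signs in (8)-(9).</p>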
      <sec id="sec-1-1">
        <title>3.1. Transmit signal design</title>
        <p>To obtain the image point coordinates of the light sources for the positioning algorithm proposed in section 2, we need to identify the captured light source images. When obtaining the imaging coordinates of the light source, the imaged stripe images are not all complete circular contours, owing to the high scanning frequency of the camera, as shown in Fig. 2. This causes inevitable quantization errors and affects the subsequent positioning. Therefore, we propose a design method for a light source emission signal that is easy to identify.</p>
        <p>[Fig. 2: examples of captured stripe images, panels (a)-(d).]</p>
        <p>Under the premise of satisfying indoor lighting requirements, we choose every 4 light sources as a positioning light source group. The positions of these 4 light sources are not on the same straight line. The emission data of each light source consist of two parts: the information sequence C and the all-bright sequence S. The information sequence C is the ID information that matches the actual 3-D position of the light source. Because of the excellent autocorrelation characteristics of pseudo-random codes, we choose a pseudo-random code as the ID of the light source. The all-bright sequence S, as the name implies, is used to identify the image point of the light source and to assist ID demodulation. The transmitter uses OOK modulation to load the 3-D positions of the light sources onto the light sources at a frequency of 3 kHz. Owing to the high flicker frequency, the human eye cannot perceive this flickering, which not only ensures the lighting function of the light source but also completes the work of the positioning transmitting end.</p>
        <p>To verify the superiority of the proposed signal, we discuss the situation of 4 indoor light sources, in which the requirement on the number of IDs is not high. Therefore, a pseudo-noise sequence C_i = [c_i1, c_i2, ..., c_iL1], i = 1, 2, 3, 4 (including but not limited to shifts of one sequence) with a code length of L1 = 7 and a duration of T/4 is selected and assigned to each light source. To ensure that the receiver can identify the light source quickly and accurately, two all-bright sequences S = [s_1, s_2, ..., s_L2] of duration T/4 and code length L2 = 7 are added for each light source. Since the light source is grounded at the cathode, S here represents the binary all-"1" symbol. To collect the imaging of the information sequence and the all-bright sequence of every light source within T, the all-bright sequences need to be shifted; that is, there is always a T/4 delay between the all-bright sequences of adjacent light sources, as shown in Fig. 3. Therefore, the total emission sequences of the light source group within T are given by (10), where "⊗" represents the Kronecker product and "⊕" represents modulo-two addition. The light source group broadcasts the total emission sequence cyclically with period T, and T/4 is the exposure time of the receiver.</p>
        <p>[Fig. 3: emission timing of LED1-LED4 within one period T, divided into four slots of duration T/4.]</p>
        <p>E_1 = ([0 0 1 1] ⊗ C_1) ⊕ ([1 1 0 0] ⊗ S)
E_2 = ([1 0 0 1] ⊗ C_2) ⊕ ([0 1 1 0] ⊗ S)
E_3 = ([1 1 0 0] ⊗ C_3) ⊕ ([0 0 1 1] ⊗ S)
E_4 = ([0 1 1 0] ⊗ C_4) ⊕ ([1 0 0 1] ⊗ S)   (10)</p>
        <p>In addition, there may be more than 4 light sources in an actual environment, so L1 and L2 can be lengthened according to the number of light sources in the actual environment, so as to ensure that there are enough IDs for positioning.</p>
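<p>The slot structure of (10) can be sketched as follows. This is an illustrative sketch under our own assumptions: the 7-chip PN codes are hypothetical example values, and since the gating masks in (10) are complementary, each T/4 slot carries either the PN code C_i or the all-bright sequence S.</p>
<preformat>
```python
import numpy as np

L1 = L2 = 7
S = np.ones(L2, dtype=int)            # all-bright sequence: binary all "1"
C = np.array([[1, 1, 1, 0, 1, 0, 0],  # hypothetical 7-chip PN codes,
              [0, 1, 1, 1, 0, 1, 0],  # e.g. shifts of one m-sequence
              [0, 0, 1, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 1, 0]])
id_mask = np.array([[0, 0, 1, 1],     # slots carrying C_i, per Eq. (10)
                    [1, 0, 0, 1],
                    [1, 1, 0, 0],
                    [0, 1, 1, 0]])
# E_i = (mask_i kron C_i) xor ((1 - mask_i) kron S)
E = [(np.kron(id_mask[i], C[i]) ^ np.kron(1 - id_mask[i], S)).tolist()
     for i in range(4)]
```
</preformat>
<p>With these masks, exactly two LEDs are all-bright in every T/4 exposure window, so the receiver always has a clean circular spot to locate while the other LEDs carry ID chips.</p>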
      </sec>
      <sec id="sec-1-2">
        <title>3.2. LED light source identification</title>
        <p>We select the IS mounted on a mobile phone as the receiver and align it with the light source array within time T to obtain 4 photos P_k, k = 1, 2, 3, 4 continuously, where k is the index of the photo. In this step, the acquired colour images are first transformed into grayscale images and then binary images. We then segment the binary images to process each light source in the captured images. Finally, we obtain the processed images P′_k and pixel matrices P_k(m, n)_{M×N}, k = 1, 2, 3, 4, where (m, n) are the image coordinates of the pixels, N is the number of pixels per row (i.e., horizontally), and M is the number of pixels per column (i.e., vertically). After completing the above steps, the identification algorithm is used to extract the centroid coordinates of the light sources.</p>
        <p>The traditional identification algorithm finds the upper, lower, left, and right edges of the light source fringe image and takes the midpoint as the imaging coordinate, or fits the circular edge of the light source and takes the centre of the fitted circle as the imaging coordinate. These identification methods all have errors, as shown in Fig. 2. Based on the description in section 3.1, we propose two identification methods here, shown in Table 1 and Table 2: (1) Algorithm 1: the image method; (2) Algorithm 2: the fitting method.</p>
        <p>We detect the area of each light source in the images P′_k and calculate its two-dimensional pixel coordinates. Taking LED1 as an example, the main processes of the two methods are described in Algorithm 1 and Algorithm 2, and some identification results are shown in Fig. 4.</p>
        <p>1. Traverse the column pixels of any one P_1k(m, n)_{M×N}, k = 1, 2, 3, 4;
2. Record the column n_1 whose pixel values are first non-zero and the column n_2 whose pixel values become 0 again;
3. Traverse the row pixels of the P_1k(m, n)_{M×N}, k = 1, 2, 3, 4 selected in step 1;
4. Record the row m_1 whose pixel values are first non-zero and the row m_2 whose pixel values become 0 again;
5. Calculate the pixel sum:
E_k = Σ_{m=1}^{M} P_1k(m, n)|_{n=(n_1+n_2)/2}   (11)
6. Take the centroid coordinates ((n_1+n_2)/2, (m_1+m_2)/2) of the light source in the image where E_k is maximal as the imaging coordinates of LED1, (u′_1, v′_1).</p>
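<p>The steps above can be sketched for one binary light-source image as follows. This is our own minimal sketch, assuming P_1k is a 0/1 pixel matrix; it reads the first and last non-zero column and row rather than scanning explicitly.</p>
<preformat>
```python
import numpy as np

def centroid_algorithm1(img):
    """Sketch of Algorithm 1 (image method) for one binary image.

    img : (M, N) array of 0/1 pixels for one segmented light source.
    Returns (E, (u, v)): E is the central-column pixel sum of Eq. (11)
    and (u, v) = ((n1 + n2)/2, (m1 + m2)/2) the candidate centroid.
    Across the 4 photos, the centroid of the image with maximal E is
    taken as the imaging coordinate (u'_1, v'_1).
    """
    cols = np.flatnonzero(img.any(axis=0))    # columns containing lit pixels
    rows = np.flatnonzero(img.any(axis=1))    # rows containing lit pixels
    n1, n2 = cols[0], cols[-1]                # steps 1-2: column boundaries
    m1, m2 = rows[0], rows[-1]                # steps 3-4: row boundaries
    mid_col = (n1 + n2) // 2
    E = int(img[:, mid_col].sum())            # step 5: Eq. (11)
    return E, ((n1 + n2) / 2, (m1 + m2) / 2)  # step 6: candidate centroid
```
</preformat>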
      </sec>
    </sec>
    <sec id="sec-2">
      <title>4. Experiment and analysis</title>
      <p>Considering the strictness of synchronization and to simplify the experiment, we use a smartphone to capture video at a frame rate of 24 fps and convert the video into frame images, which are then processed in Matlab. All the key system parameters adopted are provided in Table 3.</p>
      <p>To verify the proposed methods, we use the camera of a smartphone (iPhone 12) to capture video of the light sources carrying the information sequence described in section 3.2 at the test point (25, 25, 10). Under the condition that the height of the mobile phone at the test point remains unchanged, the posture of the phone can be adjusted so that the LEDs are imaged as circular spots as far as possible. We then convert the video into 12,000 light source images and use the two algorithms proposed in section 3.2 to calculate the LED centroid coordinates for every 4 images in sequence. Under a high shutter speed, the background is completely black and has almost no effect on the light source, but some noise interference is still introduced inevitably. The few coordinate points with large deviations in the upper left corner of Fig. 5 are caused by noise interference: there is a large coordinate deviation because of interfering light spots in individual light source frames. Therefore, we use the 3σ criterion here to remove individual noisy data. As shown in the enlarged part of Fig. 5, after removing the noisy data the number of samples is 3000, and it can be seen that all 4 methods deviate from the ideal imaging coordinates.</p>
      <p>1. Same as steps 1-5 of Algorithm 1;
2. Expand the boundary values [n_1, n_2], [m_1, m_2] of the light source in the image where E_k is maximal by a range threshold of 10 pix on each side, and set the pixel values outside [n_1 − 10 : n_2 + 10], [m_1 − 10 : m_2 + 10] to 0;
3. Use the Sobel operator to extract the edge points (x_e, y_e) of the light source processed in step 2;</p>
      <sec id="sec-2-1">
        <title>4. Find the circle</title>
        <p>F = @(a) (x_e − a(1))² + (y_e − a(2))² − a(3)²   (12)
that matches the edge points extracted in step 3 by least squares fitting, and take the fitted centre (a(1), a(2)) as the imaging coordinates of LED1, (u′_1, v′_1).</p>
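<p>Eq. (12) is written as a Matlab-style anonymous residual for nonlinear least squares. An equivalent and simpler route, sketched here under our own choice of method, is the standard Kåsa linearisation, which fits the same circle (x_e − a1)² + (y_e − a2)² = a3² by linear least squares:</p>
<preformat>
```python
import numpy as np

def fit_circle(xe, ye):
    """Least-squares circle fit to edge points (Algorithm 2, step 4).

    Rewrites x^2 + y^2 = 2*a1*x + 2*a2*y + (a3^2 - a1^2 - a2^2) and
    solves the linear system for the centre (a1, a2) and radius a3.
    """
    A = np.column_stack([2 * xe, 2 * ye, np.ones_like(xe)])
    b = xe**2 + ye**2
    (a1, a2, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    a3 = np.sqrt(c + a1**2 + a2**2)
    return a1, a2, a3
```
</preformat>
<p>The fitted centre (a1, a2) then serves as the imaging coordinate, exactly as in step 4.</p>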
        <sec id="sec-2-1-1">
          <title>4.1. Identification error</title>
          <p>In order to see the errors of the different identification methods more clearly, we choose the Root Mean Square Error (RMSE) to calculate the identification error:
RMSE_i = √((u′_i − û′_i)² + (v′_i − v̂′_i)²)   (13)
where (û′_i, v̂′_i) are the imaging coordinates of LEDi measured by the above 4 methods at the test point, and (u′_i, v′_i) are the ideal imaging coordinates at the test point.</p>
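<p>A minimal sketch of (13) and of the empirical CDF used below; the function names are ours, and the measured/ideal coordinate convention follows the text above.</p>
<preformat>
```python
import numpy as np

def identification_errors(est_uv, ideal_uv):
    """Eq. (13): Euclidean pixel error of each identified image point."""
    est_uv = np.asarray(est_uv, dtype=float)
    ideal_uv = np.asarray(ideal_uv, dtype=float)
    return np.sqrt(((est_uv - ideal_uv) ** 2).sum(axis=1))

def empirical_cdf(err, x):
    """Fraction of samples whose error does not exceed x pixels."""
    return float(np.mean(np.less_equal(np.asarray(err), x)))
```
</preformat>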
          <p>The cumulative distribution function (CDF) is the integral of the probability density function and can describe the probability distribution of the identification error. Taking LED1 as an example, the error cumulative distribution functions of the different identification methods are shown in Fig. 6 and Fig. 7. Fig. 6 is the deviation diagram between the centroid coordinates of the fringe images identified directly and the ideal imaging coordinates.</p>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <p>Fig. 7 is the deviation diagram between the centroid coordinates obtained by the algorithms proposed in section 3.2 and the ideal imaging coordinates.</p>
        <p>It can be seen that the coordinate error of the light source identified by fitting the stripe image directly is 0-50 pix, with a 57% chance that the error exceeds 10 pix. The error of the image method is 0-60 pix, and the probability of exceeding 10 pix is 68%. Both have large errors. The errors when using the signal mechanism proposed in section 3.1 to identify the all-bright spot are all about 0-3 pix. Among them, Algorithm 1 calculates the pixels of the images, and the pixel coordinates are integers, so many error values are repeated.</p>
        <p>We fit the coordinate deviation distributions obtained by the above four methods. Fig. 8 shows the abscissa deviation distributions fitted from the histograms of the coordinate deviations of the 4 methods, and Fig. 9 shows the ordinate deviation distributions fitted in the same way. It can be seen that the deviations of the actual imaging coordinates of the light source are in line with the Gaussian distribution shown in (14); the fitted parameters are given in Table 4.
y = y_0 + A_0 · e^(−(x − μ)² / (2σ²))   (14)
Accordingly, the measured imaging coordinates can be written as:
(û′_i, v̂′_i) = (u′_i, v′_i) + (N_u, N_v)   (15)</p>
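<p>The deviation model of (14)-(15) can be sketched as follows; the parameter values are placeholders for the fitted entries of Table 4, and the function name is ours.</p>
<preformat>
```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def noisy_image_points(ideal_uv, mu_u, sigma_u, mu_v, sigma_v, n):
    """Eq. (15): perturb an ideal image point with the independent
    Gaussian deviations N_u(mu_u, sigma_u^2) and N_v(mu_v, sigma_v^2)."""
    u0, v0 = ideal_uv
    u = u0 + rng.normal(mu_u, sigma_u, size=n)
    v = v0 + rng.normal(mu_v, sigma_v, size=n)
    return u, v
```
</preformat>
<p>Positioning can then be simulated by pushing these perturbed image points through the algorithm of section 2, which mirrors how the positioning errors discussed below are obtained.</p>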
        <sec id="sec-2-2-1">
          <title>4.2. Positioning error</title>
          <p>In our positioning system, the source of the positioning error is mainly the quantization error caused by the image identification of the LEDs described in section 4.1, so the ideal imaging coordinates of the light source need to be rewritten as (15), where N_u(μ_u, σ_u²) and N_v(μ_v, σ_v²) are mutually independent Gaussian white noises on the horizontal and vertical axes of the light source on the imaging plane; their means and standard deviations are shown in Table 4.</p>
          <p>Considering the identification error, we run positioning tests on the data obtained by each method at the test point. We then calculate the positioning error, which is the RMSE between the estimated position results and the test point. The statistical result is shown in Fig. 10.</p>
          <p>It can be seen that Algorithm 1 and Algorithm 2 perform better than the direct solution. More than 90% of the positioning errors are within 1.69 cm when using the fitted stripe images directly, and more than 90% are within 1.77 cm when using the stripe coordinates identified by the image method directly. In contrast, more than 90% of the positioning errors are within 1.26 cm when using Algorithm 1, and within 1.25 cm for Algorithm 2. Thus the two algorithms proposed in this paper both have high accuracy.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5. Conclusion</title>
      <p>Indoor visible light positioning based on a single camera suffers a large error when the directly calculated fringe coordinates are used as the imaging centroid coordinates of the light sources. In this paper, we proposed a light source emission signal mechanism combining an information sequence and an all-bright sequence, and we compared and fitted the imaging deviations of different methods. The results show that the imaging deviations obey a Gaussian distribution. We simulated positioning after taking the identification error into account. The results show that the two identification methods based on this signal mechanism can reduce the positioning error to within 1.3 cm. The positioning accuracy is greatly improved compared with positioning methods that identify the light source directly.</p>
      <p>This paper performs multiple positioning tests at the test point, and the statistical results show the superiority of the proposed signal in positioning accuracy. In future work, we will consider transplanting the positioning system into a real environment and taking into account the influence of non-positioning light sources (such as sunlight) to complete random and dynamic positioning.</p>
    </sec>
    <sec id="sec-4">
      <title>6. Acknowledgments</title>
      <p>This research was supported by the National Natural Science Foundation of China (62171075).</p>
      <p>Conference on Automation, Electronics and Electrical Engineering (AUTEEE), 2018, pp. 202-208. doi:10.1109/AUTEEE.2018.8720798.
[13] H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, Z. Chen, A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon, IEEE Photonics Journal 12 (2020) 1-12. doi:10.1109/JPHOT.2020.3032448.
[14] W. Guan, L. Huang, B. Hussain, C. P. Yue, Robust robotic localization using visible light positioning and inertial fusion, IEEE Sensors Journal 22 (2022) 4882-4892. doi:10.1109/JSEN.2021.3053342.
[15] Y. Wu, X. Liu, W. Guan, B. Chen, X. Chen, C. Xie, High-speed 3D indoor localization system based on visible light communication using differential evolution algorithm, Optics Communications 424 (2018) 177-189. doi:10.1016/j.optcom.2018.04.062.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Mutka</surname>
          </string-name>
          ,
          <article-title>Crisp: cooperation among smartphones to improve indoor position information</article-title>
          ,
          <source>Wireless Networks</source>
          <volume>24</volume>
          (
          <year>2018</year>
          )
          <fpage>867</fpage>
          -
          <lpage>884</lpage>
          . URL: http://dx.doi.org/10.1007/s11276-016-1373-1.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Smieja</surname>
          </string-name>
          ,
          <article-title>Zigbee phase shift measurement approach to mobile inspection robot indoor positioning techniques</article-title>
          ,
          <source>Diagnostyka</source>
          <volume>19</volume>
          (
          <year>2018</year>
          )
          <fpage>101</fpage>
          -
          <lpage>107</lpage>
          . URL: http://dx.doi.org/10.29354/diag/94498.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Redzic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Laoudias</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Kyriakides</surname>
          </string-name>
          ,
          <article-title>Image and wlan bimodal integration for indoor user localization</article-title>
          ,
          <source>IEEE Transactions on Mobile Computing</source>
          <volume>19</volume>
          (
          <year>2020</year>
          )
          <fpage>1109</fpage>
          -
          <lpage>1122</lpage>
          . URL: http://dx.doi.org/10.1109/TMC.2019.2903044.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pei</surname>
          </string-name>
          , J. Liu,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guinness</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kröger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <surname>L. Chen,</surname>
          </string-name>
          <article-title>The evaluation of wifi positioning in a bluetooth and wifi coexistence environment</article-title>
          , in: 2012 Ubiquitous Positioning,
          <source>Indoor Navigation, and Location Based Service (UPINLBS)</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . doi:10.1109/UPINLBS.2012.6409768.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Paszek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Grzechca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Becker</surname>
          </string-name>
          ,
          <article-title>Design of the uwb positioning system simulator for los/nlos environments</article-title>
          ,
          <source>Sensors</source>
          <volume>21</volume>
          (
          <year>2021</year>
          ). doi:10.3390/s21144757.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.-F.</given-names>
            <surname>Hsu</surname>
          </string-name>
          , C.-S. Cheng, W.-C. Chu,
          <string-name>
            <surname>Compass:</surname>
          </string-name>
          <article-title>An active rfid-based real-time indoor positioning system</article-title>
          ,
          <source>Human-centric Computing and Information Sciences</source>
          <volume>12</volume>
          (
          <year>2022</year>
          )
          <fpage>17921</fpage>
          -
          <lpage>17942</lpage>
          . URL: http://dx.doi.org/10.22967/HCIS.2022.12.007.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Cheng</surname>
          </string-name>
          , C. Xiao,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>A single led visible light positioning system based on geometric features and cmos camera</article-title>
          ,
          <source>IEEE Photonics Technology Letters</source>
          <volume>32</volume>
          (
          <year>2020</year>
          )
          <fpage>1097</fpage>
          -
          <lpage>1100</lpage>
          . doi:10.1109/LPT.2020.3012476.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Thompson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Haas</surname>
          </string-name>
          ,
          <article-title>A survey of positioning systems using visible LED lights</article-title>
          ,
          <source>IEEE Communications Surveys Tutorials</source>
          <volume>20</volume>
          (
          <year>2018</year>
          )
          <fpage>1963</fpage>
          -
          <lpage>1988</lpage>
          . doi:10.1109/COMST.2018.2806558.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Raza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lolic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Akhter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Liut</surname>
          </string-name>
          ,
          <article-title>Comparing and evaluating indoor positioning techniques</article-title>
          ,
          <source>in: 2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi:10.1109/IPIN51156.2021.9662632.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Abe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Sato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Watanabe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hashizume</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sugimoto</surname>
          </string-name>
          ,
          <article-title>Smartphone positioning using an ambient light sensor and reflected visible light</article-title>
          ,
          <source>in: 2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi:10.1109/IPIN51156.2021.9662520.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <article-title>Signature codes in visible light positioning</article-title>
          ,
          <source>IEEE Wireless Communications</source>
          <volume>28</volume>
          (
          <year>2021</year>
          )
          <fpage>178</fpage>
          -
          <lpage>184</lpage>
          . URL: http://dx.doi.org/10.1109/MWC.001.2000540.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>W.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A novel three-dimensional indoor localization algorithm based on visual visible light communication using single LED</article-title>
          , in: 2018 IEEE International
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>