An Easily Identifiable Emission Waveform Design in
Visible Light Positioning
Qing Wang1 , Deyue Zou1,*
1
    School of Information and Communication Engineering, Dalian University of Technology, Dalian, China


                                         Abstract
Visible light positioning (VLP) based on an image sensor uses light sources to emit identity (ID) signals containing the positions of the light sources, and uses a mobile phone as the receiver. Stripe images carrying the ID information are acquired through the rolling shutter effect of the Complementary Metal Oxide Semiconductor (CMOS) image sensor, and positioning is then completed using the geometric relationship between the object points and the image points of the light sources. Quantization error is inevitable when obtaining the imaging coordinates from fringe images. Based on this, we design an easily identifiable light source signal mechanism, which consists of two parts: an information sequence and an all-bright sequence. We use different methods to obtain the imaging coordinates of the light sources and fit the deviation between the ideal imaging coordinates and the identified imaging coordinates; the fitting results show that the imaging deviations obey a Gaussian distribution. This paper then simulates positioning while taking the imaging deviation into account; the results show that the positioning error can be kept within 1.3 cm under the proposed signal mechanism and identification methods.

                                         Keywords
                                         Visible light positioning, image sensor, Gaussian distribution, image point coordinates




1. Introduction
Outdoor positioning systems such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) cannot position accurately indoors because of the complex indoor environment. In large shopping malls, museums, subway stations, etc., indoor position-based services have become an urgent need[1]. In recent years, indoor positioning technologies have emerged one after another, such as ZigBee[2], Wireless Local Area Network (WLAN)[3], Bluetooth[4], Ultra Wide Band (UWB)[5], and Radio Frequency Identification (RFID)[6]. However, they are not widely used because of inherent flaws, including low positioning accuracy and expensive devices. With the development of light-emitting diode (LED) light sources, visible light positioning (VLP) has become a research hotspot[7]. As an alternative to those techniques, VLP uses visible light sources as beacons for position estimation. Meanwhile, it can use the existing lighting infrastructure with minimal modifications, which greatly reduces deployment costs[8, 9, 10].


IPIN 2022 WiP Proceedings, September 5 - 7, 2022, Beijing, China
* Corresponding author.
" hlgwangqing@163.com (Q. Wang); zoudeyue@dlut.edu.cn (D. Zou)
                                       Β© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
    CEUR
    Workshop
    Proceedings
                  http://ceur-ws.org
                  ISSN 1613-0073
                                       CEUR Workshop Proceedings (CEUR-WS.org)
   The VLP system is mainly composed of a receiver and a transmitter. The transmitter is generally a light source, and receivers are divided into Photo-Diode (PD) based and Image Sensor (IS) based[11]. A PD is sensitive to the beam direction, which greatly limits the mobility of the positioning terminal; PD-based positioning also demands extremely accurate angle and received-signal-strength measurements, otherwise large positioning errors arise[12]. In contrast, the IS is widely integrated into various mobile devices, so high-accuracy positioning can be completed with nothing more than a mobile phone camera, without additional equipment[13].
   In IS-based VLP, the actual positions of the light sources are encoded and loaded onto the light sources, which emit them as high-frequency flashing. The flicker frequency of the modulated light sources reaches the kHz range; a CMOS image sensor can detect the changes and image them as light and dark stripe images[14]. The receiver identifies the images, decodes the ID information of the light sources, and then calculates the imaging coordinates of the light sources $(u_{i'}, v_{i'})$. By querying a pre-established ID database, it matches the three-dimensional (3-D) position coordinates of the light sources $(X_i, Y_i, Z_i)$[15]. The position of the IS can then be calculated through the geometric relationship between the object point coordinates and the image point coordinates, that is, positioning is achieved. However, when calculating the coordinates of the light sources on the imaging plane, the variety of stripe images causes inevitable quantization errors, which lead to positioning deviations. Therefore, this paper proposes an easily identifiable signaling mechanism as the emission signal of the light sources, together with two identification algorithms based on this signal mechanism. By fitting the imaging deviations of the light sources obtained from actual measurements, we find that the imaging deviations conform to a Gaussian distribution. After taking the imaging deviation into account, we run positioning tests on the obtained data; the results show that the positioning accuracy of the proposed algorithms is better than that of direct identification.


2. Single camera positioning algorithm
IS-based VLP aims to obtain the position and attitude information of the IS. To simplify the experiments, we only discuss the case where the IS is fixed at a constant height. Fig. 1 shows a typical indoor positioning system. We installed 4 LEDs on the ceiling of the room, placed so that they are not collinear. Each LED transmits its unique ID signal, representing its own 3-D coordinates, into the optical channel through OOK modulation. We use a mobile phone as the receiver to capture images of the light sources and then detect the IDs and image coordinates of the LEDs. With this information, we use the following algorithm to compute the position of the receiver $P = (X_P, Y_P, Z_P)$.
   Let the 3-D positions of the LEDs be $(X_i, Y_i, Z_i)$, $i = 1, 2, 3, 4$, where $i$ indexes the light sources. The height of the mobile phone below the ceiling is $H$, and the distance from each LED to the lens ($P$) is denoted $R_1, R_2, R_3, R_4$. They satisfy (1)-(2):

$$(X_i - X_P)^2 + (Y_i - Y_P)^2 + (Z_i - Z_P)^2 = R_i^2, \quad i = 1, 2, 3, 4 \tag{1}$$
$$Z_i = H + Z_P \tag{2}$$
   After imaging by the IS, we obtain the position coordinates $(u_{i'}, v_{i'})$, $i' = 1, 2, 3, 4$ of each LED on the imaging plane, where $i'$ indexes the imaged light sources.
Figure 1: Visible light positioning system (LED1-LED4 with diameters D1-D4 on the ceiling; camera lens $P$ at height $H$ below, focal length $f$, image centre $Q$, image-point distance $d_{1'}$ to $(u_{1'}, v_{1'})$; axes $X$, $Y$, $Z$).


Then we can calculate the distances $d_{i'}$ and $r_{i'}$ from each image point to the image centre $Q = (u_0, v_0)$ and to the optical axis of the lens, as shown in (3)-(4):
$$d_{i'} = \sqrt{(u_{i'} - u_0)^2 + (v_{i'} - v_0)^2}, \quad i' = 1, 2, 3, 4 \tag{3}$$
$$r_{i'} = \sqrt{f^2 + d_{i'}^2}, \quad i' = 1, 2, 3, 4 \tag{4}$$
   According to the triangle similarity principle, the distance from each LED to the lens can be calculated as shown in (5):
$$\frac{R_i}{r_{i'}} = \frac{H}{f} \;\Rightarrow\; R_i = \frac{H}{f} \cdot r_{i'} \tag{5}$$
where $f$ is the known focal length of the camera. As a result, (1) can be rewritten as (6):
$$(X_i - X_P)^2 + (Y_i - Y_P)^2 = H^2 \cdot \left(\frac{r_{i'}^2}{f^2} - 1\right) \tag{6}$$
   Therefore, the receiver coordinates $P$ in (6) can be obtained by the least squares method:
$$[X_P; Y_P] = (M^T M)^{-1} M^T D \tag{7}$$
where $M$ and $D$ are given in (8)-(9):
$$M = 2\begin{bmatrix} X_2 - X_1 & Y_2 - Y_1 \\ X_3 - X_1 & Y_3 - Y_1 \\ X_4 - X_1 & Y_4 - Y_1 \end{bmatrix} \tag{8}$$
$$D = \begin{bmatrix} (X_2^2 - X_1^2) + (Y_2^2 - Y_1^2) + H^2 \cdot \frac{r_{1'}^2 - r_{2'}^2}{f^2} \\ (X_3^2 - X_1^2) + (Y_3^2 - Y_1^2) + H^2 \cdot \frac{r_{1'}^2 - r_{3'}^2}{f^2} \\ (X_4^2 - X_1^2) + (Y_4^2 - Y_1^2) + H^2 \cdot \frac{r_{1'}^2 - r_{4'}^2}{f^2} \end{bmatrix} \tag{9}$$
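For concreteness, the following Python sketch (our illustration, not code from the paper) assembles $M$ and $D$ as in (8)-(9) and solves (7) by least squares. The LED layout and camera parameters are example values based on Table 3, and all function and variable names are our own.

    import numpy as np

    def solve_position(led_xy, d, H, f):
        # r_i'^2 from Eq. (4)
        r2 = f ** 2 + d ** 2
        # Rows of M and D from Eqs. (8)-(9): LED i (i = 2, 3, 4) against LED 1
        M = 2.0 * (led_xy[1:] - led_xy[0])
        D = (np.sum(led_xy[1:] ** 2, axis=1) - np.sum(led_xy[0] ** 2)
             + H ** 2 * (r2[0] - r2[1:]) / f ** 2)
        # Eq. (7): least-squares solution for [X_P; Y_P]
        return np.linalg.lstsq(M, D, rcond=None)[0]

    # LED layout of Table 3 (cm); camera 40 cm below the ceiling, f in cm
    led_xy = np.array([[10.0, 40.0], [40.0, 40.0], [40.0, 10.0], [10.0, 10.0]])
    H, f = 40.0, 2.6
    d = f * np.linalg.norm(led_xy - [25.0, 25.0], axis=1) / H  # ideal d_i'
    print(solve_position(led_xy, d, H, f))                     # -> [25. 25.]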


3. The proposed identification method
3.1. Transmit signal design
To obtain the image point coordinates of the light sources used by the positioning algorithm in section 2, we need to identify the captured light source images. When obtaining the imaging coordinates, the imaged stripe patterns are not all complete circular contours because of the high scanning frequency of the camera, as shown in Fig. 2; this causes inevitable quantization errors and affects subsequent positioning. Therefore, we propose an easily identifiable design for the light source emission signal.




Figure 2: Different LED stripe images captured by the CMOS sensor (panels (a)-(d)).


   Under the premise of satisfying indoor lighting, we choose every 4 light sources as a positioning light source group, with the 4 light sources placed so that they are not collinear. The emission data of each light source consist of two parts: an information sequence $S_e$ and an all-bright sequence $S$. The information sequence $S_e$ is the ID information matched to the actual 3-D position of the light source. Owing to the excellent autocorrelation properties of pseudo-random codes, we choose a pseudo-random code as the ID of each light source. The all-bright sequence $S$, as the name implies, is used to identify the image point of the light source and to support ID demodulation. The transmitter uses OOK modulation to load the 3-D position of each light source onto it at a frequency of 3 kHz. Because of the high flicker frequency, the human eye cannot perceive the flickering, which preserves the lighting function of the light sources while completing the work of the positioning transmitter.
   To verify the superiority of the proposed signal, we discuss the situation of 4 indoor light sources, for which the required number of IDs is small. Therefore, a pseudo-noise sequence $S_{e_i} = [\tau_i^1, \tau_i^2, \ldots, \tau_i^{N_1}]$, $i = 1, 2, 3, 4$ (including but not limited to shifts) with code length $N_1 = 7$ and duration $T/4$ is assigned to each light source. To ensure that the receiver can identify the light sources quickly and accurately, two all-bright sequences $S_i = [s_i^1, s_i^2, \ldots, s_i^{N_2}]$ of duration $T/4$ and code length $N_2 = 7$ are added for each light source. Since the light source is grounded at the cathode, $s_i^{N_2}$ here represents the binary all-"1" symbol. To collect the imaging of both the information sequence and the all-bright sequence within $T$, the all-bright sequence needs to be shifted; that is, there is always a $T/4$ delay between the all-bright sequences of adjacent light sources, as shown in Fig. 3. The total emission sequences of the light source group within $T$ are given in (10), where "$\otimes$" denotes the Kronecker product and "$\oplus$" denotes modulo-two addition. The light source group broadcasts the total emission sequence cyclically with period $T$, and $T/4$ is the exposure time of the receiver.




Figure 3: The modulated signal waveform (LED1-LED4 over four consecutive $T/4$ slots).



$$\begin{aligned}
C_{LED1} &= \left([\,0\;0\;1\;1\,] \otimes [\tau_1^1, \tau_1^2, \ldots, \tau_1^{N_1}]\right) \oplus \left([\,1\;1\;0\;0\,] \otimes [s_1^1, s_1^2, \ldots, s_1^{N_2}]\right) \\
C_{LED2} &= \left([\,1\;0\;0\;1\,] \otimes [\tau_2^1, \tau_2^2, \ldots, \tau_2^{N_1}]\right) \oplus \left([\,0\;1\;1\;0\,] \otimes [s_2^1, s_2^2, \ldots, s_2^{N_2}]\right) \\
C_{LED3} &= \left([\,1\;1\;0\;0\,] \otimes [\tau_3^1, \tau_3^2, \ldots, \tau_3^{N_1}]\right) \oplus \left([\,0\;0\;1\;1\,] \otimes [s_3^1, s_3^2, \ldots, s_3^{N_2}]\right) \\
C_{LED4} &= \left([\,0\;1\;1\;0\,] \otimes [\tau_4^1, \tau_4^2, \ldots, \tau_4^{N_1}]\right) \oplus \left([\,1\;0\;0\;1\,] \otimes [s_4^1, s_4^2, \ldots, s_4^{N_2}]\right)
\end{aligned} \tag{10}$$
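As an illustration of (10), the sketch below generates the four emission sequences with the Kronecker product and modulo-two addition; the length-7 PN codes are placeholders of our own, not the codes used in the paper.

    import numpy as np

    N2 = 7
    S = np.ones(N2, dtype=int)              # all-bright sequence: all "1"
    tau = np.array([[1, 1, 1, 0, 0, 1, 0],  # illustrative length-7 PN codes
                    [0, 1, 1, 1, 0, 0, 1],
                    [1, 0, 1, 1, 1, 0, 0],
                    [0, 1, 0, 1, 1, 1, 0]])
    # T/4-slot masks from Eq. (10); info and all-bright masks are complementary
    info_mask = [[0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0]]
    bright_mask = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
    C = [np.kron(info_mask[i], tau[i]) ^ np.kron(bright_mask[i], S)
         for i in range(4)]
    # C[0] is the 28-chip sequence LED1 broadcasts cyclically with period T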
In addition, in an actual environment the number of light sources may be 4 or more, so the sequence lengths $N_1$ and $N_2$ can be adapted to the number of light sources in the environment, ensuring that there are enough IDs for positioning.

Figure 4: (a) Partial identification results calculated by algorithm 1. (b) Partial identification results calculated by algorithm 2.

3.2. LED light source identification
We select the IS mounted on a mobile phone as the receiver and align it with the light source array for a time $T$ to obtain 4 consecutive photos $G_m$, $m = 1, 2, 3, 4$, where $m$ indexes the photos. In this step, the acquired color images are first transformed into grayscale images and then into binary images. Next, we segment the binary images so as to process each light source in the captured images. Finally, we obtain the processed images $G_m'$ and pixel matrices $B_m^i(x, y)_{M \times N}$, $i, m = 1, 2, 3, 4$, where $(x, y)$ are the image coordinates of the pixels, $N$ is the number of pixels per row (i.e., horizontally), and $M$ is the number of pixels per column (i.e., vertically). After completing the above steps, we use an identification algorithm to extract the centroid coordinates of the light sources.
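A minimal sketch of this preprocessing step is shown below, using OpenCV for grayscale conversion, binarization, and connected-component segmentation; the file name, threshold, and area filter are illustrative assumptions, not the paper's settings.

    import cv2

    frame = cv2.imread("frame_0001.png")             # one captured photo G_m
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

    # Each sufficiently large connected component is one light source region;
    # its bounding box gives the sub-image B_m^i(x, y)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    regions = []
    for i in range(1, n):                            # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 50:                                # drop small noise blobs
            regions.append(binary[y:y + h, x:x + w])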
   Traditional identification algorithms either find the upper, lower, left, and right edges of the light source fringe image and take the midpoint as the imaging coordinate, or fit the circular edge of the light source and take the center of the fitted circle as the imaging coordinate. Both approaches suffer the errors illustrated in Fig. 2. Based on the design in section 3.1, we therefore propose two identification methods, summarized in Table 1 and Table 2: (1) Algorithm 1, the image method; (2) Algorithm 2, the fitting method.
   We detect the area of each light source in the processed image $G_m'$ and calculate its two-dimensional pixel coordinates. Taking LED1 as an example, the main steps of the two methods are described in algorithm 1 and algorithm 2, and some identification results are shown in Fig. 4.
Table 1
The process of image method

   Algorithm 1: Image method

   Input: LED1 images
   Output: LED1 centroid coordinates $(u_1', v_1')$
   1. Traverse the column pixels of any one $B_m^1(x, y)_{M \times N}$, $m = 1, 2, 3, 4$;
   2. Record the column $c_{l1}$ where the pixel value first becomes nonzero and the column $c_{l2}$ where the pixel value becomes 0 again;
   3. Traverse the row pixels of the $B_m^1(x, y)_{M \times N}$, $m = 1, 2, 3, 4$ selected in step 1;
   4. Record the row $r_{l1}$ where the pixel value first becomes nonzero and the row $r_{l2}$ where the pixel value becomes 0 again;
   5. Calculate the pixel sum:
$$I = \sum_{x = r_{l1}}^{r_{l2}} B_m^1(x, y)\Big|_{y = \frac{c_{l1} + c_{l2}}{2}} \tag{11}$$
   6. Take the centroid coordinates $\left(\frac{r_{l1} + r_{l2}}{2}, \frac{c_{l1} + c_{l2}}{2}\right)$ of the light source in the image where $I_{\max}$ occurs as the imaging coordinates $(u_1', v_1')$ of LED1.
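The sketch below is our Python rendering of algorithm 1 under stated assumptions: the four binarized sub-images of LED1 are already segmented, and the first and last lit column/row stand in for the "becomes 0 again" bounds of steps 2 and 4.

    import numpy as np

    def image_method(B_list):
        # B_list: the four binarized sub-images B_m^1, one per T/4 slot
        best_I, centroid = -1, None
        for B in B_list:
            cols = np.where(B.any(axis=0))[0]      # lit columns
            rows = np.where(B.any(axis=1))[0]      # lit rows
            if cols.size == 0:
                continue
            cl1, cl2 = cols[0], cols[-1]
            rl1, rl2 = rows[0], rows[-1]
            mid = (cl1 + cl2) // 2
            I = B[rl1:rl2 + 1, mid].sum()          # Eq. (11): middle-column sum
            if I > best_I:                         # keep the all-bright frame,
                best_I = I                         # i.e. the one with I_max
                centroid = ((rl1 + rl2) / 2, (cl1 + cl2) / 2)
        return centroid                            # (u_1', v_1')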



4. Experiment and analysis
Considering the strictness of synchronization, and to simplify the experiment, we use a smartphone to capture video at a frame rate of 24 fps and convert the video into frame images, which are then processed in Matlab. All the key system parameters are provided in Table 3.
   To verify the proposed methods, we use the camera of a smartphone (iPhone 12) to capture video of the light sources carrying the information sequence described in section 3.1 at the test point (25, 25, 10). With the height of the mobile phone held constant at the test point, its posture is adjusted so that each LED images as close to a circular spot as possible. We then convert the video into 12,000 light source images and use the two algorithms proposed in section 3.2 to calculate the LED centroid coordinates from every 4 consecutive images. Under the high shutter speed of the camera the background is almost completely black, which has nearly no effect on the light sources, but some noise interference is still introduced inevitably. The few coordinate points with large deviations in the upper left corner of Fig. 5 are caused by such noise; these large coordinate deviations come from interfering light spots in individual light source frames. Therefore, we apply the $3\sigma$ criterion here to remove these isolated noisy data. As shown in the enlarged part of Fig. 5, after removing the noisy data the number of samples is 3000, and it can be seen that all 4 methods deviate from the ideal imaging coordinate.
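The $3\sigma$ rejection used here can be sketched in a few lines; the coordinate samples below are synthetic stand-ins for the identified centroids, not measured data.

    import numpy as np

    coords = np.random.normal([545, 690], [0.5, 0.4], size=(3100, 2))
    mean, std = coords.mean(axis=0), coords.std(axis=0)
    keep = np.all(np.abs(coords - mean) <= 3 * std, axis=1)  # 3-sigma criterion
    clean = coords[keep]                                     # samples kept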
Table 2
The process of fitting method

   Algorithm 2: Fitting method

   Input: LED1 images
   Output: LED1 centroid coordinates $(u_1', v_1')$
   1. Same as steps 1-5 of algorithm 1;
   2. Add and subtract a range threshold of 10 pixels to the boundary values $[c_{l1}, c_{l2}]$ and $[r_{l1}, r_{l2}]$ of the light source in the image where $I_{\max}$ occurs, and set the pixel values outside the expanded window $[c_{l1} - 10, c_{l2} + 10] \times [r_{l1} - 10, r_{l2} + 10]$ to 0;
   3. Use the Sobel operator to extract the edge points $(x_{data}, y_{data})$ of the light source processed in step 2;
   4. Find the circle
$$F(r) = (x_{data} - r(1))^2 + (y_{data} - r(2))^2 - r(3)^2 \tag{12}$$
   that matches the edge points in step 3 by least squares fitting, and take the centre of the circle $(r(1), r(2))$ as the imaging coordinates $(u_1', v_1')$ of LED1.



   Fig. 5 compares the centroid coordinates of LED1 obtained by the 4 methods. The black upward triangles are the imaging centroid coordinates of LED1 obtained by directly identifying the stripe images with the fitting method, without the signal of section 3.1; the blue downward triangles are those obtained by directly identifying the stripe images with the image method, also without the signal of section 3.1; the yellow stars are the imaging centroid coordinates obtained by algorithm 1 of section 3.2 after loading the signal mechanism of section 3.1; and the red squares are those obtained by algorithm 2 of section 3.2 after loading the signal mechanism of section 3.1.

4.1. Identification error
In order to see the errors of the different identification methods more clearly, this paper uses the Root Mean Square Error (RMSE) as the identification error:
$$error = \sqrt{(u_{i'} - u_{t'})^2 + (v_{i'} - v_{t'})^2} \tag{13}$$
where $(u_{i'}, v_{i'})$ are the imaging coordinates of LED$i$ measured by the above 4 methods at the test point, and $(u_{t'}, v_{t'})$ are the ideal imaging coordinates at the test point.
   The cumulative distribution function (CDF) is the integral of the probability density function and describes the probability distribution of the identification error. Taking LED1 as an example, the error CDFs of the different identification methods are shown in Fig. 6 and Fig. 7.
Table 3
Parameters of the experiment

   Parameter                                      Value

   Test space unit size ($L \times W \times H$)   $50 \times 50 \times 50\,\mathrm{cm}^3$
   LED diameter                                   2.2 cm
   LED1 position                                  (10, 40, 50)
   LED2 position                                  (40, 40, 50)
   LED3 position                                  (40, 10, 50)
   LED4 position                                  (10, 10, 50)
   Frequency of LED transmitters                  3 kHz
   Camera height                                  10 cm
   Camera resolution                              1920 × 1080
   ISO                                            100
   Exposure time                                  1/3000 s
   Focal length of the lens $f$                   26 mm
   Image sensor                                   Sony IMX686
   Test point ($L, W, H$)                         (25, 25, 10)



Figure 5: The centroid coordinates of LED1 obtained using different methods (column pixel value vs. row pixel value; legend: fringe coordinates 1, fringe coordinates 2, algorithm 1, algorithm 2, and the ideal imaging coordinate).


Fig. 6 shows the deviation between the centroid coordinates of the directly identified fringe images and the ideal imaging coordinates. Fig. 7 shows the deviation between the centroid coordinates obtained by the algorithms proposed in section 3.2 and the ideal imaging coordinates.
Figure 6: The coordinate deviation of fringe images (CDF of the identification error in pixels for direct fitting and direct image processing).


Figure 7: The coordinate deviation of algorithm 1 and algorithm 2 (CDF of the identification error in pixels).


   It can be seen that the coordinate error of the light sources identified by directly fitting the stripe images is 0-50 pixels, with a 57% probability that the error exceeds 10 pixels. The error of the direct image method is 0-60 pixels, with a 68% probability of exceeding 10 pixels; both errors are large. Using the signal mechanism proposed in section 3.1 to identify the all-bright spot, the errors of both algorithms are within about 0-3 pixels. Among them, algorithm 1 computes directly on image pixels, and pixel coordinates are integers, so many error values repeat.
   We fit the coordinate deviation distributions obtained by the above four methods. Fig. 8 shows the abscissa deviation distributions fitted to the histograms of the coordinate differences for the 4 methods, and Fig. 9 shows the corresponding ordinate deviation distributions. It can be seen that the deviations of the actual imaging coordinates of the light sources follow the Gaussian distribution in (14), with the parameters given in Table 4.

Table 4
Gaussian distribution parameters

   Figure       $y_0$         $A_0$      $x_c$      $w$

   Fig. 8(a)    0.00128       0.1575     -0.90724   2.50698
   Fig. 9(a)    0.00413       0.17473    1.15132    10.66326
   Fig. 8(b)    0.00418       0.12752    -0.62293   1.45571
   Fig. 9(b)    0.00119       0.10844    4.80804    17.96898
   Fig. 8(c)    0.00144       0.18235    -0.10493   0.21353
   Fig. 9(c)    0.00082384    0.68534    0.43672    0.28756
   Fig. 8(d)    -0.00245      0.16772    0.16288    0.24843
   Fig. 9(d)    0.01512       0.54942    0.48163    0.11644




Figure 8: The abscissa deviation value (histograms with fitted Gaussian curves, panels (a)-(d)).
Figure 9: The ordinate deviation value (histograms with fitted Gaussian curves, panels (a)-(d)).


                                                               (π‘₯βˆ’π‘₯𝑐 )2
                                          𝑦 = 𝑦0 + 𝐴0 π‘’βˆ’         2𝑀2                              (14)
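Fitting (14) to a deviation histogram can be done with SciPy as sketched below; the deviation samples are synthetic placeholders, not the measured data behind Table 4.

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, y0, A0, xc, w):
        return y0 + A0 * np.exp(-(x - xc) ** 2 / (2 * w ** 2))

    dev = np.random.normal(-0.1, 0.21, 3000)       # synthetic deviations
    counts, edges = np.histogram(dev, bins=40, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    popt, _ = curve_fit(gauss, centers, counts,
                        p0=[0.0, counts.max(), 0.0, dev.std()])
    print(popt)                                    # y0, A0, xc, w as in Table 4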



4.2. Positioning error
In our positioning system, the main source of positioning error is the quantization error caused by the image identification of the LEDs described in section 4.1, so the ideal imaging coordinates of the light sources need to be rewritten as (15):
$$\begin{pmatrix} u_{i'} \\ v_{i'} \end{pmatrix} = \begin{pmatrix} u_{t'} \\ v_{t'} \end{pmatrix} + \begin{pmatrix} n_{px} \\ n_{py} \end{pmatrix} \tag{15}$$
where $n_{px}(x_c, w^2)$ and $n_{py}(x_c, w^2)$ are mutually independent Gaussian noises on the horizontal and vertical axes of the light source on the imaging plane; their means and standard deviations are given in Table 4.
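A minimal simulation in the spirit of (15) is sketched below: Gaussian deviations are added to the ideal image geometry and the least-squares solver of section 2 is rerun. For brevity the radial offsets $d_{i'}$ are perturbed instead of $u$ and $v$ separately, and the noise scale is a placeholder rather than the fitted Table 4 values.

    import numpy as np

    rng = np.random.default_rng(0)
    led_xy = np.array([[10.0, 40.0], [40.0, 40.0], [40.0, 10.0], [10.0, 10.0]])
    P_true, H, f = np.array([25.0, 25.0]), 40.0, 2.6

    def solve_xy(d):
        # least-squares solver of Eqs. (7)-(9), as in the section 2 sketch
        r2 = f ** 2 + d ** 2
        M = 2.0 * (led_xy[1:] - led_xy[0])
        D = (np.sum(led_xy[1:] ** 2, axis=1) - np.sum(led_xy[0] ** 2)
             + H ** 2 * (r2[0] - r2[1:]) / f ** 2)
        return np.linalg.lstsq(M, D, rcond=None)[0]

    d_ideal = f * np.linalg.norm(led_xy - P_true, axis=1) / H
    errors = [np.linalg.norm(solve_xy(d_ideal + rng.normal(0, 0.02, 4)) - P_true)
              for _ in range(3000)]
    print(np.percentile(errors, 90))               # 90th-percentile error (cm)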
   Taking the identification error into account, we run positioning tests on the data obtained by each method at the test point and compute the positioning error as the RMSE between the estimated positions and the test point. The statistical results are shown in Fig. 10.
   It can be seen that algorithm 1 and algorithm 2 perform better than the direct solutions. More than 90% of the positioning errors are within 1.69 cm when the directly fitted stripe images are used, and within 1.77 cm when the stripe coordinates identified directly by the image method are used. In contrast, more than 90% of the positioning errors are within 1.26 cm for algorithm 1 and within 1.25 cm for algorithm 2. Both algorithms proposed in this paper therefore achieve high accuracy.


Figure 10: The cumulative distribution function (CDF) of the 2-D positioning error in cm (fringe 1, fringe 2, algorithm 1, and algorithm 2).




5. Conclusion
Indoor visible light positioning based on a single camera suffers large errors when fringe coordinates calculated directly are used as the imaging centroid coordinates of the light sources. In this paper, we proposed a light source emission signal mechanism combining an information sequence with an all-bright sequence, and we compared and fitted the imaging deviations of different methods. The results show that the imaging deviations obey a Gaussian distribution. We then simulated positioning with the identification error taken into account. The results show that the two identification methods based on this signal mechanism can reduce the positioning error to within 1.3 cm, a large improvement in positioning accuracy over identifying the light sources directly.
   This paper tested positioning repeatedly at the test point, and the statistical results show the superiority of the proposed signal in positioning accuracy. In future work, we will consider transplanting the positioning system into a real environment and accounting for non-positioning light sources (such as sunlight) there, so as to achieve random and dynamic positioning.
6. Acknowledgments
This research was supported by the National Natural Science Foundation of China (62171075).


References
 [1] C. Qiu, M. W. Mutka, Crisp: cooperation among smartphones to improve indoor position
     information, Wireless Networks 24 (2018) 867 – 884. URL: http://dx.doi.org/10.1007/
     s11276-016-1373-1.
 [2] M. Smieja, Zigbee phase shift measurement approach to mobile inspection robot indoor
     positioning techniques, Diagnostyka 19 (2018) 101 – 107. URL: http://dx.doi.org/10.29354/
     diag/94498.
 [3] M. D. Redzic, C. Laoudias, I. Kyriakides, Image and wlan bimodal integration for indoor
     user localization, IEEE Transactions on Mobile Computing 19 (2020) 1109 – 1122. URL:
     http://dx.doi.org/10.1109/TMC.2019.2903044.
 [4] L. Pei, J. Liu, R. Guinness, Y. Chen, T. KrΓΆger, R. Chen, L. Chen, The evaluation of
     wifi positioning in a bluetooth and wifi coexistence environment, in: 2012 Ubiquitous
     Positioning, Indoor Navigation, and Location Based Service (UPINLBS), 2012, pp. 1–6.
     doi:10.1109/UPINLBS.2012.6409768.
 [5] K. Paszek, D. Grzechca, A. Becker, Design of the uwb positioning system simulator for
     los/nlos environments, Sensors 21 (2021). doi:10.3390/s21144757.
 [6] Y.-F. Hsu, C.-S. Cheng, W.-C. Chu, Compass: An active rfid-based real-time indoor
     positioning system, Human-centric Computing and Information Sciences 12 (2022) 17921–
     17942. URL: http://dx.doi.org/10.22967/HCIS.2022.12.007.
 [7] H. Cheng, C. Xiao, Y. Ji, J. Ni, T. Wang, A single led visible light positioning system based
     on geometric features and cmos camera, IEEE Photonics Technology Letters 32 (2020)
     1097–1100. doi:10.1109/LPT.2020.3012476.
 [8] Y. Zhuang, L. Hua, L. Qi, J. Yang, P. Cao, Y. Cao, Y. Wu, J. Thompson, H. Haas, A survey of
     positioning systems using visible led lights, IEEE Communications Surveys Tutorials 20
     (2018) 1963–1988. doi:10.1109/COMST.2018.2806558.
 [9] A. Raza, L. Lolic, S. Akhter, M. Liut, Comparing and evaluating indoor positioning
     techniques, in: 2021 International Conference on Indoor Positioning and Indoor Navigation
     (IPIN), 2021, pp. 1–8. doi:10.1109/IPIN51156.2021.9662632.
[10] K. Abe, T. Sato, H. Watanabe, H. Hashizume, M. Sugimoto, Smartphone positioning using
     an ambient light sensor and reflected visible light, in: 2021 International Conference on
     Indoor Positioning and Indoor Navigation (IPIN), 2021, pp. 1–8. doi:10.1109/IPIN51156.
     2021.9662520.
[11] J. Jin, L. Feng, J. Wang, D. Chen, H. Lu, Signature codes in visible light positioning, IEEE
     Wireless Communications 28 (2021) 178 – 184. URL: http://dx.doi.org/10.1109/MWC.001.
     2000540.
[12] W. Guan, S. Wen, H. Zhang, L. Liu, A novel three-dimensional indoor localization algorithm
     based on visual visible light communication using single led, in: 2018 IEEE International
     Conference on Automation, Electronics and Electrical Engineering (AUTEEE), 2018, pp.
     202–208. doi:10.1109/AUTEEE.2018.8720798.
[13] H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, Z. Chen, A fast and
     high-accuracy real-time visible light positioning system based on single led lamp with a
     beacon, IEEE Photonics Journal 12 (2020) 1–12. doi:10.1109/JPHOT.2020.3032448.
[14] W. Guan, L. Huang, B. Hussain, C. P. Yue, Robust robotic localization using visible light
     positioning and inertial fusion, IEEE Sensors Journal 22 (2022) 4882–4892. doi:10.1109/
     JSEN.2021.3053342.
[15] Y. Wu, X. Liu, W. Guan, B. Chen, X. Chen, C. Xie, High-speed 3d indoor localization
     system based on visible light communication using differential evolution algorithm, Op-
     tics Communications 424 (2018) 177–189. doi:https://doi.org/10.1016/j.optcom.
     2018.04.062.